Dataset schema (field name, type, and string-length range):

| Field | Type | Min length | Max length |
|---|---|---|---|
| id | string | 10 | 10 |
| title | string | 7 | 231 |
| abstract | string | 3 | 2.43k |
| authors | string | 5 | 21.5k |
| published_date | string | 20 | 20 |
| link | string | 33 | 34 |
| markdown | string | 133 | 1.92M |
2307.12085
Limiting distribution of dense orbits in a moduli space of rank $m$ discrete subgroups in $(m+1)$-space
We study the limiting distribution of dense orbits of a lattice subgroup $\Gamma\le \text{SL}(m+1,\mathbb{R})$ acting on $H\backslash\text{SL}(m+1,\mathbb{R})$, with respect to a filtration of growing norm balls. The novelty of our work is that the groups $H$ we consider have infinitely many non-trivial connected components. For a specific such $H$, the homogeneous space $H\backslash G$ identifies with $X_{m,m+1}$, a moduli space of rank $m$-discrete subgroups in $\mathbb{R}^{m+1}$. This study is motivated by the work of Shapira-Sargent who studied random walks on $X_{2,3}$.
Michael Bersudsky, Hao Xing
2023-07-22T14:26:54Z
http://arxiv.org/abs/2307.12085v2
Limiting distribution of dense orbits in a moduli space of rank \(m\) discrete subgroups in \((m+1)\)-space ###### Abstract Consider the moduli space \(X_{m,m+1}\) of rank-\(m\) discrete subgroups of covolume equal to one in \(\mathbb{R}^{m+1}\). There is a natural action of \(\mathrm{SL}(m+1,\mathbb{R})\) on \(X_{m,m+1}\), and it turns out that for every lattice subgroup \(\Gamma\leq\mathrm{SL}(m+1,\mathbb{R})\), each orbit \(x_{0}.\Gamma\) is dense in \(X_{m,m+1}\). In this paper we compute the limiting distribution of these orbits with respect to a filtration of growing norm balls, where the norm is given by the sum of squares. The main motivation for this result comes from the work of Sargent and Shapira where they studied random walks in \(X_{2,3}\). Another motivation for our work is to extend the scope of applications of the duality principle in homogeneous dynamics. The moduli space \(X_{m,m+1}\) identifies with \(H\backslash\mathrm{SL}(m+1,\mathbb{R})\), and the duality principle recasts the above problem into the problem of establishing certain volume growth properties of growing skewed balls in \(H\) and proving ergodic theorems for the left action of \(H\) on \(\mathrm{SL}(m+1,\mathbb{R})/\Gamma\) along the skewed balls. Specifically, we use the duality principle as developed in the work of Gorodnik and Weiss. Our ergodic theorems are proven by applying theorems of Shah building on the linearisation technique. Previously, the duality principle was not applied in the setting where \(H\) has infinitely many non-compact connected components. Our general result is in the setting where \(H\) is a certain subgroup of a minimal parabolic group of \(\mathrm{SL}(m+1,\mathbb{R})\) such that in the Levi-component there is a lattice. ## 1 Introduction In this paper we study the asymptotic distributional properties of the action of a lattice \(\Gamma\leq\mathrm{SL}(m+1,\mathbb{R})\) in the space \(X_{m,m+1}\) of normalized \(m\)-dimensional discrete subgroups of \(\mathbb{R}^{m+1}\) with respect to a filtration given by growing norm balls (see precise definitions below). Such a research direction is a natural continuation1 of the study initiated in [20] which considers random walks on \(X_{2,3}\). See also the more recent work [10] which generalizes [20]. Another motivation for our work is to extend the scope of applications of the duality principle in homogeneous dynamics to the ergodic theory of lattice subgroups, see Section 1.1 below for more details. We start with our results in \(X_{m,m+1}\). In what follows, \(m\) is a natural number strictly larger than \(1\). Footnote 1: This work started during the first-named author’s Ph.D. studies under the guidance of Uri Shapira who suggested the problem about the limiting distribution in \(X_{m,m+1}\) of the action of a lattice \(\Gamma\leq\mathrm{SL}(m+1,\mathbb{R})\) with respect to growing norm balls, which was inspired by [20]. We say that \(\Lambda\subset\mathbb{R}^{m+1}\) is an \(m\)-lattice if \(\Lambda\) is the \(\mathbb{Z}\)-span of a tuple of linearly independent vectors \(v_{1},v_{2},...,v_{m}\in\mathbb{R}^{m+1}\), that is, \[\Lambda:=\mathrm{Span}_{\mathbb{Z}}\{v_{1},v_{2},...,v_{m}\}.\] For \(\Lambda\) we let \[\mathrm{Cov}(\Lambda):=\sqrt{\det(\langle v_{i},v_{j}\rangle)},\]
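The covolume above is a plain Gram-determinant computation; here is a minimal numerical sketch (the function name and example are ours, not the paper's):

```python
import numpy as np

def covolume(basis):
    """Covolume of the m-lattice spanned by the rows of `basis`
    (m linearly independent vectors in R^{m+1}), computed as
    sqrt(det(G)) with Gram matrix G_ij = <v_i, v_j>."""
    basis = np.asarray(basis, dtype=float)
    gram = basis @ basis.T            # m x m matrix of inner products
    return float(np.sqrt(np.linalg.det(gram)))

# Two orthonormal vectors span a 2-lattice of covolume 1 in R^3,
# i.e. a point of the moduli space X_{2,3}.
print(covolume([[1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0]]))    # -> 1.0
```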
2310.07721
Optimizing the concentration ratio of multi-faceted focusing heliostats
This technical note aims at optimizing the concentration ratio of multi-faceted focusing heliostats implemented into a solar tower power plant. The ideal shape of a heliostat located off-axis in the field is known to be the local section of a fictitious paraboloid whose parameters are varying continuously with the Sun angular position. We describe an optimization procedure applicable to those heliostats. The flux densities formed at the solar receiver and the achievable concentrating ratios are computed using an improved convolution algorithm. It is shown that the optimized heliostat shape can produce typical concentration gains of approximately 10%, even when the heliostats reflect the Sun under large incidence angles.
F. Henault
2023-08-11T19:15:20Z
http://arxiv.org/abs/2310.07721v1
# Optimizing the concentration ratio of multi-faceted focusing heliostats ###### Abstract This technical note aims at optimizing the concentration ratio of multi-faceted focusing heliostats implemented into a solar tower power plant. The ideal shape of a heliostat located off-axis in the field is known to be the local section of a fictitious paraboloid whose parameters are varying continuously with the Sun angular position. We describe an optimization procedure applicable to those heliostats. The flux densities formed at the solar receiver and the achievable concentrating ratios are computed using an improved convolution algorithm. It is shown that the optimized heliostat shape can produce typical concentration gains of approximately 10%, even when the heliostats reflect the Sun under large incidence angles. Solar concentrator; Heliostat; Flux density; Concentration ratio; Optimization ## 1 Introduction It is well known that the ideal shape of a focusing heliostat in a solar tower power plant is the local section of a fictitious paraboloid whose focus is located at the centre of the solar receiver, and whose optical axis is parallel to the Sun vector **S** at a given time [1]. Consequently, the ideal shape of the heliostat changes continuously with the time of the day and the day of the year. This drawback may be removed by defining a "Sun reference position" **S\({}_{\text{0}}\)** from which the heliostat parameters are fixed. Such an improvement only involves slight re-alignments of the tilt angles of the heliostat mirrors around the horizontal and vertical axes, so that they become tangent to the ideal paraboloid shape. An optimization procedure applicable to multi-faceted focusing heliostats is described in Section 2. The flux densities formed at the solar receiver and the achievable concentrating ratios are computed using an improved convolution algorithm (Section 3). It is shown that the optimized heliostat shape can produce gains of approximately 10% in terms of concentration ratio. A brief conclusion is drawn in Section 4. ## 2 Principle ### Solar tower plant configuration Let us consider the case of a solar tower power plant whose general configuration is depicted in Figure 1-A. Two main coordinate systems are defined: * The X'Y'Z' reference frame attached to the solar receiver with X'-axis directed from South to North, Y'-axis from East to West, and Z'-axis from Nadir to Zenith, * The XYZ reference frame attached to an individual heliostat with X its optical axis and YZ its lateral dimensions along which its geometry is defined (see Figure 1-B and Table 1). Three vectors are defined in the X'Y'Z' reference frame (Figure 1-A): * **S** is a unitary vector directed to the centre of the moving Sun, * **R** is the unitary target vector directed from the heliostat centre to the solar receiver, * **N** is the bisecting vector between both previous ones. The vectors **S**, **R** and **N** obey the Snell-Descartes law for reflection, which writes in vectorial form as: \[\mathbf{S}+\mathbf{R}=2\,(\mathbf{S}\cdot\mathbf{N})\,\mathbf{N}=2\cos i\,\mathbf{N}\,, \tag{1}\] with \(i\) the Sun incidence angle. The main employed parameters are summarized in Table 1. We consider the case of a heliostat located at coordinates (86.6, 50., 0.) expressed in meters in the X'Y'Z' reference frame.
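The reflection law of Eq. (1) determines **N** as the normalised bisector of **S** and **R**; a small sketch, assuming unit-vector inputs (the closed form reappears as Eq. (2) below, and the example vectors are hypothetical):

```python
import numpy as np

def bisecting_normal(s, r):
    """Unit normal N bisecting the unit vectors S and R; normalising
    S + R uses |S + R| = sqrt(2 (1 + S.R)), consistent with Eq. (1)."""
    s, r = np.asarray(s, float), np.asarray(r, float)
    return (s + r) / np.sqrt(2.0 * (1.0 + np.dot(s, r)))

def incidence_angle_deg(s, n):
    """Sun incidence angle i on the heliostat, from cos(i) = S.N."""
    return float(np.degrees(np.arccos(np.clip(np.dot(s, n), -1.0, 1.0))))

# Hypothetical unit vectors in the X'Y'Z' frame, for illustration only:
s = np.array([0.6, 0.0, 0.8])        # toward the Sun
r = np.array([-0.866, -0.5, 0.0])    # from the heliostat toward the receiver
n = bisecting_normal(s, r)
print(n, incidence_angle_deg(s, n))
```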
It may be noted that the distance \(d\) from the heliostat to the solar receiver is kept equal to 100 meters and that the heliostat and the solar receiver are located at the same altitude along the Z'-axis, which is considered as the worst and most demanding case. The heliostat is made of \(m\times n\) identical spherical modules of focal length \(f=d=100\) m. This is a simplified version of the focusing heliostats equipping the solar tower power plant in Targassonne, France. \begin{table} \begin{tabular}{|c|c|c|c|} \hline **Parameter** & **Symbol** & **Value** & **Unit** \\ \hline Target vector from heliostat to receiver & **R** & (86.6, 50., 0.) & m \\ Distance from heliostat to receiver & \(d\) & 100 & m \\ Incidence angle on solar receiver & \(\beta\) & 30 & degrees \\ Heliostat width along Y-axis & \(w\) & 3.4 & m \\ Heliostat height along Z-axis & \(h\) & 3. & m \\ Number of heliostat modules & \(m\times n\) & \(4\times 2\) & \\ Module width along Y-axis & \(w_{\text{M}}\) & 0.7 & m \\ Module height along Z-axis & \(h_{\text{M}}\) & 1.4 & m \\ Module focal length & \(f\) & \(80\leq f\leq 120\) & m \\ Solar receiver diameter & \(d^{\prime}\) & 1.2 & m \\ Mean Sun angles in azimuth and height & \((a_{0},\,h_{0})\) & \((0.,44.63)\) & degrees \\ Mean Sun incidence angle & \(i_{0}\) & 25.98 & degrees \\ \hline \end{tabular} \end{table} Table 1: Main parameters of the solar power plant and of the focusing heliostat. Figure 1: Solar tower power plant configuration (A). The geometry of the heliostats is shown on the bottom scheme (B). ### Optimized off-axis heliostat The optimized shape of the heliostat is named "off-axis" hereafter. It differs from the classical "spherical" shape, where the tilt angles of the modules around the Y and Z axes are adjusted in order to coincide with a monolithic sphere of focal length \(f\) equal to the distance \(d\) = 100 m separating the heliostat from the solar receiver. Here it is assumed that all heliostat modules are identical. Then the sole degrees of freedom available for optimizing the off-axis heliostat are the tilt angles of each individual module around the Y and Z axes. The employed optimization procedure is as follows: 1. We firstly define a "Sun reference position" that is assumed to be an averaged position all over the year. It is assumed to be reached at noon on the autumnal equinox day. It corresponds to the Sun reference vector \(\mathbf{S_{0}}\) plotted in Figure 1. 2. Knowing both \(\mathbf{S_{0}}\) and the target vector \(\mathbf{R}\) (that is unchanged) enables determining the unitary vector \(\mathbf{N_{0}}\) normal to the heliostat for that Sun position; inverting Eq. 1 yields \[\mathbf{N_{0}}=\big(\mathbf{S_{0}}+\mathbf{R}\big)\Big/\sqrt{2\,\big(1+\mathbf{S_{0}}\cdot\mathbf{R}\big)}\,.\] (2) Then the reference incidence angle on the heliostat is equal to \(i_{0}=\arccos\big(\mathbf{S_{0}}\cdot\mathbf{N_{0}}\big)\). 3. From the knowledge of the vectors \(\mathbf{S_{0}}\), \(\mathbf{N_{0}}\) and the incidence angle \(i_{0}\), the tilt angles \(a_{\rm i,j}\) and \(h_{\rm i,j}\) of each heliostat module are evaluated using a set of analytical formulas defined by Eqs. 3. These formulas are strictly equivalent to those presented in Ref. [1]. Alternatively, these angles could be determined with the help of standard ray-tracing software such as Zemax™.
\[\begin{split} a_{\rm i,j}&=\frac{y_{\rm i,j}}{2d}\Bigg(\frac{\cos^{2}i_{0}\cos^{2}\phi+\sin^{2}\phi}{\cos i_{0}}\Bigg)-\frac{z_{\rm i,j}}{4d}\,\frac{\sin 2\phi\sin^{2}i_{0}}{\cos i_{0}}\\ h_{\rm i,j}&=-\frac{y_{\rm i,j}}{4d}\,\frac{\sin 2\phi\sin^{2}i_{0}}{\cos i_{0}}+\frac{z_{\rm i,j}}{2d}\Bigg(\frac{\cos^{2}i_{0}\sin^{2}\phi+\cos^{2}\phi}{\cos i_{0}}\Bigg)\end{split}\] (3) where \(y_{\rm i,j}\) and \(z_{\rm i,j}\) are the coordinates of each module centre and \(\phi=\arctan\big(s_{0\rm Y},s_{0\rm Z}\big)\) with \(\big(s_{0\rm Y},s_{0\rm Z}\big)\) the direction cosines of the reference Sun vector \(\mathbf{S_{0}}\) along the Y and Z axes. All of them are expressed in the local heliostat reference frame XYZ. 4. Finally, the flux density maps formed by the off-axis heliostat in the receiver plane Y'Z' are computed with a double FFT algorithm described in Ref. [2]. ## 3 Numerical results The values of the optimized angles \(a_{\rm i,j}\) and \(h_{\rm i,j}\) are given in Table 2 for each heliostat module, and compared with those of the spherical heliostat. The flux densities formed at the solar receiver are computed using an improved convolution algorithm for both the spherical and off-axis heliostat cases. They are illustrated by false-colour views in Figure 2. The angular radiance law of the Sun was assumed to follow Jose's formulas [3]. Cross-checking these results with those obtained using a Grid ray-tracing (GRT) model leads to RMS error differences of about 1%, which are comparable to those presented in Ref. [2]. Here two different cases are distinguished: 1. Case of a single heliostat located at the coordinates (86.6, 50., 0.) expressed in meters in the X'Y'Z' reference frame, 2. Case of a couple of heliostats, symmetric with respect to the X'-axis and located respectively at the coordinates (86.6, 50., 0.) and (86.6, -50., 0.) meters in the X'Y'Z' reference frame. Then the flux density maps formed by each heliostat are simply added one to the other. Case B is the most commonly encountered, since heliostat fields generally present a symmetry with respect to the X'-axis. The concentration ratios achieved by the spherical and off-axis heliostats are presented in Table 3 for both cases A and B. It shows a net advantage of about 10% in terms of concentrating power for the off-axis heliostats. This gain occurs around the middle of the day, typically from 10h00 to 14h00 GMT.
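For completeness, a direct transcription of Eq. (3); the module centre used in the demo call is our guess for illustration, not a value taken from the paper:

```python
import numpy as np

def module_tilts(y, z, d, i0_deg, phi_deg):
    """Tilt angles (a_ij, h_ij), in radians, of a module centred at
    (y, z) in the heliostat XYZ frame, following Eq. (3); d is the
    heliostat-to-receiver distance, i0 the reference incidence angle
    and phi the angle built from the direction cosines of S0."""
    i0, phi = np.radians(i0_deg), np.radians(phi_deg)
    ci = np.cos(i0)
    cross = np.sin(2 * phi) * np.sin(i0) ** 2 / ci   # shared cross term
    a = (y / (2 * d)) * (ci ** 2 * np.cos(phi) ** 2 + np.sin(phi) ** 2) / ci \
        - (z / (4 * d)) * cross
    h = -(y / (4 * d)) * cross \
        + (z / (2 * d)) * (ci ** 2 * np.sin(phi) ** 2 + np.cos(phi) ** 2) / ci
    return a, h

# d and i0 follow Table 1; the module centre (1.275, 0.75) m is hypothetical.
print(module_tilts(1.275, 0.75, 100.0, 25.98, 90.0))
```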
\begin{table} \begin{tabular}{|c|c c|c c|c c|c|} \cline{2-7} \multicolumn{1}{c|}{} & \multicolumn{2}{c|}{**Spherical heliostat**} & \multicolumn{2}{c|}{**Off-axis heliostat**} & \multicolumn{2}{c|}{**Angles difference**} & \multicolumn{1}{c}{} \\ \hline Indices & Tilt wrt Z & Tilt wrt Y & Tilt wrt Z & Tilt wrt Y & Tilt wrt Z & Tilt wrt Y & Unit \\ i,j & \(a_{\mathrm{i,j}}\) & \(h_{\mathrm{i,j}}\) & \(a_{\mathrm{i,j}}\) & \(h_{\mathrm{i,j}}\) & \(a_{\mathrm{i,j}}\) & \(h_{\mathrm{i,j}}\) & \\ \hline 1, 1 & 12.75 & 7.50 & 14.19 & 8.34 & 1.44 & 0.84 & mrad \\ 2, 1 & 4.25 & 7.50 & 5.20 & 7.55 & 0.95 & 0.05 & mrad \\ 3, 1 & -4.25 & 7.50 & -3.85 & 6.78 & 0.40 & -0.72 & mrad \\ 4, 1 & -12.75 & 7.50 & -12.94 & 6.04 & -0.19 & -1.46 & mrad \\ 1, 2 & 12.75 & -7.50 & 12.75 & -5.89 & 0.00 & 1.61 & mrad \\ 2, 2 & 4.25 & -7.50 & 3.80 & -6.70 & -0.45 & 0.80 & mrad \\ 3, 2 & -4.25 & -7.50 & -5.19 & -7.49 & -0.94 & 0.01 & mrad \\ 4, 2 & -12.75 & -7.50 & -14.24 & -8.24 & -1.49 & -0.74 & mrad \\ \hline \end{tabular} \end{table} Table 2: Tilt angles of the spherical and off-axis heliostat modules and their relative differences. \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \cline{2-6} \multicolumn{1}{c|}{} & \multicolumn{5}{c|}{09-23-2022, day time GMT} \\ \cline{2-6} \multicolumn{1}{c|}{} & T = 09h00 & T = 10h30 & T = 12h00 & T = 13h30 & T = 15h00 \\ \hline Spherical heliostat x 1 & 38.1 & 37.7 & 32.9 & 16.4 & 6.5 \\ \hline Spherical heliostat x 2 & 44.6 & 54.1 & 65.9 & 54.1 & 44.6 \\ \hline Off-axis heliostat x 1 & 32.0 & 35.7 & 35.7 & 24.3 & 6.5 \\ \hline Off-axis heliostat x 2 & 38.5 & 60.0 & 71.4 & 60.0 & 38.5 \\ \hline \end{tabular} \end{table} Table 3: Concentration ratios achieved by both the spherical and off-axis heliostats. Top rows: case of the single heliostat. Bottom rows: case of two heliostats symmetric with respect to the X'-axis. Figure 2: Flux densities formed at the solar receiver. (A) Case of the spherical heliostat. (B) Case of the optimized off-axis heliostat. Red circles indicate the diameter of the ideally focused Sun image. ## 4 Conclusion This short contribution considers the case of a multi-faceted heliostat focusing sunrays at the central receiver of a solar tower power plant. It presents a solution to improve the concentrating ratio of the heliostat in Sun-tracking mode all over daytime operation. The optimization process consists in turning the shape of a classical spherical heliostat into an off-axis shape profile. Assuming that all heliostat modules are identical, the available degrees of freedom for optimizing the spherical heliostat are the tilt angles of each of its individual modules. The optimization procedure firstly defines a Sun reference position in the sky, then slightly modifies these angles so that the modules become tangent to an ideal parabolic section. A Fourier transform convolution model is used to evaluate the irradiance maps at the solar receiver and the achieved concentration ratios. Such an "off-axis" solution enables increasing the concentrating ratio of the heliostats by about 10%. This procedure may be extended to the entire heliostat field, thus maximizing its concentration power at the solar receiver.
2308.12600
PoseSync: Robust pose based video synchronization
Pose based video synchronization can have applications in multiple domains such as gameplay performance evaluation, choreography or guiding athletes. The subject's actions could be compared and evaluated against those performed by professionals side by side. In this paper, we propose an end to end pipeline for synchronizing videos based on pose. The first step crops the region where the person is present in the image, followed by pose detection on the cropped image. This is followed by application of Dynamic Time Warping (DTW) on angle/distance measures between the pose keypoints, leading to a scale and shift invariant pose matching pipeline.
Rishit Javia, Falak Shah, Shivam Dave
2023-08-24T07:02:15Z
http://arxiv.org/abs/2308.12600v1
# PoseSync: Robust pose based video synchronization ###### Abstract Pose based video synchronization can have applications in multiple domains such as gameplay performance evaluation, choreography or guiding athletes. The subject's actions could be compared and evaluated against those performed by professionals side by side. In this paper, we propose an end to end pipeline for synchronizing videos based on pose. The first step crops the region where the person is present in the image, followed by pose detection on the cropped image. This is followed by application of Dynamic Time Warping (DTW) on angle/distance measures between the pose keypoints, leading to a scale and shift invariant pose matching pipeline. Keywords: Pose estimation, object detection, dynamic time warping ## 1 Introduction The video synchronization task refers to time-aligning the frames from multiple videos where the persons in both videos are trying to perform the same action but there are some mismatches in timing and action. This task, which is quite intuitive for humans, poses a number of challenges when automated, a few of which are listed below: * Pose differences between persons performing the action * Speed difference: leads to differences in the timing of action movements * Scale difference: depending on the distance between the person and the camera, and also inherent size differences * Shift in the position of persons within the frame We introduce a tool **PoseSync** that synchronizes any two videos using state of the art models at its backend for performing pose estimation and matching the poses. It consists of three stages: * Video frame cropping * Pose detection * Video synchronization using DTW PoseSync first crops the video frames using YOLO v5 [10] (we also experimented with tracking using the Multiple Instance Learning tracker [2] from OpenCV [14] for faster cropping). The cropping operation on the original frames improves the accuracy of pose detection by getting rid of other people in the background and any spurious information. These cropped frames are passed to a pose detection model called MoveNet that returns the pose keypoints for each frame. Finally, Dynamic Time Warping (DTW) [3] is used to map the keypoints for both videos (using the distance or angle based metrics described later) and map the test video to the reference video. DTW, originally proposed for speech recognition, is a general purpose algorithm that can measure the similarity of patterns across different time series. To solve the issue of size differences between two poses, we propose an Angle-Mean Absolute Error metric that computes the MAE between angles of key skeleton joints. This metric is invariant to the scale, position and angle of the pose. An open source implementation of the proposed algorithm can be found here. ### Relevant past work Different pose detection models have been proposed in the literature for detecting keypoints in human poses [6][17][1] from an image. TransPose [17] consists of a CNN feature extractor, a Transformer encoder, and a prediction head. The Transformer's attention layers can capture long range spatial relationships in the image that are key to detecting pose keypoints. The prediction head detects the precise locations of the keypoints by aggregating heatmaps generated by the Transformer. UniPose [1] is a single stage pose detection model that utilizes the Waterfall Atrous Spatial Pooling (WASP) module proposed by the authors.
They obtain a large effective field of view (and multi-scale representations) using dilated convolution [5] layers bunched together in a "Waterfall" configuration. Another human pose detection model, MoveNet, is a neural network based architecture built to track human poses in real time from video clips. We will further discuss this model in depth in Section 2. To find the similarity and relationship between two time series, various methods like cross-correlation and dynamic time warping (DTW) have been applied. Utpal Kumar et al. [11] concluded that DTW efficiently captures valuable information which helps to detect even minor variations in time series that windowed cross correlation (WCC) [4] fails to catch. In the field of water distribution networks, Seubli Lee et al. [12] found that the Dynamic Time Warping (DTW) algorithm performs better in searching for the minimum distance between two water data streams by comparing different time steps, rather than applying the Euclidean algorithm, which evaluates the data at the same time step. Rao et al. [15] proposed a DTW aided view-invariant similarity measure to determine temporal correspondence between two videos. Dexter et al. [7] took an alternative approach for video matching: they computed self-similarity matrices to describe the features along the image sequence, and then used these view-invariant descriptors for temporal alignment. [9] compares 48 dissimilarity metrics empirically for classifying various time series and finds that DTW-based metrics outperform the rest. Our main contributions are as follows: designing an end to end pose based video synchronization model by putting together the building blocks from different domains. We also introduce a metric for comparing the pose keypoints that is a) invariant to rotation/translation/scaling and b) gives more weightage to certain keypoints/joints based on specific task requirements. ## 2 Pose detection MoveNet [6] is a deep learning architecture specifically built for accurately detecting and tracking human poses in real time from video streams. It is optimized to efficiently operate on mobile devices with constrained computational resources, achieving high frame rates during execution. It is a bottom-up estimation model which utilizes heatmaps to precisely locate keypoints on the human body. The model comprises two main components: a feature extractor and a group of prediction heads, similar to CenterNet [8]. It utilizes MobileNetV2 [16] as its feature extractor, which is enhanced with a feature pyramid network (FPN) [13]. This combination enables the model to generate semantically rich feature maps with a high resolution output. The feature extractor in MoveNet is accompanied by four prediction heads that are responsible for densely predicting the following: * Person center heatmap: predicts the geometric center of individual person instances. * Keypoint regression field: infers the complete set of keypoints for each person individually, which helps in grouping keypoints into individual instances. * Person keypoint heatmap: infers the specific location of all keypoints, regardless of the person instances. * 2D per-keypoint offset field: predicts local offsets from each pixel in the output feature map to accurately determine the location of each keypoint. ## 3 Dynamic time warping Dynamic Time Warping (DTW) is a method used to calculate the similarity between two time series.
The primary goal of DTW is to identify corresponding matching elements in the time series and measure the distance between them. It relies on dynamic programming principles to determine the optimal temporal alignment between elements in two time series [3]. Researchers have successfully applied DTW for analyzing diverse types of sequential data, including audio, video or financial time series. Essentially, any form of data that can be represented as a linear sequence can be effectively analyzed using DTW. DTW assumes that the following conditions hold for both sequences: * The first index from the first sequence must be matched with the first index from the other sequence (although it may have additional matches). * The last index from the first sequence must be matched with the last index from the other sequence (while allowing for other matches). * Each index from the first sequence must be matched with one or more indices from the other sequence, and vice versa. * The mapping of indices from the first sequence to indices from the other sequence must be strictly increasing. This means that if index j comes after index i in the first sequence, there should not be two indices l and k in the other sequence, with k preceding l, such that index i is matched with index l and index j is matched with index k. We use DTW to synchronize the sequences of human poses detected from two videos. The elements of the sequences are sets of keypoints, which are used to compute the cost between any two elements from each series. The cost can be computed using metrics like the simple mean absolute error or the mean absolute error between the angles derived from the keypoints. The metric computation is covered in depth in Section 4. ## 4 Pose matching metric Pose matching can be performed between two unique poses using pose keypoints (human body joints as x, y coordinates) as vector representations and using Mean Absolute Error (MAE) or Mean Squared Error (MSE) as distance metrics. The limitation of these metrics is that they are not scale, rotation and shift invariant. That is, even when poses are similar, the MAE or MSE between pose keypoints could be high due to scale or position differences. To overcome this problem, we use an angle based mean absolute error. It works by first calculating the joint angles formed by triplets of joint points with one joint as pivot. We then calculate the MAE between the angles as the distance metric. We use the 9 joint triplets mentioned below for angle calculation. We found that these joints are sufficient for most common activities like dancing, exercise, etc.: * **left shoulder joint**: left hip, left shoulder, left elbow * **right shoulder joint**: right hip, right shoulder, right elbow * **right elbow joint**: right shoulder, right elbow, right wrist * **left elbow joint**: left shoulder, left elbow, left wrist * **right hip joint**: left hip, right hip, right knee * **left hip joint**: right hip, left hip, left knee * **right knee joint**: right hip, right knee, right ankle * **left knee joint**: left hip, left knee, left ankle * **waist joint**: left shoulder, left hip, left knee We use the angle MAE as the cost function for Dynamic Time Warping. Depending on the application, we can also assign weights to different joints/keypoints, which can help in better synchronization of the given videos. A compact sketch of this metric and of the DTW recursion is given below. ## 5 Results We applied our algorithm, PoseSync, on various videos for temporal alignment between actions. The alignment of videos containing human activities illustrates the robustness of the DTW aided algorithm.
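A minimal sketch of the angle-MAE cost of Section 4 and of the DTW table of Section 3, assuming each pose maps keypoint names to 2-D coordinate arrays (the function names and encoding are ours, not the paper's):

```python
import numpy as np

# Joint triplets from Section 4; the middle keypoint is the pivot.
TRIPLETS = [
    ("left_hip", "left_shoulder", "left_elbow"),
    ("right_hip", "right_shoulder", "right_elbow"),
    ("right_shoulder", "right_elbow", "right_wrist"),
    ("left_shoulder", "left_elbow", "left_wrist"),
    ("left_hip", "right_hip", "right_knee"),
    ("right_hip", "left_hip", "left_knee"),
    ("right_hip", "right_knee", "right_ankle"),
    ("left_hip", "left_knee", "left_ankle"),
    ("left_shoulder", "left_hip", "left_knee"),
]

def joint_angle(p, pivot, q):
    """Angle at `pivot` between the segments pivot->p and pivot->q."""
    u, v = p - pivot, q - pivot
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def angle_mae(pose_a, pose_b):
    """Mean absolute error between the 9 joint angles of two poses;
    invariant to scale, translation and rotation of the keypoints."""
    angles = lambda p: [joint_angle(p[a], p[b], p[c]) for a, b, c in TRIPLETS]
    return float(np.mean(np.abs(np.subtract(angles(pose_a), angles(pose_b)))))

def dtw_cost(seq_a, seq_b, cost=angle_mae):
    """Classic O(n*m) dynamic-programming DTW; returns the total
    alignment cost between two keypoint sequences."""
    n, m = len(seq_a), len(seq_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = cost(seq_a[i - 1], seq_b[j - 1]) + min(
                D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])
```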
Figures 1 and 2 show the video alignment between two videos with different types of human movements. The first column consists of reference video key frames and the second column contains test video frames at the same index as the reference frames. The third column consists of test frames mapped to reference frames using PoseSync. The results show that it can map similar poses accurately in videos with similar activities. We used various video combinations, such as: the original video and a video with some noise (a different clip of the same action), a clip of a different action, or increased/decreased speed. Test videos are generated by increasing/decreasing the speed of the entire reference video or of its beginning/middle/end part, or by putting another video clip of the same or a different action at the start/middle/end. PoseSync can match the video very well even if the other video has 2 seconds of noise anywhere in a 10-second clip. In the case of different speeds of the two videos, it is able to synchronize them with good accuracy, as shown in Table 1. Figure 1: Dance videos synchronization: key frames of the reference video (column 1), corresponding test video frames (column 2) and test video frames mapped to the respective reference frames by DTW (column 3) Figure 2: Tennis shots synchronization: key frames of the reference video (column 1), corresponding test video frames (column 2) and test video frames mapped to the respective reference frames by DTW (column 3) \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{Reference Video} & \multicolumn{2}{c|}{Test Video} & \multicolumn{3}{c|}{Video Matching} \\ \hline Length (in sec) & Description & Length (in sec) & Description & No. of frames to match & No. of frames actually matched & \% Video matched \\ \hline 1 & Sample video & 1 & Same sample video & 25 & 25 & 100 \\ \hline 7 & Action clips (A, B), each of 2 sec, ordered as A\_B\_A & 7 & Action clips (A, B), each of 2 sec, ordered as B\_A\_B & 163 & 102 & 62.57668712 \\ \hline 12 & Action clips (A, B), each of 6 sec, ordered as A\_B & 12 & Action clips (A, B), each of 6 sec, ordered as B\_A & 150 & 130 & 86.6666667 \\ \hline 8 & Normal video & 10 & Normal video (0-4) sec + 2 sec noise + normal video (4-8) sec & 194 & 173 & 89.17525773 \\ \hline 8 & Normal video & 10 & 2 sec noise + normal video (0-8) sec & 194 & 179 & 92.26804124 \\ \hline 8 & Normal video & 10 & Normal video (0-8) sec + 2 sec noise & 194 & 189 & 97.42268041 \\ \hline 8 & Normal video & 9 & Normal video (0-4) sec + 1 sec noise & 194 & 191 & 98.45360825 \\ \hline 8 & Normal video & 10 & Same as normal video + 2 sec noise & 237 & 228 & 96.20253165 \\ \hline 8 & Normal video & 10 & 2 sec noise + same as normal video & 239 & 238 & 99.58158996 \\ \hline 8 & Normal video & 10 & Normal video (0-4) sec + 2 sec noise + normal video (4-8) sec & 239 & 219 & 91.63179916 \\ \hline 8 & Normal video & 10 & Normal video (0-4) sec + 2 sec clip of different action + normal video (4-8) sec & 239 & 230 & 96.23430962 \\ \hline 1 & Normal video & 1 & Flipped normal video & 25 & 25 & 100 \\ \hline 7 & Normal video & 9 & Clip of \(\sim\)2 sec slowed down in the middle & 163 & 160 & 98.1595092 \\ \hline 4 & Normal video & 9 & Video slowed down & 105 & 104 & 99.04761905 \\ \hline 7 & Normal video & 10 & Clip of \(\sim\)2 sec slowed down in the middle & 237 & 235 & 99.15611814 \\ \hline 8 & Normal video & 10 & Clip of \(\sim\)2 sec slowed down at the end & 211 & 210 & 99.52606635 \\ \hline 8 & Normal video & 9 & Clip of \(\sim\)2 sec slowed down at the start & 207 & 199 & 96.1352657 \\ \hline 3 & Normal video & 7 & Video slowed down & 102 & 102 & 100 \\ \hline 3 & Normal video & 2 & Video sped up & 102 & 100 & 98.03921569 \\ \hline 3 & Normal video & 13 & Video speed decreased to 25\% & 102 & 102 & 100 \\ \hline 7 & Normal video & 7 & Zoomed in video & 105 & 102 & 96.19 \\ \hline \end{tabular} \end{table} Table 1: Accuracy metrics across various scenarios. ## 6 Conclusion We propose PoseSync, a method for synchronizing videos using a rotation/translation/scaling invariant metric of pose comparison. Since MoveNet is limited to detecting the pose of a single person in the image, the video needs to be cropped first. The video is therefore processed through YOLO v5 or the OpenCV tracker to get the cropped video frames, which are passed to the pose detection model, MoveNet. This MoveNet model returns 17 keypoints for each frame, yielding two sequences of 17 keypoints, since PoseSync takes two videos as input. To synchronize the two videos, we pass the keypoint sequences to Dynamic Time Warping (DTW), which computes a distance based on MAE and/or Angle-MAE between the two videos and maps the test video to the reference video. To solve the issue of size differences between two poses, we use the angle based metric, Angle-Mean Absolute Error, which computes the MAE between the angles of joints and is invariant to the scale, position and orientation of the pose.
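Putting the pieces together, a high-level sketch of the pipeline recapped above; `detect_person` and `detect_pose` stand for a YOLO v5 and a MoveNet wrapper respectively and are hypothetical callables, not real library APIs (`dtw_cost` is the sketch shown earlier):

```python
def pose_sync(ref_frames, test_frames, detect_person, detect_pose):
    """End-to-end sketch of PoseSync: crop each frame to the detected
    person, extract the keypoints, then align the two keypoint
    sequences with DTW using the angle-MAE cost."""
    def keypoint_sequence(frames):
        seq = []
        for frame in frames:
            x0, y0, x1, y1 = detect_person(frame)         # person bounding box
            seq.append(detect_pose(frame[y0:y1, x0:x1]))  # named keypoints
        return seq
    return dtw_cost(keypoint_sequence(ref_frames), keypoint_sequence(test_frames))
```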
2310.08412
Non-reducible Modal Transition Systems
Modal Transition Systems (MTS) are a well-known formalism that extend Labelled Transition Systems (LTS) with the possibility of specifying necessary and permitted behaviour. Modal refinement ($\preceq_m$) of MTS represents a step of the design process, namely the one in which some optional behaviour is discarded while other optional behaviour becomes necessary. Whenever two MTS are not in modal refinement relation, it could still be the case that the set of implementations of one MTS is included in the set of implementations of the other. The challenge of devising an alternative notion of modal refinement that is both sound and complete with respect to the set of implementations, without disregarding valuable implementations, remains open. We introduce a subset of MTS called Non-reducible Modal Transition Systems (NMTS), together with a novel refinement relation $\preceq_n$ for NMTS. We illustrate through examples the additional constraints imposed by NMTS. Furthermore, we discuss a property holding for NMTS whose implementations are non-deterministic.
Davide Basile
2023-10-12T15:27:12Z
http://arxiv.org/abs/2310.08412v4
# A Sound and Complete Refinement Relation ###### Abstract Modal Transition Systems (MTS) are a well-known formalism that extend Labelled Transition Systems (LTS) with the possibility of specifying necessary and permitted behaviour. Whenever two MTS are not in modal refinement relation, it could still be the case that the set of implementations of one MTS is included in the set of implementations of the other. The challenge of devising an alternative notion of modal refinement that is both sound and complete with respect to the set of implementations, without disregarding valuable implementations, remains open. In this paper, we address this challenge. We introduce a subset of MTS called Non-reducible Modal Transition Systems (NMTS), together with a novel refinement relation \(\preceq_{n}\) for NMTS. We show that \(\preceq_{n}\) is sound and also complete with respect to its set of implementations. We illustrate through examples how the additional constraints imposed by NMTS are necessary for achieving completeness. Furthermore, we discuss a property holding for NMTS whose implementations are non-deterministic. We show that any implementation obtained through \(\preceq_{m}\) but disregarded by \(\preceq_{n}\) violates this property. ## 1 Introduction Modal Transition Systems (MTS) [13] extend Labelled Transition Systems (LTS) [9] by distinguishing two types of transitions, meant to describe necessary and optional behaviour in a system specification by means of transitions that must _necessarily_ be implemented and transitions that may _optionally_ be implemented. MTS come with a concept of _refinement_, which represents a step of the design process, namely the one in which some optional behaviour is discarded while other optional behaviour becomes necessary. Stepwise refinement of an MTS eventually results in an _implementation_, which is an LTS in which no further refinement is possible. Refinement of MTS is critical for enabling formal reasoning on the correctness of a system's design and implementation, by allowing an abstract specification to be gradually refined into a concrete one while ensuring that each step is correct. MTS are a well-known specification theory and significant advances have been made so far [1, 10]. It is known that the (modal) refinement of MTS is not complete (cf., e.g., [12]). In other words, there are cases in which two MTS are not in a refinement relation although the set of implementations of one MTS is included in the set of implementations of the other MTS (this relation is known as thorough refinement). Furthermore, while modal refinement of MTS can be decided in polynomial time, deciding thorough refinement of MTS requires EXPTIME [7]. In [12], the problem of proposing an alternative notion of modal refinement that is both sound and complete with respect to its set of implementations is left open. An important aspect is to argue that the considered set of implementations is also interesting from a practical point of view (i.e., no valuable implementation is disregarded). In this paper, we address this long-standing challenge by proposing a subset of MTS, called _Non-reducible Modal Transition Systems_ (NMTS), together with their alternative notion of modal refinement \(\preceq_{n}\) that is both sound and complete with respect to its set of implementations. A fundamental insight behind NMTS is that states non-deterministically reachable through the execution of identical sequences of actions are related.
Specifically, the outgoing transitions sharing the same action label also share the same modality. Furthermore, when a refinement step deactivates one optional transition, this leads to the deactivation of all other transitions that share the same label from all other related (source) states. The contributions of this paper are: 1. we introduce NMTS, a subset of MTS. In NMTS, the transitions sharing the same action label are constrained to also share the same modality whenever they are reachable by the same sequence of actions; 2. we equip NMTS with an alternative notion of modal refinement, called NMTS refinement. The refinement of NMTS is derived from modal refinement by imposing an additional constraint on the optional transitions of the system to be refined; 3. we provide different examples of MTS instances that fail to meet the requisites for being either NMTS or refinements of NMTS. These examples show that the constraints imposed by NMTS and their refinement are necessary to achieve a sound and complete refinement relation; 4. we formally prove the soundness (Theorem 2) and completeness (Theorem 3) of NMTS refinement; 5. we introduce the _non-reducible non-determinism_ property concerning optional, non-deterministic actions. This non-determinism is inherent in such actions and should be preserved in any implementation where the action remains active. All implementations accepted by the standard MTS refinement, but discarded by the NMTS refinement, are shown to violate the non-reducible non-determinism property. _Overview._ Section 2 introduces background on MTS and modal refinement. Section 3 presents Non-reducible MTS (NMTS) and their refinement, proving that NMTS refinement is both sound and complete. Section 4 discusses the property of non-reducible non-determinism, showing that the implementations discarded by NMTS refinement but accepted by modal refinement violate this property. Section 5 discusses related work, while Section 6 concludes the paper and discusses future work. ## 2 Background We start by discussing some background on MTS. The standard definition of MTS accounts for two sets of transitions, _permitted_ (or _may_) transitions, denoted by \(\Delta_{\Diamond}\), and _necessary_ (or _must_) transitions, denoted by \(\Delta_{\Box}\), such that \(\Delta_{\Box}\subseteq\Delta_{\Diamond}\), i.e., all (necessary) transitions are permitted. A transition \((q,a,q^{\prime})\in\Delta_{\Diamond}\) is also denoted as \(q\xrightarrow{a}_{\Diamond}q^{\prime}\) and likewise \(q\xrightarrow{a}_{\Box}q^{\prime}\) if \((q,a,q^{\prime})\in\Delta_{\Box}\). The reader may be misled to think that \(q\xrightarrow{a}_{\Diamond}q^{\prime}\) excludes \(q\xrightarrow{a}_{\Box}q^{\prime}\), and vice versa that \(q\xrightarrow{a}_{\Box}q^{\prime}\) excludes \(q\xrightarrow{a}_{\Diamond}q^{\prime}\). However, the first statement is not always true and the second is always false, since \(\Delta_{\Box}\subseteq\Delta_{\Diamond}\). For our purpose, it is irrelevant to indicate that a transition is permitted. For the sake of simplifying the presentation, we thus opt for a slightly revised definition of MTS, where we partition the set of transitions into _optional_ and _necessary_ transitions, and no longer indicate the fact that all transitions are _permitted_.
Definition 1 (MTS): A _Modal Transition System (MTS)_ \(S\) is a 5-tuple \(S=(Q,A,\overline{s},\Delta_{\Diamond},\Delta_{\Box})\), with set \(Q\) of states, set \(A\) of actions, initial state \(\overline{s}\in Q\), and transition relation \(\Delta\subseteq Q\times A\times Q\) partitioned into _optional transitions_, denoted by \(\Delta_{\Diamond}\), and _necessary transitions_, denoted by \(\Delta_{\Box}\), i.e., \(\Delta_{\Diamond}\cap\Delta_{\Box}=\emptyset\). If \((s,a,s^{\prime})\in\Delta_{\Diamond}\), then we also write \(s\xrightarrow{a}_{\Diamond}s^{\prime}\), and likewise we also write \(s\xrightarrow{a}_{\Box}s^{\prime}\) for \((s,a,s^{\prime})\in\Delta_{\Box}\). We write \(s\xrightarrow{a}s^{\prime}\) when \((s,a,s^{\prime})\in\Delta\). We may omit the target state when it is immaterial. Note that the standard definition of MTS is \((Q,A,\overline{s},\Delta_{\Diamond}^{\prime},\Delta_{\Box})\), where \(\Delta_{\Diamond}^{\prime}=\Delta_{\Diamond}\cup\Delta_{\Box}\) is the set of permitted transitions. An LTS is an MTS where \(\Delta_{\Diamond}=\emptyset\). In the sequel, the conversion from an MTS (and NMTS, cf. Section 3) \((Q,A,\overline{s},\Delta_{\Diamond},\Delta_{\Box})\) with \(\Delta_{\Diamond}=\emptyset\) to an LTS \((Q,A,\overline{s},\Delta)\) with \(\Delta=\Delta_{\Box}\) is implicit. Moreover, we will use subscripts or superscripts to indicate the origin of an element of a tuple, i.e., \(S=(Q_{S},A_{S},\overline{s},\Delta_{S}^{\Diamond},\Delta_{S}^{\Box})\). We now define modal refinement of MTS. Definition 2 (modal refinement): An MTS \(S\) is a _(modal) refinement_ of an MTS \(T\), denoted by \(S\preceq_{m}T\), if and only if there exists a _refinement relation_ \(\mathcal{R}\subseteq Q_{S}\times Q_{T}\) such that \((\overline{s},\overline{t})\in\mathcal{R}\) and for all \((s,t)\in\mathcal{R}\), the following holds: 1. whenever \(t\xrightarrow{a}_{\Box}t^{\prime}\), for some \(t^{\prime}\in Q_{T}\) and \(a\in A_{T}\), then \(a\in A_{S}\), \(\exists\,s^{\prime}\in Q_{S}:s\xrightarrow{a}_{\Box}s^{\prime}\), and \((s^{\prime},t^{\prime})\in\mathcal{R}\), and 2. whenever \(s\xrightarrow{a}s^{\prime}\), for some \(s^{\prime}\in Q_{S}\) and \(a\in A_{S}\), then \(a\in A_{T}\), \(\exists\,t^{\prime}\in Q_{T}:t\xrightarrow{a}t^{\prime}\), and \((s^{\prime},t^{\prime})\in\mathcal{R}\). We also say that \(S\) (modally) refines \(T\) when \(S\preceq_{m}T\). Intuitively, \(S\) modally refines \(T\) if any necessary transition of \(T\) can be mimicked by a necessary transition of \(S\), and every transition of \(S\) can be mimicked by a transition of \(T\). The set of implementations of an MTS \(S\), written \(Impl_{m}(S)\), is defined as the set of LTS \(I\) such that \(I\preceq_{m}S\). Indeed, LTS cannot be further refined and are considered implementations. In other words, every LTS refinement of an MTS \(S\) is an _implementation_ of \(S\). In [12], it is shown that \(S\preceq_{m}T\) implies \(Impl_{m}(S)\subseteq Impl_{m}(T)\). In other words, modal refinement is _sound_, i.e., each time an MTS \(S\) modally refines an MTS \(T\), it follows that the set of implementations of \(T\) also contains the implementations of \(S\). However, the converse is not true, i.e., modal refinement is not _complete_. Figure 1, reproduced from [7], shows an example where the set of implementations of \(T\) contains the implementations of \(S\), but \(S\) does not modally refine \(T\).
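Definition 2 suggests a direct greatest-fixpoint procedure for finite MTS; a minimal sketch, with an encoding and function name of our own choosing:

```python
from itertools import product

def modal_refinement(S, T):
    """Check S <=_m T (Definition 2) by greatest fixpoint: start from
    all state pairs and discard pairs violating one of the two transfer
    conditions until stable.  An MTS is encoded as
    (states, actions, initial, opt, nec), with `opt`/`nec` disjoint
    sets of (source, action, target) triples, as in Definition 1."""
    QS, _, s0, optS, necS = S
    QT, _, t0, optT, necT = T
    allS, allT = optS | necS, optT | necT
    R = set(product(QS, QT))
    changed = True
    while changed:
        changed = False
        for (s, t) in list(R):
            # 1. every necessary transition of t is matched by a
            #    necessary transition of s with related targets
            c1 = all(any((s, a, s2) in necS and (s2, t2) in R for s2 in QS)
                     for (t1, a, t2) in necT if t1 == t)
            # 2. every transition of s is matched by some transition
            #    of t with related targets
            c2 = all(any((t, a, t2) in allT and (s2, t2) in R for t2 in QT)
                     for (s1, a, s2) in allS if s1 == s)
            if not (c1 and c2):
                R.discard((s, t))
                changed = True
    return (s0, t0) in R

# Tiny illustration (not the systems of Figure 1): S must do `a`,
# T may do `a`; hence S <=_m T holds.
S = ({"s"}, {"a"}, "s", set(), {("s", "a", "s")})
T = ({"t"}, {"a"}, "t", {("t", "a", "t")}, set())
print(modal_refinement(S, T))  # -> True
```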
## 3 Non-Reducible MTS Refinement All examples documented in the literature (e.g., [7, 12]) which demonstrate that modal refinement is not complete (e.g., Figure 1) involve the utilization of a non-deterministic choice within the system under refinement. This non-deterministic choice, such as the outgoing transitions from state \(\overline{t}\) in \(T\) (as depicted in Figure 1), is subsequently eliminated in the refined system, as is the case in Figure 1 for system \(S\). Consider Figure 2. Similarly to Figure 1, it shows an example of two MTS \(S\) and \(T\) such that the implementations of \(S\) are included in the implementations of \(T\), although \(S\) and \(T\) are not in modal refinement relation. Figure 2 differs from Figure 1 (and all other similar examples in the literature) in that it preserves the non-deterministic choices of \(T\) within \(S\). In \(T\), the states \(t_{1}\) and \(t_{2}\) are reachable by executing the same action \(c\) and both exhibit outgoing transitions labeled \(a\). Nonetheless, these two transitions do not share the same modality. _NMTS._ In this section, we identify the subset of MTS that exclusively discards systems such as those in Figure 2. Indeed, Figure 2 shows how such MTS instances can result in a violation of completeness. We introduce _Non-reducible Modal Transition Systems_ (NMTS). In NMTS, whenever a sequence of actions leads non-deterministically to different states, all these states are interconnected by the requirement that transitions associated with the same action must also have the same modality. In the following, let \(w=a_{1}\ldots a_{n}\) be a sequence of actions in \(A^{*}\). The sequence of transitions \(\overline{s}\xrightarrow{a_{1}}s_{1}\), \(s_{1}\xrightarrow{a_{2}}s_{2}\), \(\ldots\), \(s_{n-1}\xrightarrow{a_{n}}s\) is written as \(\overline{s}\xrightarrow{w}s\). Furthermore, we write \(\overline{s}\not\xrightarrow{w}s\) when it is not possible to reach \(s\) from \(\overline{s}\) through the sequence of actions \(w\). Figure 1: From left to right, two MTS \(S\) and \(T\) such that \(S\not\preceq_{m}T\) and \(Impl_{m}(S)\subseteq Impl_{m}(T)\), showing that modal refinement is not complete (reproduced from [7]). Dashed arcs are used to depict optional transitions (\(\Delta_{\Diamond}\)), while solid arcs depict necessary transitions (\(\Delta_{\Box}\)). Definition 3 (NMTS): A _Non-reducible Modal Transition System (NMTS)_ \(S\) is a 6-tuple \(S=(Q,A,\overline{s},\Delta,f_{\Box},f_{\Diamond})\), with set \(Q\) of states, set \(A\) of actions, initial state \(\overline{s}\in Q\), and transition relation \(\Delta\subseteq Q\times A\times Q\), where \(\Delta\) is partitioned into \(\Delta_{\Diamond}\), the set of _optional transitions_, and \(\Delta_{\Box}\), the set of _necessary_ transitions, i.e., \(\Delta_{\Diamond}\cap\Delta_{\Box}=\emptyset\). Functions \(f_{\Box}:A^{*}\mapsto 2^{A}\) and \(f_{\Diamond}:A^{*}\mapsto 2^{A}\) are such that
* _for all_ \(w\in A^{*}\) _such that_ \(\overline{s}\xrightarrow{w}s\) _it holds that_ \(f_{\Box}(w)\cap f_{\Diamond}(w)=\emptyset\)_,_ * _for all_ \(w_{1},w_{2}\in A^{*}\)_, whenever_ \(\overline{s}\xrightarrow{w_{1}}s\) _and_ \(\overline{s}\xrightarrow{w_{2}}s\) _then_ \(f_{\Box}(w_{1})=f_{\Box}(w_{2})\) _and_ \(f_{\Diamond}(w_{1})=f_{\Diamond}(w_{2})\)_,_ * _whenever_ \((s,a,s^{\prime})\in\Delta\) _there exists_ \(w\in A^{*}\) _such that_ \(\overline{s}\xrightarrow{w}s\) _and either_ * \(a\in f_{\Box}(w)\)_, and in this case_ \((s,a,s^{\prime})\in\Delta_{\Box}\)_, or_ * \(a\in f_{\Diamond}(w)\)_, and in this case_ \((s,a,s^{\prime})\in\Delta_{\Diamond}\)_,_ * _whenever_ \(a\in f_{\Box}(w)\cup f_{\Diamond}(w)\) _for some_ \(w\in A^{*}\)_, there exists a state_ \(s\in Q_{S}\) _such that_ \(\overline{s}\xrightarrow{w}s\) _and_ \((s,a,s^{\prime})\in\Delta\) _for some_ \(s^{\prime}\in Q_{S}\)_._ _We write \(f(w)\) to denote \(f_{\Box}(w)\cup f_{\Diamond}(w)\)._ Definition 3 enhances Definition 1 by including two functions, namely \(f_{\Box}\) and \(f_{\Diamond}\). These functions serve a dual purpose. Firstly, they establish a connection between states reachable through the execution of identical sequences of actions. Secondly, they constrain outgoing transitions from interconnected states that share the same label to also share the same modality. Note that NMTS are a strict subset of MTS because in NMTS, for all \(w\in A^{*}\) such that \(\overline{s}\xrightarrow{w}s\), the condition \(f_{\Box}(w)\cap f_{\Diamond}(w)=\emptyset\) holds (as defined in Definition 3). In contrast, within MTS, it is possible to have \(f_{\Box}(w)\cap f_{\Diamond}(w)\neq\emptyset\). If we were to remove this constraint from Definition 3, then NMTS would become equivalent to MTS. Consider Figure 3. In contrast to Figure 2, Figure 3 presents two systems, denoted as \(S\) and \(T\), satisfying the conditions of Definition 3 (i.e., \(S\) and \(T\) are NMTS), and satisfying the conditions \(Impl_{m}(S)\subseteq Impl_{m}(T)\) and \(S\not\preceq_{m}T\). Similarly to Figure 2, also in Figure 3 the non-deterministic choice in state \(t_{1}\) in \(T\) is maintained in state \(s_{1}\) in \(S\). Figure 3 proves that, for achieving completeness, it is not sufficient to constrain MTS to be NMTS. In the following, we will show that it is also necessary to introduce constraints on the refinement relation between NMTS. _NMTS refinement._ We now introduce NMTS modal refinement \(\preceq_{n}\). In contrast to standard modal refinement, an additional condition is introduced, which applies to the optional transitions within the system undergoing refinement. If an optional transition is deactivated during the refinement process, it is required that this deactivation applies uniformly to all other optional transitions sharing the same action. This uniform deactivation rule applies across all source states reachable through the same sequence of actions. Definition 4 (NMTS refinement): An NMTS \(S\) is an NMTS refinement of another NMTS \(T\), denoted as \(S\preceq_{n}T\), if there exists a refinement relation \(\mathcal{R}\subseteq Q_{S}\times Q_{T}\) between the states of the two systems such that \((\overline{s},\overline{t})\in\mathcal{R}\) and for all \((s,t)\in\mathcal{R}\) there exists \(w\in A_{S}^{*}\) such that \(\overline{s}\xrightarrow{w}s\) and \(\overline{t}\xrightarrow{w}t\) and 1.
whenever \(t\xrightarrow{a}_{\Box}t^{\prime}\) (for some \(t^{\prime}\in Q_{T}\), \(a\in f_{T}^{\Box}(w)\)), then \(a\in f_{S}^{\Box}(w)\) and there exists a state \(s^{\prime}\in Q_{S}\) such that \(s\xrightarrow{a}_{\Box}s^{\prime}\) and \((s^{\prime},t^{\prime})\in\mathcal{R}\). 2. whenever \(t\xrightarrow{a}_{\Diamond}t^{\prime}\) (for some \(t^{\prime}\in Q_{T}\), \(a\in f_{T}^{\Diamond}(w)\)), then one of the following holds: * \(a\not\in f_{S}(w)\), or * \(a\in f_{S}(w)\) and there exists a state \(s^{\prime}\in Q_{S}\) such that \(s\xrightarrow{a}s^{\prime}\) and \((s^{\prime},t^{\prime})\in\mathcal{R}\). 3. whenever \(s\xrightarrow{a}s^{\prime}\) (for some \(s^{\prime}\in Q_{S}\), \(a\in f_{S}(w)\)), then \(a\in f_{T}(w)\), and there exists a state \(t^{\prime}\in Q_{T}\) such that \(t\xrightarrow{a}t^{\prime}\) and \((s^{\prime},t^{\prime})\in\mathcal{R}\). As discussed earlier, Figure 3 shows that the further constraint imposed by Definition 3 is not sufficient to achieve completeness of modal refinement. Figure 2 and Figure 4 show that the additional constraint imposed by Definition 4 on the refinement relation, when considered independently, is also not sufficient to achieve completeness. Indeed, if we switch \(\preceq_{m}\) with \(\preceq_{n}\) in Figure 2, as shown in Figure 4, it would still hold that \(Impl_{n}(S)\subseteq Impl_{n}(T)\) and \(S\not\preceq_{n}T\), because \(S\) and \(T\) are not NMTS. In other words, the example in Figure 2 proves that if non-deterministic MTS do not meet the criteria to be classified as NMTS, then it is possible to build an example, such as the one in Figure 2, showing that both modal refinement and NMTS refinement are not complete. In summary, the examples in Figure 2 and Figure 3 show that the constraints on MTS and their refinement provided by Definition 3 and Definition 4 are both required to achieve completeness. By either dropping the constraints on MTS (i.e., Definition 3) or on their refinement (i.e., Definition 4), it is possible to demonstrate that the resulting refinement relation is not complete. In Theorem 3, we will prove that the constraints imposed by Definition 3 and Definition 4 are also sufficient to achieve completeness of the refinement relation. Figure 4: Two LTS \(I\) and \(I^{\prime}\), both implementations of the MTS \(S\) and \(T\) of Figure 2. The set \(Impl_{n}(S)\) contains all and only the LTS that are strongly bisimilar to either \(I\) or \(I^{\prime}\). It follows that \(Impl_{n}(S)\subseteq Impl_{n}(T)\) and \(S\not\preceq_{n}T\) (under the assumption that \(\preceq_{n}\) is also applicable to MTS). Figure 5 depicts another example showcasing the differences between \(\preceq_{m}\) and \(\preceq_{n}\). Consider the LTS \(I_{T}\) obtained by switching all transitions of \(T\) (in Figure 5) to must. Clearly, \(I_{T}\preceq_{n}T\), but \(I_{T}\not\preceq_{n}S\) (note that this is not true for the case of \(\preceq_{m}\)). Due to the coinductive nature of Definition 4, similarly to the complexity of deciding modal refinement or strong bisimulation, the complexity of deciding NMTS refinement is also polynomial, provided that the input includes the functions \(f_{\Box}\) and \(f_{\Diamond}\). Remark: Note that, as an alternative characterisation, the condition outlined in Definition 3 (namely, \(f_{\Box}(w)\cap f_{\Diamond}(w)=\emptyset\)) can be omitted, at the cost of modifying the refinement relation \(\preceq_{n}\) to a new form, call it \(\preceq_{n}^{\prime}\), modified as follows.
In \(\preceq_{n}^{\prime}\), whenever \(a\in f_{\Box}(w)\cap f_{\Diamond}(w)\) for some \(a\in A\), all transitions reachable via \(w\) and labeled with \(a\) are treated as necessary, even if they are declared optional. We argue that while this alternative characterisation would enable the inclusion of all MTS and not a limited subset, it would introduce ambiguity. This is because it would permit denoting a necessary transition \(\delta\) as optional whenever there exists another necessary transition \(\delta^{\prime}\) with the same action as \(\delta\) and reachable through the execution of the same sequence of actions. Therefore, whenever in an MTS it holds that \(a\in f_{\Box}(w)\cap f_{\Diamond}(w)\) for some \(a\in A\), rather than considering all transitions reachable via \(w\) and labeled with \(a\) as necessary, even if they are denoted as optional, we opt to exclude such MTS from consideration. We show that \(\preceq_{n}\) is a conservative extension of \(\preceq_{m}\). Theorem 1: _Let \(S\) and \(T\) be two NMTS. If \(S\preceq_{n}T\) then \(S\preceq_{m}T\)._ Proof: Let \(\mathcal{R}\) be a relation proving \(S\preceq_{n}T\). It holds that \((\overline{s},\overline{t})\in\mathcal{R}\). Furthermore, for any \((s,t)\in\mathcal{R}\), by hypothesis there exists some \(w\in A_{S}^{*}\) such that \(\overline{s}\xrightarrow{w}s\) and \(\overline{t}\xrightarrow{w}t\). Furthermore: * whenever \(t\xrightarrow{a}_{\Box}t^{\prime}\), it holds that \(a\in f_{S}^{\Box}(w)\) (therefore \(a\in A_{S}\)), \(s\xrightarrow{a}_{\Box}s^{\prime}\) and \((s^{\prime},t^{\prime})\in\mathcal{R}\); * whenever \(s\xrightarrow{a}s^{\prime}\), it holds that \(a\in f_{T}(w)\) (therefore \(a\in A_{T}\)), \(t\xrightarrow{a}t^{\prime}\) and \((s^{\prime},t^{\prime})\in\mathcal{R}\). Consider Figure 3. Since \(S\not\preceq_{m}T\), by Theorem 1 it follows that \(S\not\preceq_{n}T\). We now show the relations between the functions \(f^{\Box}\) and \(f^{\Diamond}\) of two systems in NMTS refinement relation. Lemma 1: _Let \(S\) and \(T\) be two NMTS such that \(S\preceq_{n}T\). For all \(w\in A_{S}^{*}\) such that \(\overline{s}\xrightarrow{w}s\), it holds that \(f_{S}^{\Diamond}(w)\subseteq f_{T}^{\Diamond}(w)\), \(f_{T}^{\Box}(w)\subseteq f_{S}^{\Box}(w)\) and \(f_{S}^{\Box}(w)\setminus f_{T}^{\Box}(w)\subseteq f_{T}^{\Diamond}(w)\setminus f_{S}^{\Diamond}(w)\)._ Proof: Each action in \(A_{S}\) appears in some transition in \(\Delta_{S}\) (there are no redundant elements in \(A_{S}\)). By hypothesis \(S\preceq_{n}T\) and by point 3 of Definition 4 it holds that \(A_{S}\subseteq A_{T}\) (we assume that all states are reachable, i.e., there are no redundant states). We first prove that for all \(w\in A_{S}^{*}\) such that \(\overline{s}\xrightarrow{w}s\) it holds that \(f_{T}^{\Box}(w)\subseteq f_{S}^{\Box}(w)\). By contradiction, assume that there exists some \(w\in A_{S}^{*}\) such that \(\overline{s}\xrightarrow{w}s\) and \(a\in f_{T}^{\Box}(w)\setminus f_{S}^{\Box}(w)\). Hence, by Definition 3 there exists a transition \(\delta\in\Delta_{T}^{\Box}\) labelled with \(a\), for some \(t\in Q_{T}\) source state of \(\delta\), such that \(\overline{t}\xrightarrow{w}t\). By hypothesis, there must be some \(s\in Q_{S}\) such that \((s,t)\in\mathcal{R}\), where \(\mathcal{R}\) is the NMTS refinement relation for \(S\preceq_{n}T\). By Definition 4 it holds that \(s\xrightarrow{a}_{\Box}s^{\prime}\in\Delta_{S}\) and \(a\in f_{S}^{\Box}(w)\). We reached a contradiction.
We now show that for all \(w\in A_{S}^{*}\) such that \(\overline{s}\xrightarrow{w}s\) it holds \(f_{S}^{\Diamond}(w)\subseteq f_{T}^{\Diamond}(w)\). By contradiction, assume that there exists some \(w\in A_{S}^{*}\) with \(\overline{s}\xrightarrow{w}s\) and an action \(a\in f_{S}^{\Diamond}(w)\setminus f_{T}^{\Diamond}(w)\). Hence, there exists a transition \(\delta\in\Delta_{S}^{\Diamond}\) reachable via \(w\) and labelled with \(a\). Let \(s\) be the source state of \(\delta\). By Definition 4, since \(A_{S}\subseteq A_{T}\), for some \(t\in Q_{T}\) it holds that \(\overline{t}\xrightarrow{w}t\) and \((s,t)\in\mathcal{R}\). By Definition 4 it holds that \(t\xrightarrow{a}t^{\prime}\in\Delta_{T}\). Since \(a\not\in f_{T}^{\Diamond}(w)\), it must be the case that \(a\in f_{T}^{\Box}(w)\), hence \(a\in f_{S}^{\Box}(w)\). Since \(f_{S}^{\Box}(w)\cap f_{S}^{\Diamond}(w)=\emptyset\), we reached a contradiction. Finally, we prove that for all \(w\in A_{S}^{*}\) such that \(\overline{s}\xrightarrow{w}s\) it holds \(f_{S}^{\Box}(w)\setminus f_{T}^{\Box}(w)\subseteq f_{T}^{\Diamond}(w)\setminus f_{S}^{\Diamond}(w)\). Let \(a\in f_{S}^{\Box}(w)\setminus f_{T}^{\Box}(w)\). Since \(f_{S}^{\Box}(w)\cap f_{S}^{\Diamond}(w)=\emptyset\), we have \(a\not\in f_{S}^{\Diamond}(w)\). Moreover, there exists some transition \(s\xrightarrow{a}s^{\prime}\in\Delta_{S}\) with \(\overline{s}\xrightarrow{w}s\). By Definition 4, for some \(t\in Q_{T}\) it holds that \(\overline{t}\xrightarrow{w}t\), \((s,t)\in\mathcal{R}\), \(t\xrightarrow{a}t^{\prime}\in\Delta_{T}\), \(a\in f_{T}(w)\) and \((s^{\prime},t^{\prime})\in\mathcal{R}\). Since \(a\in f_{T}(w)\) and \(a\not\in f_{T}^{\Box}(w)\), it must be the case that \(a\in f_{T}^{\Diamond}(w)\). We now show that, similarly to \(\preceq_{m}\), \(\preceq_{n}\) is also a preorder.

Lemma 2: _The relation \(\preceq_{n}\) is a preorder._

Proof: Let \(S\) be an NMTS. Clearly, \(\{(s,s)\mid s\in Q_{S}\}\) shows that \(S\preceq_{n}S\). Let \(T\) and \(U\) be two NMTS such that \(S\preceq_{n}T\) and \(T\preceq_{n}U\). We now prove that the relation \(\mathcal{R}=\{(s,u)\mid(s,t)\in\mathcal{R}_{S\preceq_{n}T},(t,u)\in\mathcal{R}_{T\preceq_{n}U},t\in Q_{T}\}\) shows that \(S\preceq_{n}U\). Clearly \((\overline{s},\overline{u})\in\mathcal{R}\). Whenever \((s,u)\in\mathcal{R}\) for some \(w\in A_{S}^{*}\) where \(\overline{s}\xrightarrow{w}s\) and \(\overline{u}\xrightarrow{w}u\), then: * if \(u\xrightarrow{a}_{\Box}u^{\prime}\), by \((t,u)\in\mathcal{R}_{T\preceq_{n}U}\) it holds \(t\xrightarrow{a}_{\Box}t^{\prime}\) and \((t^{\prime},u^{\prime})\in\mathcal{R}_{T\preceq_{n}U}\). By \((s,t)\in\mathcal{R}_{S\preceq_{n}T}\) it holds \(s\xrightarrow{a}_{\Box}s^{\prime}\) and \((s^{\prime},t^{\prime})\in\mathcal{R}_{S\preceq_{n}T}\). Therefore, \((s^{\prime},u^{\prime})\in\mathcal{R}\); * if \(u\xrightarrow{a}_{\Diamond}u^{\prime}\), by \((t,u)\in\mathcal{R}_{T\preceq_{n}U}\) we distinguish two cases: * either \(a\not\in f_{T}(w)\); then, by \((s,t)\in\mathcal{R}_{S\preceq_{n}T}\) and Lemma 1 (which gives \(f_{S}(w)\subseteq f_{T}(w)\)), it follows that \(a\not\in f_{S}(w)\); * or \(a\in f_{T}(w)\), \(t\xrightarrow{a}t^{\prime}\) and \((t^{\prime},u^{\prime})\in\mathcal{R}_{T\preceq_{n}U}\).
By \((s,t)\in\mathcal{R}_{S\preceq_{n}T}\), either \(a\not\in f_{S}(w)\), or \(a\in f_{S}(w)\), \(s\xrightarrow{a}s^{\prime}\), \((s^{\prime},t^{\prime})\in\mathcal{R}_{S\preceq_{n}T}\) and \((s^{\prime},u^{\prime})\in\mathcal{R}\); * if \(s\xrightarrow{a}s^{\prime}\), by \((s,t)\in\mathcal{R}_{S\preceq_{n}T}\) it follows that \(t\xrightarrow{a}t^{\prime}\) and \((s^{\prime},t^{\prime})\in\mathcal{R}_{S\preceq_{n}T}\). By \((t,u)\in\mathcal{R}_{T\preceq_{n}U}\) it follows that \(u\xrightarrow{a}u^{\prime}\) and \((t^{\prime},u^{\prime})\in\mathcal{R}_{T\preceq_{n}U}\). Hence \((s^{\prime},u^{\prime})\in\mathcal{R}\).

Given an MTS \(S\), we denote with \(Impl_{n}(S)\) the set of LTS \(I\) such that \(I\preceq_{n}S\). The soundness of \(\preceq_{n}\) is straightforward.

**Theorem 2** (\(\preceq_{n}\) soundness): _Let \(S\) and \(T\) be two MTS. If \(S\preceq_{n}T\) then \(Impl_{n}(S)\subseteq Impl_{n}(T)\)._

Proof: Pick an implementation \(I\preceq_{n}S\); since \(S\preceq_{n}T\), by transitivity \(I\preceq_{n}T\).

Before proceeding to prove the completeness of \(\preceq_{n}\), we establish two auxiliary lemmata. The first lemma demonstrates that a refinement can occur by either asserting (i.e., switching to necessary) or removing a set of optional transitions that are reachable through the same sequence of actions and share the same action label.

Lemma 3: _Let \(S\) be an NMTS and let \(S^{\prime}=(Q_{S^{\prime}},A_{S^{\prime}},\overline{s},\Delta_{S^{\prime}},f_{S^{\prime}}^{\Box},f_{S^{\prime}}^{\Diamond})\) be obtained from \(S\) as follows:_
* _there exists a sequence_ \(w\in A_{S}^{*}\) _such that_ \(\overline{s}\xrightarrow{w}s\) _for some_ \(s\in Q_{S}\)_, and an action_ \(a\in f_{S}^{\Diamond}(w)\)_;_
* _for all_ \(w^{\prime}\in A_{S}^{*}\) _such that_ \((i)\) _there exists_ \(s^{\prime}\in Q_{S}\) _with_ \(\overline{s}\xrightarrow{w^{\prime}}s^{\prime}\) _and_ \((ii)\) _for all_ \(s^{\prime\prime}\in Q_{S}\) _it holds_ \(\overline{s}\not\xrightarrow{w}s^{\prime\prime}\) _or_ \(\overline{s}\not\xrightarrow{w^{\prime}}s^{\prime\prime}\)_, it holds that_ \(f_{S^{\prime}}^{\Box}(w^{\prime})=f_{S}^{\Box}(w^{\prime})\) _and_ \(f_{S^{\prime}}^{\Diamond}(w^{\prime})=f_{S}^{\Diamond}(w^{\prime})\)_;_
* _either_ \(a\in f_{S^{\prime}}^{\Box}(w)\) _and_ \(a\not\in f_{S^{\prime}}^{\Diamond}(w)\) _(assert action), or_ \(a\not\in f_{S^{\prime}}^{\Box}(w)\) _and_ \(a\not\in f_{S^{\prime}}^{\Diamond}(w)\) _(remove action)._

_Furthermore, \(Q_{S^{\prime}}=\{s\mid s\in Q_{S},s\text{ is reachable in }S^{\prime}\}\) and \(A_{S^{\prime}}=\{a\mid a\in A_{S},(s,a,s^{\prime})\in\Delta_{S^{\prime}}\text{ for some }s,s^{\prime}\in Q_{S^{\prime}}\}\). It holds that \(S^{\prime}\preceq_{n}S\)._

Proof: Let \(\mathcal{R}=\{(s,s)\mid s\in Q_{S^{\prime}}\}\). We show that \(\mathcal{R}\) proves \(S^{\prime}\preceq_{n}S\). Trivially \((\overline{s},\overline{s})\in\mathcal{R}\). Furthermore, for all couples \((s,s)\in\mathcal{R}\) such that \(\overline{s}\not\xrightarrow{w}s\), the outgoing transitions of \(s\) are identical in \(S\) and \(S^{\prime}\), and the conditions in Definition 4 hold trivially. When \((s,s)\in\mathcal{R}\) is such that \(\overline{s}\xrightarrow{w}s\), it holds that: * whenever \(s\xrightarrow{a^{\prime}}_{\Box}s^{\prime}\in\Delta_{S}\) (\(a^{\prime}\in f_{S}^{\Box}(w)\)), we need to show that \(a\neq a^{\prime}\); otherwise, in the case of remove, we would have \(a\not\in f_{S^{\prime}}^{\Box}(w)\).
Since \(a\in f_{S}^{\Diamond}(w)\) and \(f_{S}^{\Diamond}(w)\cap f_{S}^{\Box}(w)=\emptyset\), it follows that \(a\neq a^{\prime}\), \(a^{\prime}\in f_{S^{\prime}}^{\Box}(w)\), \(s\xrightarrow{a^{\prime}}_{\Box}s^{\prime}\in\Delta_{S^{\prime}}\) and \((s^{\prime},s^{\prime})\in\mathcal{R}\); * whenever \(s\xrightarrow{a^{\prime}}_{\Diamond}s^{\prime}\in\Delta_{S}\) (\(a^{\prime}\in f_{S}^{\Diamond}(w)\)), if \(a\neq a^{\prime}\) then \(a^{\prime}\in f_{S^{\prime}}^{\Diamond}(w)\), \(s\xrightarrow{a^{\prime}}_{\Diamond}s^{\prime}\in\Delta_{S^{\prime}}\) and \((s^{\prime},s^{\prime})\in\mathcal{R}\). Otherwise, \(a\not\in f_{S^{\prime}}^{\Diamond}(w)\); * whenever \(s\xrightarrow{a^{\prime}}s^{\prime}\in\Delta_{S^{\prime}}\) (\(a^{\prime}\in f_{S^{\prime}}(w)\)), then \(a^{\prime}\in f_{S}(w)\), \(s\xrightarrow{a^{\prime}}s^{\prime}\in\Delta_{S}\) and \((s^{\prime},s^{\prime})\in\mathcal{R}\).

The second lemma shows the conditions under which it is possible to switch a set of necessary transitions (whose source state is reachable by the same sequence of actions) to optional ones, whilst preserving NMTS refinement.

Lemma 4: _Let \(S\) and \(T\) be two NMTS such that \(S\preceq_{n}T\), where for some \(s\in Q_{S}\) there exists a sequence \(w\in A_{S}^{*}\) with \(\overline{s}\xrightarrow{w}s\) and an action \(a\in f_{S}^{\Box}(w)\setminus f_{T}^{\Box}(w)\). It holds \(S^{\prime}\preceq_{n}T\), where \(S^{\prime}=(Q_{S},A_{S},\overline{s},\Delta_{S^{\prime}},f_{S^{\prime}}^{\Box},f_{S^{\prime}}^{\Diamond})\) and:_
* _for all_ \(w^{\prime}\in A_{S}^{*}\) _such that_ \((i)\) _there exists_ \(s^{\prime}\in Q_{S}\) _with_ \(\overline{s}\xrightarrow{w^{\prime}}s^{\prime}\) _and_ \((ii)\) _for all_ \(s^{\prime\prime}\in Q_{S}\) _it holds_ \(\overline{s}\not\xrightarrow{w}s^{\prime\prime}\) _or_ \(\overline{s}\not\xrightarrow{w^{\prime}}s^{\prime\prime}\)_, it holds that_ \(f_{S^{\prime}}^{\Box}(w^{\prime})=f_{S}^{\Box}(w^{\prime})\) _and_ \(f_{S^{\prime}}^{\Diamond}(w^{\prime})=f_{S}^{\Diamond}(w^{\prime})\)_;_
* \(f_{S^{\prime}}^{\Box}(w)=f_{S}^{\Box}(w)\setminus\{a\}\) _and_ \(f_{S^{\prime}}^{\Diamond}(w)=f_{S}^{\Diamond}(w)\cup\{a\}\)_._

Proof: Firstly, since \(S\preceq_{n}T\), by Lemma 1, for all \(w^{\prime}\in A_{S}^{*}\) such that \(\overline{s}\xrightarrow{w^{\prime}}s\), it holds \(f_{S}^{\Box}(w^{\prime})\setminus f_{T}^{\Box}(w^{\prime})\subseteq f_{T}^{\Diamond}(w^{\prime})\setminus f_{S}^{\Diamond}(w^{\prime})\). Therefore, by hypothesis, \(a\in f_{T}^{\Diamond}(w)\). Assume that \(\mathcal{R}\) proves \(S\preceq_{n}T\). Then, we show that the relation \(\mathcal{R}\) also proves \(S^{\prime}\preceq_{n}T\). Firstly, \((\overline{s},\overline{t})\in\mathcal{R}\). For all \((s,t)\in\mathcal{R}\) such that \(\overline{s}\not\xrightarrow{w}s\) and \(\overline{t}\not\xrightarrow{w}t\), the outgoing transitions of \(s\) in \(S^{\prime}\) are identical to those in \(S\), and the conditions in Definition 4 hold trivially. Otherwise, for all \((s,t)\in\mathcal{R}\) such that \(\overline{s}\xrightarrow{w}s\) and \(\overline{t}\xrightarrow{w}t\) it holds: 1. whenever \(t\xrightarrow{a^{\prime}}t^{\prime}\in\Delta_{T}\): 1. if \(a^{\prime}\in f_{T}^{\Box}(w)\), by hypothesis it holds \(a^{\prime}\neq a\). Since \((s,t)\in\mathcal{R}\), it holds \(a^{\prime}\in f_{S^{\prime}}^{\Box}(w)\), \(s\xrightarrow{a^{\prime}}s^{\prime}\in\Delta_{S^{\prime}}\) and \((s^{\prime},t^{\prime})\in\mathcal{R}\); 2.
if \(a^{\prime}\in f_{T}^{\Diamond}(w)\) and \(a^{\prime}\in f_{S}(w)\), then \(a^{\prime}\in f_{S^{\prime}}(w)\) and, by \((s,t)\in\mathcal{R}\), it holds \(s\xrightarrow{a^{\prime}}s^{\prime}\in\Delta_{S^{\prime}}\) and \((s^{\prime},t^{\prime})\in\mathcal{R}\); 3. if \(a^{\prime}\in f_{T}^{\Diamond}(w)\) and \(a^{\prime}\not\in f_{S}(w)\), then by construction also \(a^{\prime}\not\in f_{S^{\prime}}(w)\); 2. whenever \(s\xrightarrow{a^{\prime}}s^{\prime}\in\Delta_{S^{\prime}}\), then by construction \(a^{\prime}\in f_{S}(w)\), thus \(s\xrightarrow{a^{\prime}}s^{\prime}\in\Delta_{S}\), and since \((s,t)\in\mathcal{R}\), we have \(t\xrightarrow{a^{\prime}}t^{\prime}\in\Delta_{T}\) and \((s^{\prime},t^{\prime})\in\mathcal{R}\).

We are now ready to prove the main result of this section, the completeness of \(\preceq_{n}\).

Theorem 3 (\(\preceq_{n}\) completeness): _Let \(S\) and \(T\) be two NMTS. \(Impl_{n}(S)\subseteq Impl_{n}(T)\) implies \(S\preceq_{n}T\)._

Proof: Let \(I_{S}=(Q_{S},A_{S},\overline{s},\Delta_{I_{S}},f_{I_{S}}^{\Box},f_{I_{S}}^{\Diamond})\), where for all \(w\in A_{S}^{*}\), \(f_{I_{S}}^{\Box}(w)=f_{S}(w)\) and \(f_{I_{S}}^{\Diamond}(w)=\emptyset\), be the implementation obtained from \(S\) by repeatedly applying until exhaustion the assert operation from Lemma 3. By Lemma 3, \(I_{S}\preceq_{n}S\), therefore \(I_{S}\in Impl_{n}(S)\). By hypothesis, \(I_{S}\preceq_{n}T\). Note that \(I_{S}\) is an implementation since \(\Delta_{I_{S}}^{\Diamond}=\emptyset\), whilst \(\Delta_{I_{S}}^{\Box}=\Delta_{S}\). Let \(I_{S}^{\prime}=(Q_{S}^{\prime},A_{S}^{\prime},\overline{s},\Delta_{I_{S}^{\prime}},f_{I_{S}^{\prime}}^{\Box},f_{I_{S}^{\prime}}^{\Diamond})\) be the implementation computed from \(S\) by repeatedly applying until exhaustion the remove operation from Lemma 3. It holds that for all \(w\in A_{S}^{*}\), \(f_{I_{S}^{\prime}}^{\Box}(w)=f_{S}^{\Box}(w)\), \(f_{I_{S}^{\prime}}^{\Diamond}(w)=\emptyset\), \(\Delta_{I_{S}^{\prime}}^{\Diamond}=\emptyset\) and \(\Delta_{I_{S}^{\prime}}^{\Box}=\Delta_{S}^{\Box}\). By Lemma 3, it holds that \(I_{S}^{\prime}\preceq_{n}S\), therefore \(I_{S}^{\prime}\in Impl_{n}(S)\). By hypothesis \(I_{S}^{\prime}\preceq_{n}T\). For any \(w\in A_{S}^{*}\) such that \(\overline{s}\xrightarrow{w}s\), by Lemma 1 we have \(f_{T}^{\Box}(w)\subseteq f_{I_{S}^{\prime}}^{\Box}(w)=f_{S}^{\Box}(w)\), and since \(f_{S}^{\Diamond}(w)\cap f_{S}^{\Box}(w)=\emptyset\) it holds \(f_{S}^{\Diamond}(w)\cap f_{T}^{\Box}(w)=\emptyset\). If for all \(w\in A_{S}^{*}\) such that \(\overline{s}\xrightarrow{w}s\) it holds \(f_{S}^{\Diamond}(w)=\emptyset\), then since \(f_{S}^{\Diamond}(w)\cap f_{S}^{\Box}(w)=\emptyset\) we have that \(S=I_{S}\) and the thesis follows. Hence, assume that for some \(s\in Q_{S}\) such that \(\overline{s}\xrightarrow{w}s\), \(w\in A_{S}^{*}\), it holds \(f_{S}^{\Diamond}(w)\neq\emptyset\). We perform two nested iteration loops. In the external loop, we iterate on the states in the set \(P=\{s\mid s\in Q_{S},w\in A_{S}^{*},\overline{s}\xrightarrow{w}s,f_{S}^{\Diamond}(w)\neq\emptyset\}\). We start by selecting an \(s^{1}\in Q_{S}\) and \(w^{1}\in A_{S}^{*}\) such that \(\overline{s}\xrightarrow{w^{1}}s^{1}\) and \(f_{S}^{\Diamond}(w^{1})\neq\emptyset\).
In the internal loop, for each selected state, we iterate on the actions \(a\in f_{S}^{\Diamond}(w^{1})\). We pick an action \(a\in f_{S}^{\Diamond}(w^{1})\); thus \(a\not\in f_{T}^{\Box}(w^{1})\) and \(a\in f_{I_{S}}^{\Box}(w^{1})\). From \(I_{S}\), \(T\), and \(a\), by applying Lemma 4 we obtain an NMTS \(S_{1}^{1}=(Q_{S},A_{S},\overline{s},\Delta_{S_{1}^{1}},f_{S_{1}^{1}}^{\Box},f_{S_{1}^{1}}^{\Diamond})\) such that for all \(w^{\prime}\) with \(\overline{s}\not\xrightarrow{w^{\prime}}s^{1}\) it holds \(f_{S_{1}^{1}}^{\Box}(w^{\prime})=f_{I_{S}}^{\Box}(w^{\prime})\) and \(f_{S_{1}^{1}}^{\Diamond}(w^{\prime})=f_{I_{S}}^{\Diamond}(w^{\prime})=\emptyset\). Furthermore, \(f_{S_{1}^{1}}^{\Box}(w^{1})=f_{I_{S}}^{\Box}(w^{1})\setminus\{a\}\) and \(f_{S_{1}^{1}}^{\Diamond}(w^{1})=\{a\}\). By Lemma 4, since \(I_{S}\preceq_{n}T\) and \(a\in f_{I_{S}}^{\Box}(w^{1})\setminus f_{T}^{\Box}(w^{1})\), it holds \(S_{1}^{1}\preceq_{n}T\). We re-iterate (internal iteration) and pick the next action. From \(S_{1}^{1}\), \(T\), and an action \(b\in f_{S}^{\Diamond}(w^{1})\), \(b\not\in f_{T}^{\Box}(w^{1})\), such that \(b\neq a\) (hence \(b\in f_{S_{1}^{1}}^{\Box}(w^{1})\)), we build an NMTS \(S_{1}^{2}=(Q_{S},A_{S},\overline{s},\Delta_{S_{1}^{2}},f_{S_{1}^{2}}^{\Box},f_{S_{1}^{2}}^{\Diamond})\) such that for all \(w^{\prime}\) with \(\overline{s}\not\xrightarrow{w^{\prime}}s^{1}\) it holds \(f_{S_{1}^{2}}^{\Box}(w^{\prime})=f_{S_{1}^{1}}^{\Box}(w^{\prime})\) and \(f_{S_{1}^{2}}^{\Diamond}(w^{\prime})=f_{S_{1}^{1}}^{\Diamond}(w^{\prime})\), and \(f_{S_{1}^{2}}^{\Box}(w^{1})=f_{S_{1}^{1}}^{\Box}(w^{1})\setminus\{b\}\), \(f_{S_{1}^{2}}^{\Diamond}(w^{1})=f_{S_{1}^{1}}^{\Diamond}(w^{1})\cup\{b\}=\{a,b\}\). By Lemma 4, since \(S_{1}^{1}\preceq_{n}T\), \(b\in f_{S_{1}^{1}}^{\Box}(w^{1})\) and \(b\not\in f_{T}^{\Box}(w^{1})\), it holds \(S_{1}^{2}\preceq_{n}T\). We re-iterate (internal iteration) for all actions in \(f_{S}^{\Diamond}(w^{1})\). We obtain an NMTS \(S_{1}^{n}=(Q_{S},A_{S},\overline{s},\Delta_{S_{1}^{n}},f_{S_{1}^{n}}^{\Box},f_{S_{1}^{n}}^{\Diamond})\), where \(n=|f_{S}^{\Diamond}(w^{1})|\), such that \(f_{S_{1}^{n}}^{\Box}(w^{1})=f_{S}^{\Box}(w^{1})\) and \(f_{S_{1}^{n}}^{\Diamond}(w^{1})=f_{S}^{\Diamond}(w^{1})\). We repeat the (external) iteration for all states in \(P\). At the second (external) iteration, we pick a state \(s^{2}\in P\) and a sequence of actions \(w^{2}\in A_{S}^{*}\) such that \(\overline{s}\xrightarrow{w^{2}}s^{2}\), \(f_{S}^{\Diamond}(w^{2})\neq\emptyset\) and \(\overline{s}\not\xrightarrow{w^{2}}s^{i}\) for all \(i<2\). If this last condition is not satisfied (i.e., for all \(w^{2}\in A_{S}^{*}\) it holds that \(\overline{s}\xrightarrow{w^{2}}s^{i}\) for some \(i<2\)), then we skip this iteration and continue with the next (the external counter is incremented nonetheless). We pick an action \(a\in f_{S}^{\Diamond}(w^{2})\); thus \(a\not\in f_{T}^{\Box}(w^{2})\) and \(a\in f_{S_{1}^{n}}^{\Box}(w^{2})=f_{I_{S}}^{\Box}(w^{2})\).
From \(S_{1}^{n}\), \(T\), and \(a\) we build an NMTS \(S_{2}^{1}=(Q_{S},A_{S},\overline{s},\Delta_{S_{2}^{1}},f_{S_{2}^{1}}^{\Box},f_{S_{2}^{1}}^{\Diamond})\) such that for all \(w^{\prime}\) with \(\overline{s}\not\xrightarrow{w^{\prime}}s^{2}\) it holds \(f_{S_{2}^{1}}^{\Box}(w^{\prime})=f_{S_{1}^{n}}^{\Box}(w^{\prime})\) and \(f_{S_{2}^{1}}^{\Diamond}(w^{\prime})=f_{S_{1}^{n}}^{\Diamond}(w^{\prime})\), and \(f_{S_{2}^{1}}^{\Box}(w^{2})=f_{S_{1}^{n}}^{\Box}(w^{2})\setminus\{a\}\), \(f_{S_{2}^{1}}^{\Diamond}(w^{2})=\{a\}\). By Lemma 4, since \(S_{1}^{n}\preceq_{n}T\), \(a\in f_{S_{1}^{n}}^{\Box}(w^{2})\) and \(a\not\in f_{T}^{\Box}(w^{2})\), it holds \(S_{2}^{1}\preceq_{n}T\). At the end of the second (external) iteration we obtain an NMTS \(S_{2}^{m}\), where \(m=|f_{S}^{\Diamond}(w^{2})|\), such that \(f_{S_{2}^{m}}^{\Box}(w^{i})=f_{S}^{\Box}(w^{i})\) and \(f_{S_{2}^{m}}^{\Diamond}(w^{i})=f_{S}^{\Diamond}(w^{i})\) for \(i\in\{1,2\}\). Every subsequent (external) iteration starts by reusing the last NMTS computed at the previous (external) iteration. Let \(o=|P|\) (recall that we incremented the external counter also when skipping some element of \(P\)), and let \(p=|f_{S}^{\Diamond}(w^{o})|\), where \(w^{o}\) is the trace selected at the last, \(o\)-th iteration of the procedure. The returned NMTS \(S_{o}^{p}=(Q_{S},A_{S},\overline{s},\Delta_{S_{o}^{p}},f_{S_{o}^{p}}^{\Box},f_{S_{o}^{p}}^{\Diamond})\) is such that for all \(w\in A_{S}^{*}\), \(f_{S_{o}^{p}}^{\Box}(w)=f_{S}^{\Box}(w)\) and \(f_{S_{o}^{p}}^{\Diamond}(w)=f_{S}^{\Diamond}(w)\), and by Definition 3, \(\Delta_{S_{o}^{p}}=\Delta_{S}\). Therefore, \(S_{o}^{p}=S\). It follows that \(S\preceq_{n}T\). A practical consequence of Theorem 3 is that the complexity of deciding \(Impl_{n}(S)\subseteq Impl_{n}(T)\) is equivalent to the complexity of deciding \(S\preceq_{n}T\).
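To make the decision procedure of Definition 4 concrete, the following Python sketch spells out the three refinement conditions as a naive trace-based check. The dictionary encoding of an NMTS, the name `nmts_refines`, and the assumption that the systems are finite and acyclic (so that the recursion terminates) are our own illustration, not part of the formal development; `fbox(w)` and `fdia(w)` are assumed to be supplied as callables, as in the complexity remark above.

```python
def nmts_refines(S, T):
    """Naive trace-based check of the conditions of Definition 4.

    S and T are dicts with keys 'init', 'must', 'may', 'fbox', 'fdia':
    must[q] / may[q] map a state q to a set of (action, successor) pairs,
    and fbox(w) / fdia(w) return the necessary / optional actions
    reachable via the trace w (the functions of Definition 3).
    Assumes finite acyclic systems so that all traces are finite.
    """
    seen = set()

    def trans(X, q):                      # all outgoing transitions of q
        return X['must'][q] | X['may'][q]

    def check(s, t, w):
        if (s, t, w) in seen:             # triple already being verified
            return True
        seen.add((s, t, w))
        # condition 1: necessary transitions of T are matched by
        # necessary transitions of S after the same trace w
        for a, t1 in T['must'][t]:
            if a not in S['fbox'](w):
                return False
            if not any(check(s1, t1, w + (a,))
                       for b, s1 in S['must'][s] if b == a):
                return False
        # condition 2: optional transitions of T are either absent from
        # S after w, or matched by some transition of S
        for a, t1 in T['may'][t]:
            if a in S['fbox'](w) | S['fdia'](w):
                if not any(check(s1, t1, w + (a,))
                           for b, s1 in trans(S, s) if b == a):
                    return False
        # condition 3: every transition of S is matched by T
        for a, s1 in trans(S, s):
            if a not in T['fbox'](w) | T['fdia'](w):
                return False
            if not any(check(s1, t1, w + (a,))
                       for b, t1 in trans(T, t) if b == a):
                return False
        return True

    return check(S['init'], T['init'], ())
```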
## 4 Non-determinism of NMTS is Non-reducible

In the previous section, we showed how the further constraints imposed on MTS and their refinement (i.e., Definition 3 and Definition 4) are necessary and sufficient to obtain a sound and complete refinement relation. An important challenge discussed in [12] is to argue that the considered set of implementations is also interesting from a practical point of view (i.e., no valuable implementation is disregarded by \(\preceq_{n}\)). In this section we discuss a property concerning non-deterministic optional transitions that is violated by all implementations accepted by \(\preceq_{m}\) and discarded by \(\preceq_{n}\). MTS allow us to express transitions that must be enabled in all implementations. In this case, the presence of non-determinism is unaltered in all implementations, because must transitions cannot be disabled. Conversely, optional transitions can be arbitrarily disabled, and in MTS the non-determinism in optional branches can be reduced or fully resolved. However, the standard semantics of MTS does not provide the means to specify actions that are susceptible to both enablement and disablement, yet inevitably yield non-deterministic outcomes. This issue arises since any action capable of being deactivated (i.e., an optional action) also opens the possibility of diminishing its associated non-determinism. We term the property stating that all optional actions in an MTS can be enabled or disabled, while retaining their irreducible non-determinism, _non-reducible non-determinism_. In formal specifications expressed as MTS, non-determinism is commonly used to express under-specifications. This variant of non-determinism does not necessitate preservation across all implementations of an MTS. Consequently, modal refinement can reduce the non-determinism to fully determine a specification, i.e., modal refinement does not satisfy the non-reducible non-determinism property (see, e.g., Figure 5). There exists a distinction between non-determinism present in all implementations (as shown in the next example) and the non-determinism that characterizes under-specifications. However, both these forms of non-determinism are expressed in an identical way within MTS. This inherent ambiguity contributes to the incompleteness of modal refinement. To address this, we assume that the non-deterministic behaviour of MTS is always preserved across all implementations, thereby eliminating non-determinism as a source of under-specification. Consequently, NMTS refinement satisfies the property of non-reducible non-determinism, whilst this is not the case for modal refinement. In the following, we discuss an example showcasing the need to establish the non-reducible non-determinism property.

Consider the NMTS in Figure 6 (left). This NMTS serves as a model for a coin toss game. We visualize the actions of this NMTS as buttons that light up and can only be pressed when the respective action becomes enabled. Upon pressing an enabled button, the associated action is carried out. Initially, only one action, namely \(toss\), is enabled. Upon executing the \(toss\) action, the outcome can result in either head or tail. If the outcome is head, the \(win\) action becomes enabled, while in the case of tail, the \(lose\) action is enabled. Upon the execution of either \(win\) or \(lose\), the NMTS reverts back to its initial state. The NMTS does not specify whether the coin is biased. The \(toss\) action exemplifies the property of non-reducible non-determinism. In essence, any implementation that enables the \(toss\) action must consistently manifest the same non-deterministic behavior. Notably, the \(toss\) action can be deactivated. An implementation that restricts the coin's outcomes solely to either heads or tails is considered invalid. However, under the standard modal refinement \(\preceq_{m}\), such invalid implementations are deemed acceptable.

Figure 6: From left to right: the NMTS modelling a coin toss game, an NMTS implementation allowing infinite plays, and an NMTS implementation allowing one play.

Figure 6 (center and right) depicts two valid implementations of the NMTS. In one, an indefinite number of plays is feasible, while in the other, only a single play is permitted. Both these implementations preserve the non-deterministic nature of the \(toss\) action. The introduced NMTS refinement (see Definition 4) exclusively permits implementations like the ones showcased in Figure 6, whilst forbidding invalid implementations such as those forcing the coin to only return either head or tail. Clearly, by relaxing the constraints in either Definition 3 or Definition 4, it is possible to define systems whose implementations may violate the non-reducible non-determinism property (see Section 3).

## 5 Related Work

MTS and their dialects are widely studied in the literature. Given two MTS \(S\) and \(T\), \(S\) is a thorough refinement of \(T\) whenever the set of implementations of \(S\) is included in the set of implementations of \(T\).
In [12], four different refinement relations are studied extensively, including thorough refinement, and an MTS is said to be _consistent_ if it admits at least one non-empty implementation. MTS that allow inconsistent specifications, where transitions can be necessary but not permitted, are called Mixed Transition Systems [8, 1]. In [7, Corollary 4.6], it is proved that, similarly to modal refinement, thorough refinement is decidable in polynomial time for deterministic MTS, whilst thorough refinement is decidable in EXPTIME for non-deterministic MTS. The authors describe a tableau-style algorithm [7, Section 6] for deciding thorough refinement, which runs in exponential time in the worst case. While thorough refinement does not always imply modal refinement of MTS, in [6, Lemma 3.6] it is proved that thorough refinement implies modal refinement of a deterministic overapproximation of (non-deterministic) MTS. In [12, Theorem 3], it is proved that any alternative notion \(\preceq_{alt}\) of modal refinement that is both sound and complete cannot be decided in polynomial time unless \(P=NP\). This is obtained by reducing the problem of deciding thorough refinement to the problem of deciding whether a 3-DNF formula is a tautology. However, in this case, thorough refinement considers all implementations obtained through modal refinement \(\preceq_{m}\), and not only those obtained using the alternative notion \(\preceq_{alt}\). The problem of proposing an alternative notion of modal refinement that is both sound and complete with respect to its set of implementations is left open [12]. The main challenge is to argue that the considered set of implementations is also interesting from a practical point of view. In this paper, we addressed this challenge and discussed how all implementations retained by \(\preceq_{m}\) and discarded by \(\preceq_{n}\) violate the non-reducible non-determinism property (see Section 4). Parametric MTS (PMTS) [5, 11, 4] were introduced to enhance the expressiveness of MTS. PMTS are LTS equipped with an obligation function \(\Phi\), which is a parametric Boolean proposition over the outgoing transitions from each state. The satisfying assignments of \(\Phi\) yield the allowed combinations of outgoing transitions. When \(\Phi\) is not parametric, PMTS are called Boolean MTS (BMTS). PMTS are capable of expressing, among others, _persistent_ choices (i.e., once some outgoing transition is enabled, it must be enabled also everywhere else). It is shown that MTS are a special case of BMTS, and that BMTS are a special case of PMTS. Rather than extending MTS, in this paper we presented a subset of MTS for which a sound and complete refinement relation is proposed. Thorough refinement is computable in NEXPTIME for both BMTS and PMTS, while we show in this paper that thorough refinement is polynomial for NMTS. Modal refinement of MTS, BMTS, and PMTS is not complete, whereas we show in this paper that NMTS refinement is complete (Theorem 3). The deterministic variants of PMTS and BMTS are called, respectively, DPMTS and DBMTS. When restricting to only deterministic systems, similarly to NMTS, DBMTS modal refinement is also complete, whereas DPMTS modal refinement is still not complete. In [2, 3], Coherent MTS (CMTS) are introduced as a model for software product lines (SPL). In CMTS, the features of an SPL are identified with the actions of an MTS.
Therefore, in CMTS an action cannot be the label of both a necessary and an optional transition, since a feature is either mandatory or optional. The notion of 'consistent' product derivation requires that whenever an optional transition is discarded in an implementation, all transitions sharing the same label must also be discarded. This consistency requirement mimics the aforementioned persistency of PMTS [5, 11, 4], and it is not to be confused with the above-mentioned notion of consistency as studied in [12]. In [2] the refinement of CMTS is presented, which is demonstrated to be both sound and complete in relation to its set of implementations. CMTS and their refinement [2] are an important milestone in addressing the long-standing problem proposed at CONCUR 2007 [12]. NMTS and CMTS are currently the only available subsets of MTS that possess the capacity to preserve both non-deterministic specifications and completeness of the refinement relation. In contrast, all the other MTS extensions mentioned above do not possess completeness of refinement in the case of non-deterministic specifications. In CMTS, by interpreting SPL features as MTS actions, 'consistency' and 'coherence' are enforced globally across all system states. NMTS are a generalization of CMTS. In NMTS, the SPL-derived limitations are discarded (i.e., actions are not interpreted as features of an SPL). The constraints that in CMTS are applied globally, in NMTS are instead applied exclusively to the set of states reachable through the same sequence of actions. Consequently, NMTS strictly include CMTS while introducing a refinement concept that remains sound and complete. Differently from the restrictions imposed by CMTS and their refinement, in Section 3 we identified the restrictions imposed by NMTS and their refinement as necessary to achieve completeness of the refinement relation. Furthermore, in Section 4 we presented a property that is violated by all implementations discarded by the NMTS refinement relation but accepted through modal refinement.

## 6 Conclusion

We have introduced a subset of Modal Transition Systems (MTS) called Non-reducible MTS (NMTS) and their refinement relation (\(\preceq_{n}\)). In NMTS, states reached through the execution of identical action sequences are related. Outgoing transitions from related states that are labeled by the same action also exhibit the same modality. Disabling an optional transition within a refinement results in the deactivation of all transitions that share the same action label and are outgoing from related states. We showed that these two conditions are necessary to achieve completeness. If either of these conditions is relaxed, it becomes possible to construct two systems that are not in refinement relation, yet whose respective sets of implementations still maintain a relation of set inclusion. We proved that \(\preceq_{n}\) is both sound and complete with respect to its set of implementations. By interpreting the optional non-determinism present in MTS as non-reducible (i.e., non-deterministic behaviour within MTS is consistently maintained in all implementations), we have shown that all implementations permitted by \(\preceq_{m}\) (modal refinement) but rejected by \(\preceq_{n}\) are considered invalid.

Future work: In Section 4, we investigated optional non-determinism of MTS, which can be interpreted in two distinct ways: as either under-specifications or optional actions with irreducible non-determinism across all implementations.
To resolve this ambiguity and address the challenge posed by [12], we opted to associate the latter interpretation with non-deterministic optional actions. However, this decision brings forth a new challenge: introducing the means to express under-specifications while preserving the completeness of the refinement relation requires further investigation.
2307.02296
Bayesian evidence for spectral lag transition due to Lorentz Invariance Violation for 32 Fermi/GBM Gamma-ray Bursts
We use the spectral lag data of 32 long GRBs detected by Fermi/GBM, which has been recently collated in Liu et al (2022) to quantify the statistical significance of a transition in the spectral lag data based on Lorentz invariance violation (LIV) (for both sub-luminal and super-luminal propagation) using Bayesian model selection. We use two different parametric functions to model the null hypothesis of only intrinsic emission: a smooth broken power law model (SBPL) (proposed in Liu et al) as well as a simple power law model, which has been widely used before in literature. We find that for sub-luminal propagation, when we use the SBPL model as the null hypothesis, five GRBs show ``decisive evidence'' based on Jeffreys' scale for linear LIV and quadratic LIV. When we use the simple power-law model as the null hypothesis, we find that 10 and GRBs show Bayesian ``decisive evidence'' for linear and quadratic LIV, respectively. However these results should not be construed as evidence for LIV, as they would be in conflict with the most stringent upper limits. When we did a test for super-luminal LIV, we find that only four and two GRBs show Bayesian ``decisive evidence'' for linear and quadratic LIV, respectively, assuming a simple power law for the intrinsic emission. When we use the SBPL model, one GRB shows Bayesian ``decisive evidence'' for linear and quadratic LIV. This underscores the importance of adequately modelling the intrinsic emission while obtaining constraints on LIV using spectral lags, since inadequate modelling could masquerade as a signature of LIV.
Vibhavasu Pasumarti, Shantanu Desai
2023-07-05T13:52:29Z
http://arxiv.org/abs/2307.02296v2
Bayesian evidence for spectral lag transition due to Lorentz Invariance Violation for 32 Fermi/GBM Gamma-ray Bursts ###### Abstract We use the spectral lag data of 32 long GRBs detected by Fermi/GBM, which has been recently collated in [1], to carry out a search for Lorentz Invariance violation (LIV) using Bayesian model selection. We use two different parametric functions to model the null hypothesis of only intrinsic emission: a smooth broken power law model (SBPL) (proposed in [1]) as well as a simple power law model, which has been widely used before in the literature. We find that, using the SBPL model as the null hypothesis, only three GRBs show decisive evidence for linear LIV, of which only one shows decisive evidence for quadratic LIV. When we use the simple power-law model as the null hypothesis, we find 15 and 16 GRBs showing decisive evidence for linear and quadratic LIV, respectively. Finally, when we apply the SBPL model to model the intrinsic emission in GRB 160625B, the evidence for LIV (which was previously reported using the simple power law model) disappears. This underscores the importance of adequately modelling the intrinsic emission while searching for evidence of LIV using spectral lags. ## I Introduction In various theoretical scenarios beyond the Standard Model of Particle Physics, Lorentz Invariance is not an exact symmetry at energies close to the Planck scale (\(E_{pl}\sim 10^{19}\) GeV), and the speed of light \(v(E)\) varies as a function of energy according to [2]: \[v(E)=c\left[1-s_{\pm}\frac{n+1}{2}\left(\frac{E}{E_{QG}}\right)^{n}\right], \tag{1}\] where \(s_{\pm}=\pm 1\) corresponds to either sub-luminal (\(s_{\pm}=+1\)) or super-luminal (\(s_{\pm}=-1\)) Lorentz Invariance Violation (LIV); \(E_{QG}\) denotes the energy scale where LIV effects dominate, and \(n\) represents the order of the modification of the photon group velocity. In all LIV searches in the literature, the series expansion is usually restricted to linear (\(n=1\)) or quadratic corrections (\(n=2\)). Both linear and quadratic LIV models are predicted by different theoretical approaches [3]. For more than two decades, Gamma-Ray Bursts (GRBs) have been a very powerful probe for LIV searches. GRBs are single-shot explosions, first detected in the 1960s, which have been observed over ten decades in energy, from the keV to the \(>10\) TeV range [4; 5]. They are located at cosmological distances, although a distinct time-dilation signature in the light curves is yet to be demonstrated [6]. GRBs are traditionally divided into two categories based on their durations, with long (short) GRBs lasting more (less) than two seconds [7]. Long GRBs are usually associated with core-collapse SN [8] and short GRBs with neutron star mergers [9]. There are however many exceptions to this conventional dichotomy, and many claims for additional GRB sub-classes have also been made [10; 11] (and references therein). The observable used in almost all the LIV searches with GRBs consists of spectral lags, defined as the arrival time difference between high energy and low energy photons, which is positive if the high energy photons precede the low energy ones. Searches for LIV with spectral lags have been done using single lags from different GRBs (for example [12]), multiple spectral lags from the same GRB (GRB 160625B, GRB 190114C, GRB 190530A) [13; 14; 15; 16], as well as stacking multiple spectral lags from multiple GRBs [17].
A comprehensive, up-to-date review of all searches for LIV using GRB spectral lags can be found in [18; 19; 20]. Most recently, Liu et al [1] (L22, hereafter) did a comprehensive study of LIV (assuming sub-luminal propagation with \(s_{\pm}=+1\)) using spectral lags of 32 long GRBs detected by Fermi/GBM. Most of the GRBs studied in L22 had a turn-over in the spectral lag data. The intrinsic model which they used consisted of a smooth broken power law as a function of energy. L22 then obtained limits on LIV for both a linear and quadratic model of LIV for each of the 32 GRBs. The characteristic limits they obtained were \(E_{QG}\gtrsim 1.5\times 10^{14}\) GeV and \(E_{QG}\gtrsim 8\times 10^{5}\) GeV for linear and quadratic LIV, respectively. In this work, we supplement the analysis in L22 by calculating the significance of both the models of LIV, compared to only the intrinsic astrophysical emission, using Bayesian model comparison, similar to our past works [15; 17; 21]. We also test the efficacy of the more prosaic power-law model, which has been used extensively in many previous LIV searches starting from Ref. [13], and compare the two sets of results. The outline of this manuscript is as follows. We discuss the GRB dataset used for this work in Sect. II and our analysis procedure in Sect. III. A brief primer on Bayesian model comparison is given in Sect. IV. Our results are outlined in Sect. V and we conclude in Sect. VI. ## II Dataset We briefly describe the data analysis procedure in L22, where more details can be found. The sample chosen in L22 consists of 32 long GRBs in the redshift range \(z\in[0.54,4.35]\) chosen from the Fermi/GBM catalogue [22]. We use the spectral lag data of the 32 GRBs collated in L22, which has been kindly provided to us. These data consist of the observed energy (in keV) and the corresponding observed spectral lag, along with the uncertainty in the lag (in s). The extraction of light curves followed by the spectral lag calculation were done using the methods described in [23; 24; 25]. The spectral lags were calculated using distinct energy bands as well as bin sizes. The data for all the 32 GRBs, including bin size, number of energy bands, energy range used for spectral lag calculation, redshift, and time interval, can be found in Table 1 of L22. Each GRB contains about 15-20 spectral lag data points in the 10-1000 keV energy range. We note that spectral lags for two of the GRBs in this catalog, namely GRB 160625B and GRB 190114C, have been reported before and used to search for LIV [13; 14]. However, in the previous works the spectral lag data for GRB 160625B and GRB 190114C were binned in energy, whereas the L22 spectral lag data have been provided at specific energies. ## III Analysis In this analysis, we follow the same procedure as L22 to predict the spectral lag of GRBs. The observed spectral lag is given by: \[\Delta t_{obs}=\Delta t_{int}+\Delta t_{LIV}, \tag{2}\] where \(\Delta t_{int}\) is the intrinsic lag from the GRB radiation, which is purely astrophysical, and \(\Delta t_{LIV}\) is the contribution from LIV. Since the exact physical process responsible for the intrinsic lag is not yet known and could vary based on the GRB, we use two different models for the intrinsic emission.
The first model which we use is the smoothly broken power law (SBPL) proposed in L22, which can be written as: \[\Delta t_{int}=\zeta\left(\frac{E-E_{0}}{E_{b}}\right)^{\alpha_{1}}\left(\frac{1}{2}\bigg{[}1+\left(\frac{E-E_{0}}{E_{b}}\right)^{\frac{1}{\mu}}\bigg{]}\right)^{(\alpha_{2}-\alpha_{1})\mu}, \tag{3}\] where \(\zeta\) is the normalization parameter, \(E_{b}\) is the transition energy, \(\alpha_{1}\) and \(\alpha_{2}\) are the slopes before and after \(E_{b}\), and \(\mu\) is the transition smoothness. This SBPL reduces to a single power law for \(\alpha_{1}=\alpha_{2}\). The SBPL enables us to account for the negative lags observed in the data [1]. We also use the power-law model first proposed in [13], which has been used in a number of works on LIV [13; 14; 15; 17; 21; 26; 27; 28] and was motivated based on the analysis of single-pulse properties of about 50 GRBs [29]: \[\Delta t_{int}=(1+z)\tau\Big{[}\Big{(}\frac{E}{keV}\Big{)}^{\alpha}-\Big{(}\frac{E_{0}}{keV}\Big{)}^{\alpha}\Big{]}. \tag{4}\] The model for the spectral lag originating from LIV is the same as the one used in [30] and is given by: \[\Delta t_{\rm LIV}=-\frac{1+n}{2H_{0}}\frac{E^{n}-E_{0}^{n}}{E_{\rm QG,n}^{n}}\int_{0}^{z}\frac{\left(1+z^{\prime}\right)^{n}dz^{\prime}}{\sqrt{\Omega_{\rm m}\left(1+z^{\prime}\right)^{3}+\Omega_{\Lambda}}}. \tag{5}\] Note that the above equation assumes that the expansion history of the universe is described by the \(\Lambda\)CDM model. Other expansion histories, as well as non-parametric methods to model the expansion history, have also been considered [14; 17; 21; 31] (and references therein). However, it has been found that the final results do not change much compared to using the \(\Lambda\)CDM expansion history [21]. Therefore, for this work we use Eq. 5 to calculate the lag due to LIV. The cosmological parameters which we use are \(H_{0}=67.36\) km/sec/Mpc, \(\Omega_{m}=0.315\), \(\Omega_{\Lambda}=0.695\), which are the same as those used in L22 and based on the Planck 2020 cosmological parameters [32].
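As an illustration, the following is a minimal numerical sketch of the LIV-induced lag of Eq. 5 (sub-luminal case). This is our own sketch, not code from L22: the function name `dt_liv`, the unit conventions, and the use of `scipy.integrate.quad` are our choices, and the cosmological parameters are the ones quoted above.

```python
import numpy as np
from scipy.integrate import quad

H0 = 67.36 * 1.0e3 / 3.0857e22        # 67.36 km/s/Mpc converted to 1/s
OM, OL = 0.315, 0.695                 # Omega_m, Omega_Lambda as quoted in the text

def dt_liv(E_keV, E0_keV, z, E_QG_GeV, n=1):
    """LIV-induced lag of Eq. 5, in seconds (sub-luminal propagation)."""
    # convert energies to GeV so that E/E_QG is dimensionless
    E, E0 = np.asarray(E_keV) * 1e-6, E0_keV * 1e-6
    # dimensionless cosmological integral over redshift
    K, _ = quad(lambda zp: (1.0 + zp) ** n
                / np.sqrt(OM * (1.0 + zp) ** 3 + OL), 0.0, z)
    return -(1 + n) / (2.0 * H0) * (E ** n - E0 ** n) / E_QG_GeV ** n * K
```

For example, `dt_liv(100.0, 10.0, 0.42, 1e14, n=1)` evaluates the linear-LIV lag between 100 keV and 10 keV photons for a source at \(z=0.42\) with \(E_{QG}=10^{14}\) GeV.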
## IV Bayesian model comparison We evaluate the significance of any LIV using Bayesian model comparison. We provide a very brief prelude to Bayesian model comparison; more details can be found in recent reviews [33; 34; 35; 36]. To evaluate the significance of a model (\(M_{2}\)) as compared to another model (\(M_{1}\)), one usually calculates the Bayes factor (\(B_{21}\)) given by: \[B_{21}=\frac{\int P(D|M_{2},\theta_{2})P(\theta_{2}|M_{2})\,d\theta_{2}}{\int P(D|M_{1},\theta_{1})P(\theta_{1}|M_{1})\,d\theta_{1}}, \tag{6}\] where \(P(D|M_{2},\theta_{2})\) is the likelihood for the model \(M_{2}\) given the data \(D\) and \(P(\theta_{2}|M_{2})\) denotes the prior on the parameter vector \(\theta_{2}\) of the model \(M_{2}\). The denominator in Eq. 6 denotes the same for model \(M_{1}\). If \(B_{21}\) is greater than one, then \(M_{2}\) is preferred over \(M_{1}\), and vice-versa. The significance can be qualitatively assessed using the Jeffreys' scale [33]. In the present paper, the model \(M_{1}\) corresponds to the hypothesis where the spectral lags are produced only by intrinsic astrophysical emission (Eq. 3 or Eq. 4), whereas \(M_{2}\) corresponds to the lags being described by Eq. 2, consisting of both intrinsic and LIV delays. To calculate the Bayes factor, we need a model for the likelihood (\(\mathcal{L}\)), which we define as: \[\mathcal{L}=\prod_{i=1}^{N}\frac{1}{\sigma_{t}\sqrt{2\pi}}\exp\left\{-\frac{[\Delta t_{i}-f(\Delta E_{i},\theta)]^{2}}{2\sigma_{t}^{2}}\right\}, \tag{7}\] where \(N\) is the total number of spectral lags per GRB, \(\Delta t_{i}\) denotes the observed spectral lag data, and \(\sigma_{t}\) denotes the uncertainty in the observed spectral lags. In this expression, \(f\) corresponds to the particular model being tested, which could be either one of the two LIV models or the null hypothesis of only astrophysical emission. Finally, to evaluate Eq. 6, we need the priors for the three models. We have used uniform priors for all the intrinsic parameters, and log-uniform priors on \(E_{QG}\). The prior ranges for all these parameters, for both the LIV and the two intrinsic models considered, can be found in Table 1. \begin{table} \begin{tabular}{c c c} Quantity & Min & Max \\ \hline \(E_{b}\)(keV) & 0 & 5000 \\ \(\alpha_{1}\) & -3 & 10 \\ \(\alpha_{2}\) & -10 & 3 \\ \(\mu\) & 0 & 3 \\ \(\zeta\) & 0 & 4 \\ \(\log_{10}(E_{QG_{1}}/GeV)\) & 0 & 20 \\ \(\log_{10}(E_{QG_{2}}/GeV)\) & 0 & 15 \\ \hline \(\alpha\) & -2 & 1 \\ \(\tau\) & -15 & 10 \\ \(\log_{10}(E_{QG}/GeV)\) & 0 & 20 \\ \hline \end{tabular} \end{table} Table 1: Summary of priors used for the two intrinsic models (Eq. 3 and Eq. 4) as well as the LIV models defined in Eq. 5. ## V Results We now calculate the Bayes factors assuming that the spectral lags can be described using a superposition of intrinsic emission along with a model of LIV, compared to only intrinsic emission. To evaluate the Bayesian evidence we use the Dynesty nested sampler [37].
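For concreteness, the following is a minimal sketch of this evidence computation for a single GRB under the power-law null hypothesis plus linear LIV (Eq. 4 plus Eq. 5). The loader `load_grb` is a hypothetical placeholder, `dt_liv` is the helper sketched in Sect. III, and the prior ranges follow Table 1; this is our own illustration, not the analysis code of this work.

```python
import numpy as np
from dynesty import NestedSampler

# Hypothetical per-GRB inputs: energies E (keV), lags dt (s), errors sig (s),
# lowest band energy E0 (keV) and redshift z.  load_grb is a placeholder.
E, dt, sig, E0, z = load_grb("GRB 190114C")

def dt_int(E, tau, alpha):
    # simple power-law intrinsic lag of Eq. 4, energies in keV
    return (1.0 + z) * tau * (E ** alpha - E0 ** alpha)

def log_like(theta):
    # Gaussian likelihood of Eq. 7 for the intrinsic + linear-LIV model
    tau, alpha, log_eqg = theta
    model = dt_int(E, tau, alpha) + dt_liv(E, E0, z, 10.0 ** log_eqg, n=1)
    return -0.5 * np.sum(((dt - model) / sig) ** 2
                         + np.log(2.0 * np.pi * sig ** 2))

def prior_transform(u):
    # map the unit cube to the priors of Table 1
    return np.array([-15.0 + 25.0 * u[0],   # tau ~ U(-15, 10)
                     -2.0 + 3.0 * u[1],     # alpha ~ U(-2, 1)
                     20.0 * u[2]])          # log10(E_QG/GeV) ~ U(0, 20)

sampler = NestedSampler(log_like, prior_transform, ndim=3)
sampler.run_nested(print_progress=False)
ln_Z = sampler.results.logz[-1]   # ln(evidence); ln BF = ln_Z - ln_Z(null)
```

Running the same machinery with the LIV term dropped (or with the SBPL parameters of Table 1) yields the null-hypothesis evidence, and the difference of the two log-evidences gives the natural log of the Bayes factor reported in the tables below.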
Along with the Bayes factor, we also check the efficacy of our fits based on the reduced \(\chi^{2}\), where the reduced \(\chi^{2}\) is the \(\chi^{2}\) divided by the total number of degrees of freedom. These values of the Bayes factor and reduced \(\chi^{2}\) for all the 32 GRBs considered can be found in Table 2 for the SBPL model as the null hypothesis. For linear LIV, we find 3 GRBs with Bayes factor \(>100\). These are GRB 190114C, GRB 130925A, and GRB 131231A. For all other GRBs, the Bayes factors are close to 1 for linear LIV. For quadratic LIV, only one of these GRBs (viz. GRB 131231A) has a Bayes factor \(>100\). For GRB 190114C and GRB 130925A, the Bayes factors reduce to 40 and 66, respectively, which roughly corresponds to very strong evidence for LIV. We also find that for most GRBs, the Bayes factor for the quadratic model of LIV is smaller than that for the linear model. The corresponding results for the simpler power-law model in Eq. 4 can be found in Table 3. In contrast to the SBPL intrinsic model, we now find 15 GRBs with Bayes factors \(>100\) for the linear LIV model. On the other hand, for the quadratic model of LIV, we find 16 GRBs with Bayes factor \(>100\). Also, for most GRBs the Bayes factor is greater for quadratic LIV as compared to linear LIV. We also find that the reduced \(\chi^{2}\) for the null hypothesis is larger while using Eq. 4, as compared to the SBPL parametrization. In order to validate this with Bayesian model comparison, we compare the Bayesian evidence for both these intrinsic models. We again use the same priors for both these models as before. The Bayes factors for the SBPL model compared to Eq. 4 can be found in Table 4 for all the 32 GRBs. We find that 22 GRBs show decisive evidence in favor of the SBPL model and two others show very strong evidence. The simpler power-law model (Eq. 4) is favored compared to the SBPL for only about six GRBs. This agrees with the conclusions in L22, who pointed out that the SBPL model is more accurate than the simple power-law model used before in the literature. Finally, it is instructive to compare the significance of LIV for GRB 190114C and GRB 160625B with previous works on the subject, which have used Eq. 4 for the null hypothesis. For GRB 190114C, the natural log of the Bayes factor was approximately 175 for linear and quadratic LIV [14]. We find the same to be about 148 and 101 for linear and quadratic LIV, respectively, using the same intrinsic model. However, when we use the SBPL as the null hypothesis, the corresponding natural logs of the Bayes factors reduce to 9.9 and 3.7, corresponding to Bayes factors of around 20,000 and 40, respectively. Therefore, we find that the quadratic LIV model no longer shows decisive evidence compared to the null hypothesis when the SBPL is used as the null hypothesis. For GRB 160625B, frequentist, information theory, and Bayesian model comparison techniques have been used to evaluate the statistical significance of LIV [15; 38]. Gunapati et al. [38] reported a natural log of the Bayes factor of 16 and 20, for linear and quadratic LIV, using the intrinsic model in Eq. 4. The corresponding values for the natural log of the Bayes factors which we get for the same null hypothesis are comparable, with values of 12 and 10 for linear and quadratic LIV, respectively. These Bayes factors correspond to decisive evidence for LIV. However, when we consider the SBPL model as the null hypothesis, the evidence for LIV completely disappears and the null hypothesis is now favored compared to both the LIV models. This underscores the importance of correctly modelling the intrinsic emission while drawing conclusions about LIV from spectral lag data. This also agrees with the conclusions in [16], who showed that some of the turn-overs in spectral lag data which were previously attributed to LIV could be explained using spectral evolution. Note that we have also uploaded the plots for the best fits for both sets of models, along with the spectral lag data for all the 32 GRBs, on GitHub, whose link is provided in Sect.
VI \begin{table} \begin{tabular}{|c|c|c|c c|c c|c c|} \hline \multirow{2}{*}{**GRB**} & \multirow{2}{*}{**E0 (keV)**} & \multirow{2}{*}{**Redshift**} & \multicolumn{2}{c|}{**Null**} & \multicolumn{2}{c|}{**Null + Linear LIV**} & \multicolumn{2}{c|}{**Null + Quadratic LIV**} \\ & & & \(\ln(\)**BF**\()\) & **Reduced**\(\chi^{2}\) & \(\ln(\)**BF**\()\) & **Reduced**\(\chi^{2}\) & \(\ln(\)**BF**\()\) & **Reduced**\(\chi^{2}\) \\ \hline GRB 210619B & 10.0 & 1.94 & 0.0 & 4.3 & -4.5 & 4.3 & 0.0 & 4.3 \\ GRB 210610B & 30.0 & 1.13 & 0.0 & 0.9 & -0.9 & 1.4 & -0.1 & 1.1 \\ GRB 210204A & 10.0 & 0.88 & 0.0 & 8.5 & -1.7 & 9.1 & -1.0 & 9.0 \\ GRB 201216C & 15.0 & 1.1 & 0.0 & 1.3 & -1.4 & 1.2 & -0.7 & 1.7 \\ GRB 200829A & 25.0 & 1.25 & 0.0 & 6.0 & 0.8 & 6.2 & -3.0 & 7.7 \\ GRB 200613A & 30.0 & 1.22 & 0.0 & 0.8 & -1.2 & 1.0 & -0.6 & 1.0 \\ GRB 190114C & 10.0 & 0.42 & 0.0 & 5.2 & 9.9 & 2.8 & 3.7 & 4.1 \\ GRB 180720B & 25.0 & 0.65 & 0.0 & 1.2 & -1.4 & 1.1 & -0.6 & 1.7 \\ GRB 180703A & 20.0 & 0.67 & 0.0 & 10.3 & -0.5 & 11.1 & -0.5 & 10.9 \\ GRB 171010A & 10.0 & 0.33 & 0.0 & 0.6 & -1.0 & 1.0 & -0.2 & 0.7 \\ GRB 160625B & 10.0 & 1.41 & 0.0 & 8.8 & -30.3 & 12.5 & -3.7 & 10.4 \\ GRB 160509A & 10.0 & 1.17 & 0.0 & 0.9 & -1.1 & 1.1 & -0.4 & 0.8 \\ GRB 150821A & 10.0 & 0.76 & 0.0 & 0.8 & -0.4 & 1.1 & 0.3 & 0.9 \\ GRB 150514A & 20.0 & 0.81 & 0.0 & 1.3 & -0.8 & 1.2 & -0.3 & 1.4 \\ GRB 150403A & 35.0 & 2.06 & 0.0 & 0.9 & -1.0 & 1.3 & -0.2 & 0.8 \\ GRB 150314A & 20.0 & 1.76 & 0.0 & 7.1 & -1.1 & 7.1 & -0.5 & 7.1 \\ GRB 141028A & 40.0 & 2.33 & 0.0 & 1.0 & -1.3 & 0.9 & -0.5 & 1.5 \\ GRB 140508A & 10.0 & 1.03 & 0.0 & 0.9 & -0.9 & 1.2 & -0.6 & 1.1 \\ GRB 140206A & 20.0 & 2.73 & 0.0 & 6.1 & -1.2 & 7.1 & -0.6 & 6.8 \\ GRB 131231A & 10.0 & 0.64 & 0.0 & 4.9 & 6.7 & 4.3 & 5.8 & 3.6 \\ GRB 131108A & 20.0 & 2.4 & 0.0 & 6.3 & -1.3 & 6.5 & -0.6 & 7.4 \\ GRB 130925A & 10.0 & 0.35 & 0.0 & 4.5 & 5.4 & 3.7 & 4.2 & 3.8 \\ GRB 130518A & 10.0 & 2.49 & 0.0 & 2.9 & -1.2 & 3.0 & -1.0 & 3.0 \\ GRB 130427A & 10.0 & 0.34 & 0.0 & 0.6 & -2.1 & 0.6 & -1.6 & 0.3 \\ GRB 120119A & 25.0 & 1.73 & 0.0 & 1.0 & -1.2 & 1.0 & -0.7 & 1.0 \\ GRB 100728A & 40.0 & 1.57 & 0.0 & 2.6 & -0.8 & 2.6 & -0.1 & 2.6 \\ GRB 091003A & 10.0 & 0.9 & 0.0 & 3.0 & -0.1 & 3.0 & 0.0 & 3.3 \\ GRB 090926A & 10.0 & 2.11 & 0.0 & 0.6 & -1.3 & 0.4 & -0.4 & 0.4 \\ GRB 090618 & 10.0 & 0.54 & 0.0 & 0.5 & -0.7 & 0.8 & -0.4 & 0.4 \\ GRB 090328 & 30.0 & 0.74 & 0.0 & 7.3 & -0.6 & 8.0 & -0.2 & 7.4 \\ GRB 081221 & 10.0 & 2.26 & 0.0 & 0.5 & -0.8 & 0.9 & -0.2 & 0.9 \\ GRB 080916C & 10.0 & 4.35 & 0.0 & 1.1 & -1.6 & 1.0 & -0.8 & 1.2 \\ \hline \end{tabular} \end{table} Table 2: Bayes factors and reduced \(\chi^{2}\) for linear and quadratic LIV models along with the SBPL intrinsic emission model considered in Eq. 3, compared to only intrinsic emission model for every GRB considered in this work. We also list the corresponding redshift and lower value for the energy bin (\(E_{0}\)) used to calculate the spectral lag. For linear LIV, we find 3 GRBs with Bayes factor \(>100\) (or \(\ln\;\mbox{BF}>4.6\)), viz. 
GRB 190114C, GRB 130925A, and GRB 131231A. For quadratic LIV, only one of these GRBs (GRB 131231A) has a Bayes factor \(>100\). \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{**GRB**} & \multirow{2}{*}{**E0 (keV)**} & \multirow{2}{*}{**Redshift**} & \multicolumn{2}{c|}{**Null**} & \multicolumn{2}{c|}{**Null + Linear LIV**} & \multicolumn{2}{c|}{**Null + Quadratic LIV**} \\ & & & \(\ln(\)**BF**\()\) & **Reduced \(\chi^{2}\)** & \(\ln(\)**BF**\()\) & **Reduced \(\chi^{2}\)** & \(\ln(\)**BF**\()\) & **Reduced \(\chi^{2}\)** \\ \hline GRB 210619B & 10.0 & 1.94 & 0.0 & 8.6 & 21.2 & 6.3 & 15.3 & 6.3 \\ GRB 210610B & 30.0 & 1.13 & 0.0 & 13.3 & 105.7 & 1.9 & 91.3 & 3.4 \\ GRB 210204A & 10.0 & 0.88 & 0.0 & 12.1 & 31.9 & 8.5 & 30.9 & 8.6 \\ GRB 201216C & 15.0 & 1.10 & 0.0 & 1.3 & -1.1 & 1.2 & -0.5 & 1.2 \\ GRB 200829A & 25.0 & 1.25 & 0.0 & 24.3 & 187.6 & 6.1 & 180.3 & 6.7 \\ GRB 200613A & 30.0 & 1.22 & 0.0 & 0.6 & -0.7 & 0.5 & -0.2 & 0.7 \\ GRB 190114C & 10.0 & 0.42 & 0.0 & 22.1 & 148.1 & 5.0 & 101.1 & 10.6 \\ GRB 180720B & 25.0 & 0.65 & 0.0 & 2.4 & 4.9 & 1.0 & 2.2 & 1.5 \\ GRB 180703A & 20.0 & 0.67 & 0.0 & 12.8 & 0.9 & 12.4 & 7.2 & 12.1 \\ GRB 171010A & 10.0 & 0.33 & 0.0 & 3.0 & 18.6 & 0.6 & 18.3 & 0.9 \\ GRB 160625B & 10.0 & 1.41 & 0.0 & 14.8 & 49.7 & 11.5 & 70.8 & 10.2 \\ GRB 160509A & 10.0 & 1.17 & 0.0 & 1.1 & -0.9 & 1.0 & -0.2 & 1.1 \\ GRB 150821A & 10.0 & 0.76 & 0.0 & 2.1 & -0.5 & 1.7 & 1.4 & 2.5 \\ GRB 150514A & 20.0 & 0.81 & 0.0 & 1.1 & -1.0 & 1.1 & -0.4 & 1.0 \\ GRB 150403A & 35.0 & 2.06 & 0.0 & 0.9 & -1.1 & 0.8 & -0.4 & 1.1 \\ GRB 150314A & 20.0 & 1.76 & 0.0 & 7.1 & 0.2 & 6.8 & 1.0 & 6.7 \\ GRB 141028A & 40.0 & 2.33 & 0.0 & 0.9 & -1.3 & 1.0 & -0.5 & 1.0 \\ GRB 140508A & 10.0 & 1.03 & 0.0 & 0.8 & -1.3 & 0.8 & -0.4 & 0.9 \\ GRB 140206A & 20.0 & 2.73 & 0.0 & 13.0 & 45.5 & 5.1 & 44.0 & 5.6 \\ GRB 131231A & 10.0 & 0.64 & 0.0 & 31.3 & 273.5 & 4.6 & 242.2 & 7.8 \\ GRB 131108A & 20.0 & 2.40 & 0.0 & 10.5 & 6.9 & 9.9 & 12.5 & 8.5 \\ GRB 130925A & 10.0 & 0.35 & 0.0 & 20.2 & 106.3 & 5.1 & 118.2 & 3.9 \\ GRB 130518A & 10.0 & 2.49 & 0.0 & 11.5 & 57.5 & 2.9 & 55.4 & 2.8 \\ GRB 130427A & 10.0 & 0.34 & 0.0 & 6.9 & 41.2 & 3.4 & 28.6 & 4.5 \\ GRB 120119A & 25.0 & 1.73 & 0.0 & 1.6 & 1.1 & 0.8 & 1.1 & 1.0 \\ GRB 100728A & 40.0 & 1.57 & 0.0 & 7.5 & 19.7 & 5.5 & 23.0 & 4.2 \\ GRB 091003A & 10.0 & 0.90 & 0.0 & 4.2 & 1.2 & 3.0 & 1.6 & 3.0 \\ GRB 090926A & 10.0 & 2.11 & 0.0 & 1.2 & 3.5 & 0.3 & 4.8 & 0.2 \\ GRB 090618 & 10.0 & 0.54 & 0.0 & 1.4 & 2.4 & 0.4 & 3.4 & 0.3 \\ GRB 090328 & 30.0 & 0.74 & 0.0 & 7.0 & 2.2 & 6.5 & 4.3 & 6.0 \\ GRB 081221 & 10.0 & 2.26 & 0.0 & 2.1 & -0.8 & 1.7 & 0.4 & 2.0 \\ GRB 080916C & 10.0 & 4.35 & 0.0 & 2.1 & 1.3 & 1.3 & 0.7 & 1.4 \\ \hline \end{tabular} \end{table} Table 3: Bayes factors for linear and quadratic LIV models along with the intrinsic emission model considered in Eq. 4, compared to only intrinsic emission. See Table 2 for an explanation of the various columns. We find 15 and 16 GRBs with Bayes factors \(>100\) for the linear and quadratic LIV models, respectively. ## VI Conclusions In a recent work, L22 carried out a comprehensive study of the spectral lags of 32 long GRBs detected by Fermi-GBM, which had a transition from positive to negative lags. They fit the intrinsic lags using an empirical SBPL (Eq. 3). L22 used this data to constrain LIV and obtained constraints of \(E_{QG}\geq 1.5\times 10^{14}\) GeV and \(E_{QG}\geq 8\times 10^{5}\) GeV for linear and quadratic LIV, respectively. \begin{table} \begin{tabular}{|c|c|} \hline **GRB** & \(\ln\left[\text{BF}_{\text{SBPL}}(\text{Eq.
3})\right]\) - \(\ln\left[\text{BF }(\text{Eq. 4})\right]\) \\ \hline GRB 210619B & 42.9 \\ GRB 210610B & 114.7 \\ GRB 210204A & 42.2 \\ GRB 201216C & -3.1 \\ GRB 200829A & 186.1 \\ GRB 200613A & -2.7 \\ GRB 190114C & 150.5 \\ GRB 180720B & 6.3 \\ GRB 180703A & 30.1 \\ GRB 171010A & 22.0 \\ GRB 160625B & 75.8 \\ GRB 160509A & -1.3 \\ GRB 150821A & 6.5 \\ GRB 150514A & -3.5 \\ GRB 150403A & -3.5 \\ GRB 150314A & 5.4 \\ GRB 141028A & -2.4 \\ GRB 140508A & -3.4 \\ GRB 140206A & 46.2 \\ GRB 131231A & 275.5 \\ GRB 131108A & 36.7 \\ GRB 130925A & 118.5 \\ GRB 130518A & 60.3 \\ GRB 130427A & 74.4 \\ GRB 120119A & 0.5 \\ GRB 100728A & 36.5 \\ GRB 091003A & 8.4 \\ GRB 090926A & 3.8 \\ GRB 090618 & 3.4 \\ GRB 090328 & 4.7 \\ GRB 081221 & 5.1 \\ GRB 080916C & 4.8 \\ \hline \end{tabular} \end{table} Table 4: Bayes factor (in natural log) for the intrinsic model specified by the SBPL model compared to the intrinsic model in Eq. 4. For most GRBs (24), the Bayesian model comparison decisively or strongly favors the SBPL parameterization compared to that in Eq. 4. In this work, we extended the original analysis in L22 by evaluating the significance, based on Bayesian model selection, of the hypothesis that the spectral lags can be adequately modelled by a mixture of intrinsic and LIV-induced lags, compared to only intrinsic emission. For the intrinsic emission, we consider two models. One of them is the SBPL model considered in L22. The second intrinsic model we consider is the simple power law model (cf. Eq. 4) first used in [13], which was used in some of our past works [15; 17; 21]. Our results for the Bayes factor and reduced \(\chi^{2}\) can be found in Table 2 and Table 3 for the SBPL model and Eq. 4, respectively. To evaluate the relative efficacy of the two models of intrinsic emission, we also calculate the Bayes factor for the SBPL model compared to that in Eq. 4. These Bayes factors can be found for all the GRBs in Table 4. Our conclusions are as follows: * We find 3 GRBs (GRB 190114C, GRB 130925A, and GRB 131231A) with Bayes factor \(>100\) (corresponding to decisive evidence) for a model consisting of a superposition of SBPL + linear LIV compared to only the SBPL model. For quadratic LIV, only one of these GRBs, namely GRB 131231A, has a Bayes factor \(>100\). * When we replace the SBPL model with Eq. 4 as the null hypothesis, we find 15 and 16 GRBs with decisive evidence for linear and quadratic LIV, respectively. * When the SBPL model is used as the null hypothesis, the Bayes factor for the quadratic LIV model is mostly smaller than for the linear LIV model, whereas the opposite is true while using Eq. 4 for the null hypothesis. * For most GRBs, Bayesian model comparison decisively favors the SBPL model compared to Eq. 4. This is in accord with the conclusions in L22. * Previous works [14; 38] have also studied and reported decisive evidence for both linear and quadratic LIV for GRB 190114C and GRB 160625B, while using Eq. 4 as the null hypothesis. However, when we use the SBPL as the null hypothesis to model the intrinsic emission, we find that the evidence for LIV for GRB 160625B completely vanishes and the null hypothesis is preferred. For GRB 190114C, we still get decisive evidence for linear LIV. However, for quadratic LIV, the Bayes factor reduces to about 40, corresponding to "very strong" evidence according to the Jeffreys' scale. This underscores the importance of the intrinsic emission model while making any claims for evidence of LIV.
Our analysis codes, along with supplementary plots showing comparisons of the different models on top of the data, have been uploaded to GitHub and can be found at [https://github.com/DarkWake9/Project-QG](https://github.com/DarkWake9/Project-QG) ## Acknowledgements We are grateful to Zik Liu and Binbin Zhang for generously sharing the spectral lag data used in L22 with us. We acknowledge the National Supercomputing Mission (NSM) for providing computing resources of 'PARAM SEVA' at IIT, Hyderabad, which is implemented by C-DAC and supported by the Ministry of Electronics and Information Technology (MeitY) and Department of Science and Technology (DST), Government of India.
2305.06851
Policy Gradient Algorithms Implicitly Optimize by Continuation
Direct policy optimization in reinforcement learning is usually solved with policy-gradient algorithms, which optimize policy parameters via stochastic gradient ascent. This paper provides a new theoretical interpretation and justification of these algorithms. First, we formulate direct policy optimization in the optimization by continuation framework. The latter is a framework for optimizing nonconvex functions where a sequence of surrogate objective functions, called continuations, are locally optimized. Second, we show that optimizing affine Gaussian policies and performing entropy regularization can be interpreted as implicitly optimizing deterministic policies by continuation. Based on these theoretical results, we argue that exploration in policy-gradient algorithms consists in computing a continuation of the return of the policy at hand, and that the variance of policies should be history-dependent functions adapted to avoid local extrema rather than to maximize the return of the policy.
Adrien Bolland, Gilles Louppe, Damien Ernst
2023-05-11T14:50:20Z
http://arxiv.org/abs/2305.06851v3
# Policy Gradient Algorithms Implicitly Optimize by Continuation ###### Abstract Direct policy optimization in reinforcement learning is usually solved with policy-gradient algorithms, which optimize policy parameters via stochastic gradient ascent. This paper provides a new theoretical interpretation and justification of these algorithms. First, we formulate direct policy optimization in the optimization by continuation framework. The latter is a framework for optimizing nonconvex functions where a sequence of surrogate objective functions, called continuations, are locally optimized. Second, we show that optimizing affine Gaussian policies and performing entropy regularization can be interpreted as implicitly optimizing deterministic policies by continuation. Based on these theoretical results, we argue that exploration in policy-gradient algorithms consists in computing a continuation of the return of the policy at hand, and that the variance of policies should be history-dependent functions adapted to avoid local extrema rather than to maximize the return of the policy. Machine Learning, Reinforcement Learning, Policy Gradient Algorithms, Optimization ## 1 Introduction Applications where one has to control an environment are numerous and solving these control problems efficiently is the preoccupation of many researchers and engineers. Reinforcement learning (RL) has emerged as a solution when the environments at hand have complex and stochastic dynamics (Sutton and Barto, 2018). Direct policy optimization and more particularly (on-policy) policy gradients are methods that have been successful in recent years (Duan et al., 2016; Andrychowicz et al., 2020). We distinguish two basic elements that determine the performance of these methods. The first element is the formalization of the optimization problem. It is defined through two main choices: the (functional) parametrization of the policy and the learning objective function, which mostly relies on adding an entropy regularization term to the return. The second element is the choice of the local-search algorithm to solve the optimization problem - we focus on stochastic gradient ascent methods in this study. The policy parameterization is the first formalization choice. In theory, there exists an optimal (parametric) deterministic policy (Sutton and Barto, 2018), which can be optimized by deterministic policy gradient (Silver et al., 2014) with a guarantee of converging towards a stationary solution (Xiong et al., 2022). However, this approach gives poor results in practice as it is subject to convergence towards local optima (Silver et al., 2014). It is therefore usual to optimize stochastic policies, where this problem is mitigated in practice (Duan et al., 2016; Andrychowicz et al., 2020). For discrete state and action spaces, theoretical guarantees of global convergence hold for softmax or direct policy parameterization (Bhandari and Russo, 2019; Zhang et al., 2021; Agarwal et al., 2020). In the general case of continuous spaces, these results no longer hold and only convergence towards stationarity can be ensured under strong hypotheses (Bhatt et al., 2019; Zhang et al., 2020; Bedi et al., 2021). Recently, convergence under milder assumptions was established assuming that the policy follows a heavy-tailed distribution, which guarantees a sufficiently spread distribution of actions (Bedi et al., 2022).
Nevertheless, most of the empirical works have focused on (light-tailed) Gaussian policies (Duan et al., 2016; Andrychowicz et al., 2020) for which convergence is thus not ensured in the general case (Bedi et al., 2022). The importance of a sufficiently spread distribution in policy gradient had already been observed in early works and was loosely interpreted as exploration (Lillicrap et al., 2015; Mnih et al., 2016). This concept, originally introduced in bandit theory and value-based RL, where it consists in selecting a suboptimal action to execute in order to refine a statistical estimate (Simon, 1955; Sutton and Barto, 2018), is to our knowledge not well defined for direct policy optimization. In summary, no consensus has yet been reached on the exact policy parameterization that should be used in practice. The second formalization choice is the learning objective and more particularly the choice of entropy regularization. Typically, a bonus enforcing the uniformity of the action distribution is added to the rewards in the objective function (Williams and Peng, 1991; Haarnoja et al., 2019). Intuitively, it avoids converging too fast towards policies with small spread, which are subject to being locally optimal. More general entropy regularizations were applied for encouraging high-variance policies while keeping the distribution sparse (Nachum et al., 2016) or enforcing the uniformity of the state-visitation distribution in addition to the action distribution (Islam et al., 2019). Again, no consensus has been reached about the best regularization to use in practice. The importance of introducing sufficient stochasticity and regularizing entropy is commonly accepted in the community. Some preliminary research has been conducted to develop a theoretical foundation for this observation. Ahmed et al. (2019) proposed an empirical analysis of the impact of the entropy regularization term. They concluded that adding this term yields a smoothed objective function. A local-search algorithm will therefore be less prone to convergence to local optima. This problem was also studied by Husain et al. (2021). They proved that optimizing a policy by regularizing the entropy is equivalent to performing a robust optimization against changes in the reward function. This result was recently reinterpreted by Brekelmans et al. (2022) who deduced that the optimization is equivalent to a game where one player adapts the policy while an adversary adapts the reward. The research papers that have been reviewed concentrate solely on learning objectives in the context of entropy regularization, leaving unanswered the question of the relationship between a policy's return and the distribution of actions. This question is of paramount importance for understanding how the formalization of the direct policy optimization problem impacts the resulting control strategy. In this work, we propose a new theoretical interpretation of the effects of the action distribution on the objective function. Our analysis is based on the theory of optimization by continuation (Allgower and Georg, 1980), which consists in locally optimizing a sequence of surrogate objective functions. The latter are called continuations and are often constructed by filtering the optimization variables in order to remove local optima. Our main contributions are twofold. First, we define a continuation for the return of policies and formulate direct policy optimization in the optimization by continuation framework.
Second, based on this framework, we study different formulations, i.e., policy parameterization and entropy regularization, of direct policy optimization. Several conclusions are drawn from the analysis. First, we show that the continuation of the return of a deterministic policy is equal to the return of a Gaussian policy. Second, we show that the continuation of the return of a Gaussian policy equals the return of another Gaussian policy with scaled variance. We then derive from the previous results that optimizing Gaussian policies using policy-gradient algorithms and performing regularization can be interpreted as optimizing deterministic policies by continuation. In this regard, exploration, as it is usually understood in policy gradients, consists in computing the continuation of the return of the policy at hand. Finally, we show that for a more general continuation, the continuation of the return of a deterministic policy equals the return of a Gaussian policy where the variance is a function of the observed history of states and actions. These results provide a new interpretation for the variance of a policy: it can be seen as a parameter of the policy-gradient algorithm instead of an element of the policy parameterization. Moreover, to fully exploit the power of continuations, the variance of a policy should be a history-dependent function iteratively adapted to avoid the local extrema of the return. Although there is no theoretical guarantee that optimization by continuation converges towards a global optimum, it has been successfully applied to several machine learning applications (Mobahi et al., 2012; Bengio, 2009; Pathak and Paffenroth, 2019). To our knowledge, it has never yet been applied for direct policy optimization. However, optimizing a distribution over the policy parameters rather than directly optimizing the policy is an RL technique that has been used to perform direct policy optimization (Sehnke et al., 2010; Salimans et al., 2017; Zhang et al., 2020). It is equivalent to optimizing the policy by Gaussian continuation (Mobahi et al., 2012; Hazan et al., 2016, 2019). Here the continuation is the convolution of the return by a Gaussian kernel. Another method, called RL with logistic reward-weighted regression (Wierstra et al., 2008; Peters and Schaal, 2007), consists in optimizing a utility function of the return, which can thus be seen as an optimization by continuation method. The paper is organized as follows. In Section 2, the background of direct policy optimization is recalled. The framework for optimizing policies by continuation is developed in Section 3 and theoretical results relating the return of policies to their continuations are presented in Section 4. In Section 5, these results are used for elaborating on the formulations of direct policy optimization. Finally, the results are summarized and future work is discussed in Section 6. ## 2 Theoretical Background In this section, we recall the RL background and discuss the direct policy optimization problem. ### Markov Decision Processes We study problems in which an agent makes sequential decisions in a stochastic environment in order to maximize an expected sum of rewards (Sutton and Barto, 2018).
The environment is modeled with an infinite-time Markov Decision Process (MDP) composed of a state space \(\mathcal{S}\), an action space \(\mathcal{A}\), an initial state distribution with density \(p_{0}\), a transition distribution (dynamic) with conditional density \(p\), a bounded reward function \(\rho\), and a discount factor \(\gamma\in[0,1[\). When an agent interacts with the MDP \((\mathcal{S},\mathcal{A},p_{0},p,\rho,\gamma)\), first, an initial state \(s_{0}\sim p_{0}(\cdot)\) is sampled, then, the agent provides at each time step \(t\) an action \(a_{t}\in\mathcal{A}\) leading to a new state \(s_{t+1}\sim p(\cdot|s_{t},a_{t})\). A sequence of states and actions \(h_{t}=(s_{0},a_{0},\ldots,s_{t-1},a_{t-1},s_{t})\in H\) is a history and \(H\) is the set of all histories. In addition, at each time step \(t\), a reward \(r_{t}=\rho(s_{t},a_{t})\in\mathbb{R}\) is observed. A (stochastic) history-dependent policy \(\eta\in\mathcal{E}=H\to\mathcal{P}(\mathcal{A})\) is a mapping from the set of histories \(H\) to the set of probability measures on the action space \(\mathcal{P}(\mathcal{A})\), where \(\eta(a|h)\) is the associated conditional probability density of action \(a\) given the history \(h\). A (stochastic) Markov policy \(\pi\in\Pi=\mathcal{S}\to\mathcal{P}(\mathcal{A})\) is a mapping from the state space \(\mathcal{S}\) to the set of probability measures on the action space \(\mathcal{P}(\mathcal{A})\), where \(\pi(a|s)\) is the associated conditional probability density of action \(a\) in state \(s\). Finally, deterministic policies \(\mu\in M=\mathcal{S}\to\mathcal{A}\) are functions mapping an action \(a=\mu(s)\in\mathcal{A}\) to each state \(s\in\mathcal{S}\). We note that for each deterministic policy \(\mu\) there exists an equivalent Markov policy, where the probability measure is a Dirac measure on the action \(a=\mu(s)\) in each state \(s\). In addition, for each Markov policy, there exists an equivalent history-dependent policy only accounting for the last state in the history. We therefore write by abuse of notation that \(M\subsetneq\Pi\subsetneq\mathcal{E}\). The function \(J:\mathcal{E}\to\mathbb{R}\) is defined as the function mapping to any policy \(\eta\) the expected discounted cumulative sum of rewards gathered by an agent interacting in the MDP by sampling actions from the policy \(\eta\). The value \(J(\eta)\) is called the return of the policy \(\eta\) and is computed as follows: \[J(\eta)=\operatorname*{\mathbb{E}}_{\begin{subarray}{c}s_{0}\sim p_{0}(\cdot) \\ a_{t}\sim\eta(\cdot|h_{t})\\ s_{t+1}\sim p(\cdot|s_{t},a_{t})\end{subarray}}\left[\sum_{t=0}^{\infty} \gamma^{t}\rho(s_{t},a_{t})\right]\;. \tag{1}\] An optimal agent follows an optimal policy \(\eta^{*}\) maximizing the expected discounted sum of rewards \(J\). ### Direct Policy Optimization **Problem statement.** Let \((\mathcal{S},\mathcal{A},p_{0},p,\rho,\gamma)\) be an MDP and let \(\eta_{\theta}\in\mathcal{E}\) be a policy parameterized by the real vector \(\theta\in\mathbb{R}^{d_{\Theta}}\). The objective of the optimization problem is to find the optimal parameter \(\theta^{*}\in\mathbb{R}^{d_{\Theta}}\) such that the return of the policy is maximized: \[\theta^{*}=\operatorname*{argmax}_{\theta\in\mathbb{R}^{d_{\Theta}}}J(\eta_{ \theta})\;. \tag{2}\] In this work, we consider on-policy policy-gradient algorithms (Andrychowicz et al., 2020). These algorithms optimize differentiable policies with local-search methods using the derivatives of the policies.
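As a simple illustration of this objective, the following minimal sketch estimates the return of equation (1) by truncated Monte-Carlo rollouts. The Gym-style `env.reset`/`env.step` interface and the `policy` sampler are hypothetical placeholders, and truncating the infinite horizon at \(T\) is justified by \(\gamma<1\); this is a sketch of the quantity being optimized, not of any specific algorithm.

```
import numpy as np

def estimate_return(env, policy, gamma=0.99, n_rollouts=100, T=1000):
    returns = []
    for _ in range(n_rollouts):
        s = env.reset()                 # s_0 ~ p_0(.)
        history, G = [s], 0.0
        for t in range(T):
            a = policy(history)         # a_t ~ eta(. | h_t)
            s, r = env.step(a)          # s_{t+1} ~ p(. | s_t, a_t), r_t = rho(s_t, a_t)
            G += gamma ** t * r
            history += [a, s]
        returns.append(G)
    return np.mean(returns)            # Monte-Carlo estimate of J(eta)
```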
Policy-gradient algorithms iteratively repeat two operations. First, they approximate an ascent direction relying on histories sampled from the policy, with the current parameters, in the MDP. Second, they update these parameters in the ascent direction. **Deterministic Policies.** In an MDP, it is theoretically possible to find an optimal deterministic policy by solving the optimization problem described in equation (2) where the parameterized policy is a (universal) function approximator \(\mu_{\theta}\in M\) (Sutton and Barto, 2018). In practice, optimizing deterministic policies with policy-gradient methods usually results in locally optimal policies (Silver et al., 2014). **Gaussian Policies.** In direct policy optimization, most of the works focus on learning a Gaussian policy \(\pi_{\theta}^{GP}\in\Pi\) (Duan et al., 2016; Andrychowicz et al., 2020), i.e., a policy where the actions follow a Gaussian distribution of mean \(\mu_{\theta}(s)\) and covariance matrix \(\Sigma_{\theta}(s)\) for each state \(s\) and parameter \(\theta\). It thus has the following density: \[\pi_{\theta}^{GP}(a|s)=\mathcal{N}(a|\mu_{\theta}(s),\Sigma_{\theta}(s))\;. \tag{3}\] **Affine Policies.** A parameterized policy (deterministic or stochastic) is said to be affine if the function approximators used to construct the functional form of the policy are affine functions of the parameter \(\theta\). Formally, each function approximator \(f_{\theta}\) of a history-dependent policy has the following form \(\forall h\in H\): \[f_{\theta}(h)=a(h)^{T}\theta+b(h)\;, \tag{4}\] where \(a\) and \(b\) are general functions of the histories. ## 3 Optimizing Policies by Continuation In this section, we introduce optimization by continuation and formulate direct policy optimization in this framework. ### Optimization by Continuation Optimization by continuation (Allgower and Georg, 1980) is a technique used to optimize nonconvex functions with the objective of avoiding local extrema. A sequence of optimization problems is solved iteratively using the optimum of the previous iteration. Each problem consists in optimizing a deformation of the original function and is typically solved by local search. Through the iterations, the function is less and less deformed. Such a procedure is also sometimes referred to as graduated optimization (Blake and Zisserman, 1987) or optimization by homotopy (Watson and Haftka, 1989). Formally, let \(f:\mathcal{X}\to\mathbb{R}\) be the real-valued function to optimize. Let \(g:\mathcal{Y}\to\mathbb{R}\) be another real-valued function used for building the deformation of \(f\). Finally, let the conditional distribution function \(p:\mathcal{X}\to\mathcal{P}(\mathcal{Y})\) be the mapping from an optimization variable \(x\in\mathcal{X}\) to the set of probability measures \(\mathcal{P}(\mathcal{Y})\), such that \(p(y|x)\) is the associated density function for any random event \(y\in\mathcal{Y}\) given \(x\in\mathcal{X}\). The continuation of the function \(f\) under the distribution \(p\) and deformation function \(g\) is defined as the function \(f^{p}:\mathcal{X}\rightarrow\mathbb{R}\) such that \(\forall x\in\mathcal{X}\): \[f^{p}(x)=\operatorname*{\mathbb{E}}_{y\sim p(\cdot|x)}\left[g(y)\right]. \tag{5}\] For the optimization by continuation described hereafter, there must exist a conditional distribution \(p^{*}\) for which \(f^{p}\) equals \(f\) in the limit as \(p\) approaches \(p^{*}\).
A typical example is to choose the function \(g\) equal to \(f\), and to use a Gaussian distribution with a constant diagonal covariance matrix for the distribution \(p\). We then have so-called Gaussian continuations (Mobahi and Fisher, 2015). Finally, optimizing a function \(f\) by continuation involves iteratively locally optimizing its continuation for a sequence of conditional distributions approaching \(p^{*}\) with decreasing spread. Formally, let \(p_{0}\succ p_{1}\succ\dots\succ p_{I-1}\) be a sequence of conditional distributions (monotonically) approaching \(p^{*}\) with strictly decreasing covariance matrices1. Then, optimizing \(f\) by continuation consists in locally optimizing its continuation \(f^{p_{i}}\) with a local-search algorithm initialized at \(x_{i}^{*}\) for each iteration \(i\). This general procedure is summarized in Algorithm 1. Particular instances of this algorithm are described by Hazan et al. (2016) and Shao et al. (2019). Footnote 1: We consider the convergence of the density functions, implying weak convergence of the distributions, and convergence of the continuations towards the function \(f\). The set of covariance matrices is ordered with the Loewner order (Siotani, 1967). In practice, the optimization process can be approximated by performing a limited number of local-search iterations at each step of the optimization by continuation. In the following sections, we consider that each optimization of the continuation \(f^{p_{i}}\) is approximated with a single gradient ascent step and that the continuation distribution sequence \(p_{0}\succ p_{1}\succ\dots\succ p_{I-1}\) is constructed by iteratively reducing the variance of the distribution \(p_{i}\). Note that if this variance reduction is sufficiently slow, and the stepsize is well chosen, a single gradient ascent step makes it possible to accurately approximate \(x_{i}^{*}\). ``` 1: Provide a sequence \(p_{0}\succ p_{1}\succ\dots\succ p_{I-1}\) 2: Provide an initial variable value \(x_{0}^{*}\in\mathcal{X}\) 3: for all \(i=0,1,\dots,I-1\) do 4: \(x_{i+1}^{*}\leftarrow\) Optimize the continuation \(f^{p_{i}}\) by local search initialized at \(x_{i}^{*}\) 5: end for 6: return \(x_{I}^{*}\) ``` **Algorithm 1** Optimization by Continuation ### Continuation of the Return of a Policy The direct policy optimization problem usually consists in maximizing a nonconvex function. Optimization by continuation is thus a good candidate for computing a solution. In this section, we introduce a novel continuation adapted to the return of policies. The return of a policy depends on the probability of a sequence of actions through the product of the density \(\eta_{\theta}(a_{t}|h_{t})\) of each action \(a_{t}\) for a given parameter \(\theta\), see equation (1). We define the continuation of interest as the expectation of the return where each factor in the product of densities depends on a different parameter vector. This expectation is taken according to a distribution that disturbs these parameter vectors at each time step with a variance depending on the history. Formally, using the notations from Section 3.1, we optimize the function \(f\) that for all \(x=\theta\) equals the return, \(f(\theta)=J(\pi_{\theta})\), over the set \(\mathcal{X}=\mathbb{R}^{d_{\Theta}}\). Let the covariance function \(\Lambda:H\rightarrow\mathbb{R}^{d_{\Theta}\times d_{\Theta}}\) be a function mapping a history \(h_{t}\in H\) to a covariance matrix \(\Lambda(h_{t})\).
Let the continuation distribution \(q\) be a distribution such that \(q(\theta_{t}|\theta,\Lambda(h_{t}))\) is the density of \(\theta_{t}\) distributed with mean \(\theta\) and covariance matrix \(\Lambda(h_{t})\). Then, let \(\mathcal{Y}=\left(\mathcal{S}\times\mathcal{A}\times\mathbb{R}^{d_{\Theta}} \right)^{\mathbb{N}}\) be the set of (infinite) sequences of states, actions and parameters and let \(p\) and \(g\), the two functions defining the continuation, be as follows: \[p(y|x) =p_{0}(s_{0})\prod_{t=0}^{\infty}\eta_{\theta_{t}}(a_{t}|h_{t})p_{ \theta}(\theta_{t}|h_{t})p(s_{t+1}|s_{t},a_{t}) \tag{6}\] \[g(y) =\sum_{t=0}^{\infty}\gamma^{t}\rho(s_{t},a_{t})\, \tag{7}\] where \(p_{\theta}(\theta_{t}|h_{t})=q(\theta_{t}|\theta,\Lambda(h_{t}))\) such that the spread of \(p_{\theta}\) depends on the function \(\Lambda\). Taken together, the continuation \(f_{\Lambda}^{q}=f^{p}\) of the return of the policy \(\eta_{\theta}\in\mathcal{E}\) corresponding to the distribution \(q\) and covariance function \(\Lambda\), is defined \(\forall\theta\in\mathbb{R}^{d_{\Theta}}\) as: \[f_{\Lambda}^{q}(\theta)=\operatorname*{\mathbb{E}}_{\begin{subarray}{c}s_{0} \sim p_{0}(\cdot)\\ \theta_{t}\sim p_{\theta}(\cdot|h_{t})\\ a_{t}\sim\eta_{\theta_{t}}(\cdot|h_{t})\\ s_{t+1}\sim p(\cdot|s_{t},a_{t})\end{subarray}}\left[\sum_{t=0}^{\infty}\gamma^ {t}\rho(s_{t},a_{t})\right]. \tag{8}\] Finally, the continuation in equation (8) converges towards the return of \(\eta_{\theta}\) in the limit as the covariance function \(\Lambda\) approaches zero, as required in Section 3.1. This continuation is expected to be well-suited for removing local extrema of the return for three main reasons. First, marginalizing the variables of a function as in our continuation is expected to smooth this function and therefore remove local extrema - the particular case of Gaussian blurring has been widely studied in the literature (Mobahi and Fisher, 2015; Nesterov and Spokoiny, 2017). Second, we underline the interest of considering a continuation in which the disturbance of the policy parameters may vary based on the time step. Indeed, changing the parameter vector of the policy at different time steps (and changing the action distributions) may modify the objective function in significantly different ways. Third, we justify the factorization of the conditional distribution \(p_{\theta}\) in equation (6) by the causal effect of actions in the MDP. As the actions only influence the rewards to come, the past history is expected to provide a sufficient statistic for disturbing the parameters in order to remove local optima. We therefore chose parameter probabilities conditionally independent given the past history. This history-dependency is encoded through the covariance function \(\Lambda\) in equation (8). Maximizing \(f_{\Lambda}^{q}\) to solve the optimization problem from Algorithm 1 is a complicated task. A common local-search algorithm used in machine learning is stochastic gradient ascent (Bottou, 2010). The gradient of \(f_{\Lambda}^{q}\) can be computed by Monte-Carlo sampling applying the reparameterization trick (Goodfellow et al., 2016) for simple continuation distributions or relying on the REINFORCE trick (Williams, 1992) in the more general case. These vanilla gradient estimates have practical limitations: the estimates may have large variance, the infinite horizon shall be truncated, and the direction provided is computed in the Euclidean space of parameters rather than in a space of distributions (Peters and Schaal, 2008).
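For concreteness, the following sketch implements a reparameterized Monte-Carlo estimate of the gradient of equation (8) for a Gaussian continuation distribution, writing \(\theta_{t}=\theta+L(h_{t})\varepsilon_{t}\) with \(\varepsilon_{t}\sim\mathcal{N}(0,I)\) and \(L(h_{t})\) a Cholesky factor of \(\Lambda(h_{t})\). It assumes, purely for illustration, a deterministic original policy and a differentiable simulator; `init_state`, `step`, `reward`, `mu`, and `chol_Lambda` are hypothetical placeholders, and in the non-differentiable case the REINFORCE estimator mentioned above would be used instead.

```
import torch

def continuation_grad(theta, init_state, step, reward, mu, chol_Lambda,
                      gamma=0.99, T=200, n_samples=32):
    # Single Monte-Carlo estimate of the gradient of equation (8) w.r.t. theta,
    # for a deterministic original policy a_t = mu(theta_t, s_t).
    theta = theta.detach().clone().requires_grad_(True)
    total = 0.0
    for _ in range(n_samples):
        s, history, G = init_state(), [], 0.0
        for t in range(T):
            eps = torch.randn_like(theta)
            # Reparameterized draw theta_t ~ q(. | theta, Lambda(h_t)).
            theta_t = theta + chol_Lambda(history) @ eps
            a = mu(theta_t, s)
            G = G + gamma ** t * reward(s, a)
            history.append((s, a))
            s = step(s, a)
        total = total + G
    (total / n_samples).backward()
    return theta.grad
```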
Finally, the evaluation of the continuation and its derivatives requires one to sample parameter vectors, which may be computationally expensive for complex high-dimensional distributions. The study of different continuation distributions and the application of the optimization procedure from Algorithm 1 to practical problems are left for future work. In this work, we rather rely on the continuation to study direct policy optimization algorithms. ## 4 Mirror Policies and Continuations This section is dedicated to the interpretation of the continuation of the return of a policy. We show it equals the return of another policy, called a mirror policy. The existence and closed form of mirror policies are also discussed. ### Optimizing by Continuation with Mirror Policies **Definition 1**.: _Let \((\mathcal{S},\mathcal{A},p_{0},p,\rho,\gamma)\) be an MDP and let \(\eta_{\theta}\in\mathcal{E}\) be a history-dependent policy parameterized with the vector \(\theta\in\mathbb{R}^{d_{\Theta}}\). In addition, let \(f_{\Lambda}^{q}\) be the continuation of the return of the policy \(\eta_{\theta}\) corresponding to a continuation distribution \(q\) and covariance function \(\Lambda\) as defined in equation (8). We call a mirror policy of the original policy \(\eta_{\theta}\), under the continuation distribution \(q\) and covariance function \(\Lambda\), any history-dependent policy \(\eta_{\theta}^{\prime}\in\mathcal{E}\) such that \(\forall\theta\in\mathbb{R}^{d_{\Theta}}\):_ \[f_{\Lambda}^{q}(\theta)=J(\eta_{\theta}^{\prime}). \tag{9}\] Let us assume we are provided with the continuation \(f_{\Lambda}^{q}\) of the return of an original policy \(\eta_{\theta}\) depending on the parameter \(\theta\) that shall be optimized. In addition, let us assume we can compute a mirror policy \(\eta_{\theta}^{\prime}\) for the original policy \(\eta_{\theta}\). By Definition 1, the continuation of the original policy equals the return of the mirror policy for all \(\theta\). In addition, under smoothness assumptions, all their derivatives are equal too. Therefore, maximizing the continuation of an original policy by stochastic gradient ascent can be performed by maximizing the return of its mirror policy by policy gradient. ### Existence and Closed Form of Mirror Policies In this section, we first show that there always exists a mirror policy. In addition, several closed forms are provided depending on the original policy, the continuation distribution, and the covariance function. **Theorem 1**.: _For any original history-dependent policy \(\eta_{\theta}\in\mathcal{E}\) parameterized with the vector \(\theta\in\mathbb{R}^{d_{\Theta}}\) and for any continuation distribution \(q\) and covariance function \(\Lambda\), there exists a mirror history-dependent policy \(\eta_{\theta}^{\prime}\in\mathcal{E}\) of the original policy \(\eta_{\theta}\) that writes as:_ \[\eta_{\theta}^{\prime}(a|h)=\underset{\theta^{\prime}\sim q(\cdot|\theta, \Lambda(h))}{\mathbb{E}}\left[\eta_{\theta^{\prime}}(a|h)\right]. \tag{10}\] Theorem 1 guarantees the existence of mirror policies. Such a mirror policy is a function depending on the same parameters as its original policy but that has a different functional form and may therefore provide actions following a different distribution compared to the original policy. Theorem 1 leads to two important corollary results.
First, as demonstrated in Appendix A, let \(\eta^{\prime\prime}\) be a mirror policy of \(\eta^{\prime}\) and let \(\eta^{\prime}\) be a mirror policy of the original policy \(\eta\) of the form of equation (10). Then, there exists a continuation for which \(\eta^{\prime\prime}\) is a mirror policy of the original policy \(\eta\). It follows that the return of the mirror policy of another mirror policy is itself equal to a continuation of the original policy. Second, Theorem 1 also reveals that for a given original policy and continuation distribution, the variance of the mirror policy is defined through the continuation covariance function \(\Lambda\). Furthermore, we recall that the variance of the continuation is a hyperparameter that shall be selected for each iteration of the optimization by continuation, see Section 3. This choice of hyperparameter is thus reflected as the choice of the variance of a mirror policy. The expert making this choice sees the effect of the disturbed parameters on the environment through the variance of the mirror policy. From a practical perspective, it is probably easier to quantify the effect on the local extrema as a function of the variance of the mirror policy rather than of the variance of the continuation. **Property 4.1**.: _Let the original policy \(\pi_{\theta}\in\Pi\) be a Markov policy and let the covariance function depend solely on the last state in the history. Then, there exists a mirror Markov policy \(\pi^{\prime}_{\theta}\in\Pi\)._ Property 4.1 is an intermediate result providing sufficient assumptions on the continuation for having mirror Markov policies. Note that for this type of continuation, the parameters of the policy are disturbed independently of the history followed by the agent. **Property 4.2**.: _Let the original policy \(\pi^{GP}_{\theta}\in\Pi\) be a Gaussian policy as defined in equation (3) with affine function approximators. Let the covariance function depend solely on the last state in the history and let the distribution \(q\) be a Gaussian distribution. Then, there exists a mirror Markov policy \(\pi^{\prime}_{\theta}\in\Pi\) such that for all states \(s\in\mathcal{S}\), it converges towards a Gaussian policy in the limit as the affine coefficients of the covariance matrix \(\Sigma_{\theta}(s)\) approach zero (\(\|\nabla_{\theta}\Sigma_{\theta}(s)\|\to 0\)):_ \[\pi^{\prime}_{\theta}(a|s)\rightarrow\mathcal{N}(a|\mu_{\theta}(s),\Sigma^{ \prime}_{\theta}(s))\;, \tag{11}\] _where \(\Sigma^{\prime}_{\theta}(s)=C_{\theta}(s)+\Sigma_{\theta}(s)\) and \(C_{\theta}(s)=\nabla_{\theta}\mu_{\theta}(s)^{T}\Lambda(s)\;\nabla_{\theta} \mu_{\theta}(s)\)._ Under the assumptions of Property 4.2, a mirror policy can be approached by a policy that only differs from the original one by having a variance which is increased by the term \(C_{\theta}(s)\) proportional to the variance of the continuation. In particular, when the variance of the original policy \(\pi^{GP}_{\theta}\) is solely dependent on the state, then \(\|\nabla_{\theta}\Sigma_{\theta}(s)\|=0\) and \(\pi^{\prime}_{\theta}(a|s)=\mathcal{N}(a|\mu_{\theta}(s),\Sigma^{\prime}_{ \theta}(s))\). In this case, for any \(\theta\), the covariance matrix of this mirror policy is additionally bounded from below such that \(\Sigma^{\prime}_{\theta}(s)\succeq C_{\theta}(s)\). **Property 4.3**.: _Let the original policy \(\mu_{\theta}\in M\) be an affine deterministic policy.
Let the covariance function depend solely on the last state in the history and let the distribution \(q\) be a Gaussian distribution. Then, the Markov policy \(\pi^{GP^{\prime}}_{\theta}\in\Pi\) is a mirror policy:_ \[\pi^{GP^{\prime}}_{\theta}(a|s)=\mathcal{N}(a|\mu_{\theta}(s),\Sigma^{\prime} _{\theta}(s))\;, \tag{12}\] _where \(\Sigma^{\prime}_{\theta}(s)=\nabla_{\theta}\mu_{\theta}(s)^{T}\;\Lambda(s)\; \nabla_{\theta}\mu_{\theta}(s)\)._ Therefore, under some assumptions, disturbing a deterministic policy and optimizing it afterwards can be interpreted as optimizing the continuation of the return of this policy. **Property 4.4**.: _Let the original policy \(\mu_{\theta}\in M\) be an affine deterministic policy. Let the distribution \(q\) be a Gaussian distribution. Then, the policy \(\eta^{\prime}_{\theta}\in\mathcal{E}\) is a mirror policy:_ \[\eta^{\prime}_{\theta}(a|h)=\mathcal{N}(a|\mu_{\theta}(s),\Sigma^{\prime}_{ \theta}(h))\;, \tag{13}\] _where \(\Sigma^{\prime}_{\theta}(h)=\nabla_{\theta}\mu_{\theta}(s)^{T}\;\Lambda(h)\; \nabla_{\theta}\mu_{\theta}(s)\)._ Property 4.4 extends Property 4.3 to more general continuation distributions. This extension is used later to justify the interest of optimizing history-dependent policies in order to optimize an underlying deterministic policy by continuation. The theorem and properties are shown in Appendix B. ## 5 Implicit Optimization by Continuation In this section, two formulations - each consisting of a policy parameterization and a learning objective - used by several policy-gradient algorithms are analyzed relying on original and mirror policies. In Section 5.1, we show that optimizing each formulation by local search corresponds to optimizing a continuation. The optimized policy is thus the mirror policy of an unknown original policy. We show the existence of the corresponding continuation and original policy and discuss their closed form. This analysis provides a novel interpretation of the state-of-the-art algorithms for direct policy optimization. We discuss the role of stochastic policies in light of this interpretation in Section 5.2. ### Gaussian Policies and Regularization The policy-gradient literature has mainly focused on optimizing two problem formulations by local search - typically with stochastic gradient ascent and (approximate) trust-region methods. First, the vast majority of works focuses on optimizing the return of Gaussian policies (Duan et al., 2016; Andrychowicz et al., 2020). Second, in many formulations this objective function is extended by adding the entropy of the optimized policy as a bonus (Williams and Peng, 1991; Haarnoja et al., 2019). We show that when optimizing a policy according to these formulations, there exists an (unknown) deterministic original policy and a continuation under which the optimized policy is a mirror policy. Provided with the local-search algorithm from the policy-gradient method, we conclude that optimizing both formulations is equivalent to implicitly optimizing a deterministic policy by continuation. First, we recall that under Property 4.3, for any affine deterministic policy \(\mu_{\theta}\), there exists an affine Gaussian mirror policy \(\pi^{GP^{\prime}}_{\theta}\) as defined by equation (12).
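This identity is easy to check numerically. The sketch below uses a toy affine policy \(\mu_{\theta}(s)=A\theta+b\) at a fixed state, for which \(\nabla_{\theta}\mu_{\theta}(s)^{T}=A\), and compares the empirical covariance of the actions \(\mu_{\theta^{\prime}}(s)\), \(\theta^{\prime}\sim\mathcal{N}(\theta,\Lambda)\), with the covariance predicted by equation (12); all numerical values are arbitrary illustrative choices.

```
import numpy as np

rng = np.random.default_rng(0)
d_theta, d_a = 4, 2
A = rng.normal(size=(d_a, d_theta))   # A = grad_theta mu^T at the fixed state (full rank)
b = rng.normal(size=d_a)
theta = rng.normal(size=d_theta)
Lam = np.diag([0.3, 0.1, 0.2, 0.4])   # continuation covariance Lambda(s)

# Mirror-policy actions: a = mu_{theta'}(s) = A theta' + b with theta' ~ N(theta, Lambda)
thetas = rng.multivariate_normal(theta, Lam, size=200_000)
actions = thetas @ A.T + b

print(np.cov(actions, rowvar=False))  # empirical covariance of the actions
print(A @ Lam @ A.T)                  # Sigma'(s) from equation (12)
# The two matrices agree up to Monte-Carlo error, and the action mean is mu_theta(s).
```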
In Property 5.1, the converse of Property 4.3 is stated, which answers the question: _under which conditions a Gaussian policy is the mirror policy of an (unknown) deterministic policy._ For this converse statement to be true, the transformation between covariance functions in Property 4.3 must be surjective, which is guaranteed if \(d_{\mathcal{A}}\leq d_{\Theta}\) and \(\nabla_{\theta}\mu_{\theta}(s)\) is full rank. The first assumption is always met in practice and the second is met when no action is a deterministic function of the others. **Property 5.1**.: _Let \(\pi_{\theta}^{GP^{\prime}}\) be an affine Gaussian policy with mean function \(\mu_{\theta}\), and with covariance function \(\Sigma_{\theta}^{\prime}=\Sigma^{\prime}\) constant with respect to the parameters of the policy (i.e., a function depending solely on the state). If \(d_{\mathcal{A}}\leq d_{\Theta}\) and if \(\nabla_{\theta}\mu_{\theta}(s)\) is full rank, then, there exists a continuation, with covariance \(\Lambda\) proportional to \(\Sigma^{\prime}\), for which \(\pi_{\theta}^{GP^{\prime}}\) is a mirror policy of the original policy \(\mu_{\theta}\)._ Entropy regularization ensures that the variance of the policy remains sufficiently large during the optimization process.2 Similar objectives are pursued with maximum entropy reinforcement learning (Haarnoja et al., 2019) or with (approximate) trust-region methods where the trust-region constraint is dualized (Schulman et al., 2015, 2017). Let us consider an affine Gaussian original policy \(\pi_{\theta}^{GP}\) with constant covariance \(\Sigma_{\theta}=\Sigma\). Under Property 4.2, there exists another affine Gaussian policy \(\pi_{\theta}^{GP^{\prime}}\) that is a mirror policy of \(\pi_{\theta}^{GP}\). This mirror policy has the same mean function and a covariance function bounded from below by \(C_{\theta}=C\). Property 5.2 provides the converse and answers the question: _under which conditions a Gaussian policy with sufficiently large covariance is the mirror policy of an (unknown and Gaussian) policy._ Similar to the previous property, this is guaranteed when \(d_{\mathcal{A}}\leq d_{\Theta}\) and \(\nabla_{\theta}\mu_{\theta}(s)\) is full rank. Footnote 2: Formally, for two matrices \(A\) and \(B\), we have that \(A\succeq B\Rightarrow|A|\geq|B|\) (Siotani, 1967). As the entropy of a Gaussian policy is a concave function of the determinant of the covariance matrix, a bounded covariance matrix implies a bounded entropy. The entropy-regularization learning objective can therefore be interpreted as the Lagrangian relaxation of the latter entropy-bounded optimization problem. **Property 5.2**.: _Let \(\pi_{\theta}^{GP^{\prime}}\) be an affine Gaussian policy with mean function \(\mu_{\theta}\), and with covariance function \(\Sigma_{\theta}^{\prime}=\Sigma^{\prime}\succeq C\) constant with respect to the parameters of the policy (i.e., a function depending solely on the state) and bounded from below by \(C\).
If \(d_{\mathcal{A}}\leq d_{\Theta}\) and if \(\nabla_{\theta}\mu_{\theta}(s)\) is full rank, then, there exists a continuation, with covariance \(\Lambda\) proportional to \(C\), for which \(\pi_{\theta}^{GP^{\prime}}\) is a mirror policy of an original Gaussian policy \(\pi_{\theta}^{GP}\) with the same mean function \(\mu_{\theta}\) and with constant covariance function \(\Sigma\preceq\Sigma^{\prime}\)._ The two previous properties indicate that a Gaussian policy is guaranteed to be a mirror policy of another policy, Gaussian or deterministic, under some assumptions. If we furthermore guarantee that the continuation covariance decreases during the optimization, policy-gradient algorithms optimizing affine Gaussian policies can be interpreted as algorithms optimizing an original policy by continuation. Let us consider two cases, each corresponding to a problem formulation, where we optimize by policy gradient an affine Gaussian policy \(\pi_{\theta}^{GP^{\prime}}\) with covariance function constant with respect to the parameters of the policy. First, we consider the case where its covariance matrix decreases during the optimization through a manual scheduling. In this context, under Property 5.1, there exists an original deterministic policy and the covariance of the continuation decreases through the optimization, such that the policy-gradient algorithm optimizes this policy by continuation. Second, we consider the case where the entropy is regularized with a decreasing regularization term (e.g., by scheduling the Lagrange multiplier). Then, as entropy regularization can be seen as a constraint on the covariance of the policy, under Property 5.2, there exists an original Gaussian policy and the covariance of the continuation decreases through the optimization, such that the policy-gradient algorithm optimizes this stochastic policy by continuation. Finally, as stated previously and shown in Theorem 2 in Appendix B, optimizing the return of the mirror policy of another mirror policy is equivalent to optimizing a continuation of the original policy. Therefore, policy-gradient algorithms that optimize affine Gaussian policies with both discounted covariance and decreasing regularization by local search can also be interpreted as algorithms optimizing the mean function (i.e., a deterministic policy) of this policy by continuation. We now illustrate how policy-gradient algorithms implicitly optimize by continuation. We take as example an environment in which a car moves in a valley and must reach its lowest point (positioned at \(x_{target}\)) to maximize the expected sum of rewards gathered by the agent, see Appendix C. We assume we want to find the best K-controller, i.e., a deterministic policy \(\mu_{\theta}(x)=\theta\times(x-x_{target})\), where \(x\) is the position of the car. Directly optimizing such a policy is in practice subject to converging to a local extremum, as explained hereafter. We thus consider the Gaussian policy \(\pi_{\theta}^{GP}(a|x)=\mathcal{N}(a|\mu_{\theta}(x),\sigma^{\prime})\), where \(\mu_{\theta}(x)\) and \(\sigma^{\prime}\) are the mean and variance of the policy, respectively. This policy is a mirror policy of the deterministic policy \(\mu_{\theta}\) under a continuation of variance \(\lambda=\sigma^{\prime}/(x-x_{target})^{2}\), see Property 4.3. As can be seen in Figure 1, for each value of \(\sigma^{\prime}\), the return of the mirror policy equals the smoothed return of the original deterministic policy \(\mu_{\theta}\).
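The computation behind Figure 1 can be sketched as follows. Since the environment of Appendix C is not reproduced here, the valley profile, dynamics, and reward below are illustrative stand-ins; only the structure of the experiment matters, namely estimating the return on a grid of \(\theta\) for several values of the variance \(\sigma^{\prime}\).

```
import numpy as np

def rollout(theta, sigma, x_target=1.0, gamma=0.95, T=200, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    x, v, G = -1.0, 0.0, 0.0               # assumed initial position and velocity
    for t in range(T):
        a = theta * (x - x_target) + np.sqrt(sigma) * rng.normal()
        v += 0.05 * (a - np.sin(3.0 * x))  # toy valley force term
        x += 0.05 * v
        G += gamma ** t * (-(x - x_target) ** 2)  # reward: stay near the target
    return G

thetas = np.linspace(-5.0, 5.0, 41)
for sigma in [0.0, 0.1, 0.5]:              # sigma' = 0 recovers the deterministic policy
    J = [np.mean([rollout(th, sigma) for _ in range(50)]) for th in thetas]
    # Plotting J against thetas gives one curve of Figure 1 per value of sigma'.
```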
Consequently, optimizing the Gaussian policy by policy gradient is equivalent to optimizing the deterministic policy by continuation. For a well-chosen sequence of \(\sigma^{\prime}\), with a fixed scheduling or with adequate entropy regularization, the successive solutions found by local search will escape the basin of attraction of the suboptimal parameter for any initial parameter of the local search - whereas optimizing the deterministic policy directly would provide suboptimal solutions. In this section, we have established an equivalence between the optimization of some policies by policy gradient and the optimization of an underlying policy by continuation. It opens up new questions about the hypothesis space of the (mirror) policy to consider in practice in order to best exploit the properties of continuations. These considerations are made in the next section. We finally recall that a central assumption in the previous results is the affinity of policies. Such policies are often considered in theoretical studies (Busoniu et al., 2017) and perform well on complex tasks in practice (Rajeswaran et al., 2017). ### Continuations for Interpreting Stochastic Policies In practice, we know that optimizing stochastic policies tends to converge to a final policy with low variance and better performance than if we had directly optimized a deterministic policy. Practitioners often justify this observation by the need to explore through a stochastic policy. Nevertheless, to our knowledge, this concept inherited from bandit theory is not well defined for direct policy optimization. The previous analysis establishes an equivalence between optimizing stochastic policies with policy-gradient algorithms and optimizing deterministic policies by continuation. Furthermore, as explained in Section 3.2, the continuation in equation (8) consists in smoothing the return of this deterministic policy through the continuation distribution. Local optima tend to be removed when the variance of the continuation is sufficiently large. Optimizing stochastic policies and regularizing the entropy, as in most state-of-the-art policy-gradient algorithms, is therefore expected to avoid local extrema before converging towards policies with small variance. We thus provide a theoretical motivation for the performance reached by algorithms applying exploration as understood in direct policy optimization. The relationships between optimization by continuation and policy gradient in Section 5.1 have been established relying on Property 4.2 and Property 4.3. They assume continuations where the covariance matrix depends only on the current state and not on the whole observed history. In the general case, Property 4.4 allows one to extend these results by performing an analysis similar to Section 5.1. To be more specific, let us assume an affine Gaussian policy \(\pi_{\theta}^{GP^{\prime}}\), where the mean \(\mu_{\theta}\) is a function of the state and where the covariance \(\Sigma_{\theta}=\Sigma\) is a function of the history and is constant with respect to \(\theta\). Under this assumption, if \(d_{\mathcal{A}}\leq d_{\Theta}\) and \(\nabla_{\theta}\mu_{\theta}(s)\) is full rank, the return of the policy \(\pi_{\theta}^{GP^{\prime}}\) is equal to an (unknown) continuation of the mean function \(\mu_{\theta}\) (i.e., a deterministic policy).
Furthermore, optimizing the Gaussian policy by policy gradient while discounting the covariance can be interpreted as optimizing the deterministic policy \(\mu_{\theta}\) by continuation. In practice, this result suggests optimizing history-dependent policies by policy gradient to take advantage of the most general regularization of the objective function through implicit continuation. A similar observation was recently made by Mutti et al. (2022) who argued that history-dependent policies are required when more complex regularizations are involved. Finally, a last point has been left open in the previous discussions, namely the update of the covariance matrix of the mirror policies. The latter is defined through the covariance of the continuation. Therefore, the covariance must decrease through the optimization and must be chosen to avoid local optima. One direction to investigate in order to select a variance that removes local extrema is to update the parameters of the policy by following a combination of two directions: the functional gradient of the optimized policy's return with respect to the policy mean and the functional gradient of another measure (to be defined) with respect to the policy variance. An example of a heuristic measure for smoothness might be the entropy of the actions and/or states encountered in histories. This strategy obviously does not follow the classical approach when optimizing stochastic policies, where the covariance is adapted by the policy-gradient algorithm to locally maximize the return, and the exact procedure for updating the variance will require future studies. The empirical inefficiency of this classical approach was highlighted in previous works that improved the performance of policy-gradient algorithms by exploring alternative learning objective functions (Houthooft et al., 2018; Papini et al., 2020). Figure 1: Illustration of the return of the policies \(\mathcal{N}(a|\mu_{\theta}(x),\sigma^{\prime})\), where \(\mu_{\theta}(x)=\theta\times(x-x_{target})\), for different \(\sigma^{\prime}\) values. The darker the curve, the smaller \(\sigma^{\prime}\), and the darkest one is the return of the deterministic policy \(\mu_{\theta}\). The green dots represent the global maxima and the red dots the local maxima. For some sufficiently large value of \(\sigma^{\prime}\), the return of the policy has a single extremum. ## 6 Conclusion In this work, we have studied the problem formulation, i.e., policy parameterization and reward-shaping strategy, when solving direct policy optimization problems. More particularly, we established connections between formulations of state-of-the-art policy-gradient algorithms and the optimization by continuation framework (Allgower and Georg, 1980). We have shown that algorithms optimizing stochastic policies and regularizing the entropy inherit the properties of optimization by continuation and are thus less subject to converging towards local optima. In addition, the role of the variance of the policies is reinterpreted in this framework: it is a parameter of the optimization procedure to adapt in order to avoid local extrema. Additionally, to inherit the properties of generic continuations, it may be beneficial to consider variances that are functions of the history of states and actions observed at each time step. Our study leaves several questions open. Firstly, our results rely on several assumptions that may not hold in practice.
Specifically, it is unclear how our findings can be generalized to non-affine policies and to alternatives to Gaussian policies. Nonetheless, our results can be extended in cases where we can obtain an analytic expression for the mirror policy outlined in Theorem 1. While finding such an expression may be challenging in general, we can easily extend our conclusions to non-affine policies by considering the first-order approximation. Additionally, our study is focused on Gaussian policies, which are commonly used in continuous state-action spaces. However, for discrete action spaces, a natural choice of policy is a Bernoulli distribution over the actions (or a categorical distribution for more than one action). If the state space is also discrete, this distribution may be parameterized by a table providing the success probability of the Bernoulli distribution for each state. In the case of a Beta continuation distribution, a mirror policy can be derived where actions follow a Beta-binomial distribution in each state, a result known in Bayesian inference, as the Beta distribution is the conjugate prior of the binomial distribution (Bishop and Nasrabadi, 2006). An analysis of this mirror policy would allow us to draw conclusions equivalent to those of the continuous case studied in this paper. Secondly, the study focused on entropy regularization of the policy only. Recent works have underlined the benefits of other regularization strategies that enforce the spread of other distributions, such as the state-visitation frequency or the marginal state probability (Hazan et al., 2019; Guo et al., 2021; Mutti et al., 2022). Future research is also needed to better understand the effect of these regularizations on the optimization procedure. Finally, we give a new interpretation for the variance of policies that suggests it shall be updated to avoid local extrema rather than to maximize the return locally. A first strategy for updating the variance is proposed in Section 5.2, which opens the door to further research and new algorithm development. ## 7 Acknowledgments The authors would like to thank Csaba Szepesvari for the discussion on some mathematical aspects that allowed us to increase the quality of this study. We also thank our colleagues Gaspard Lambrechts, Arnaud Delaunoy, Pascal Leroy, and Bardhyl Miftari for valuable comments on this manuscript. Adrien Bolland gratefully acknowledges the financial support of a research fellowship of the F.R.S.-FNRS.
2308.11309
TrajPy: empowering feature engineering for trajectory analysis across domains
Trajectories, sequentially measured quantities that form a path, are an important presence in many different fields, from hadronic beams in physics to electrocardiograms in medicine. Trajectory analysis requires the quantification and classification of curves either using statistical descriptors or physics-based features. To date, there is no extensive and user-friendly package for trajectory analysis available, despite its importance and potential application across domains. We developed a free open-source python package named TrajPy as a complementary tool to empower trajectory analysis. The package showcases a friendly graphic user interface and provides a set of physical descriptors that help characterizing these intricate structures. In combination with image analysis, it was already successfully applied to the study of mitochondrial motility in neuroblastoma cell lines and to the analysis of in silico models for cell migration. The TrajPy package was developed in Python 3 and released under the GNU GPL-3 license. Easy installation is available through PyPI and the development source code can be found in the repository https://github.com/ocbe-uio/TrajPy/. The package release is automatically archived under the DOI 10.5281/zenodo.3656044.
Maurício Moreira-Soares, Eduardo Mossmann, Rui D. M. Travasso, José Rafael Bordin
2023-08-22T09:37:48Z
http://arxiv.org/abs/2308.11309v1
###### Abstract **Motivation:** Trajectories, sequentially measured quantities that form a path, are an important presence in many different fields, from hadronic beams in physics to electrocardiograms in medicine. Trajectory analysis requires the quantification and classification of curves either using statistical descriptors or physics-based features. To date, there is no extensive and user-friendly package for trajectory analysis available, despite its importance and potential application across domains. **Results:** We developed a free open-source python package named TrajPy as a complementary tool to empower trajectory analysis. The package showcases a friendly graphic user interface and provides a set of physical descriptors that help characterizing these intricate structures. In combination with image analysis, it was already successfully applied to the study of mitochondrial motility in neuroblastoma cell lines and to the analysis of _in silico_ models for cell migration. **Availability:** The TrajPy package was developed in Python 3 and released under the GNU GPL-3 license. Easy installation is available through PyPI and the development source code can be found in the repository [https://github.com/ocbe-uio/TrajPy/](https://github.com/ocbe-uio/TrajPy/). The package release is automatically archived under the DOI 10.5281/zenodo.3656044. **Contact:** [email protected] ## 1 Introduction Trajectories are present in several fields of science with varying definitions but are intuitively understood as a set of points sequentially ordered and interconnected forming a path. More rigorously, a trajectory is defined by a sequence \(\left(x_{n}(t_{n})\right)_{n=0}^{\infty}\) of values \(x\), measured at time \(t_{n}\) with ordering index \(n\) \[\left(x_{n}(t_{n})\right)_{n=0}^{\infty}=x_{0}(t_{0}),x_{1}(t_{1}),x_{2}(t_{2}),...\] This sequence may be obtained experimentally or be the result of a numerical calculation, may follow a closed mathematical form, a recursive definition, or obey a physical law or a biological mechanism (Levin, 2021). The values of \(x\) are often spatial coordinates in the Euclidean space, but any quantity measured repeatedly can delineate a trajectory in an abstract space, such as blood pressure (Ji _et al._, 2020), sunburns (Lergenmuller _et al._, 2022) or physical activity recorded over time (Perrier _et al._, 2022). Considerable effort has been employed to characterize complex trajectories in biology at different scales, from the diffusion of proteins and nanoparticles at the subcellular level (Huet _et al._, 2006; Arcizet _et al._, 2008), through wandering ants and migratory birds (Wolf and Wehner, 2000; Croxall _et al._, 2005) to the study of hand tremor trajectories in Parkinson's Disease (San-Segundo _et al._, 2020). Due to recent advances in microscopy, we can visualize the inner life of the cell, including the dynamics of mRNA, mitochondria, microtubules, actin filaments, etc. These rich trajectory data are challenging to summarise and to model in their raw format, demanding feature extraction and quantification. In biostatistics, trajectories arise naturally in clinical trials and observational longitudinal studies, when more than two measurements are recorded for the same patient at different timepoints, such as in the studies on sunburns and physical activity previously mentioned. This type of data requires methods developed for the analysis of repeated measurements and can also benefit from feature engineering.
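To make this concrete, the following generic sketch computes two of the most common trajectory features discussed in this paper: the time-averaged mean squared displacement (MSD) and the anomalous-diffusion exponent \(\alpha\) obtained from a log-log fit of \(\text{MSD}(\tau)\sim\tau^{\alpha}\). It is a plain numpy illustration, not TrajPy's own implementation.

```
import numpy as np

def msd(traj):
    """Time-averaged MSD of a trajectory with shape (N, dim)."""
    N = len(traj)
    lags = np.arange(1, N // 4)   # short lags are better sampled
    return lags, np.array([np.mean(np.sum((traj[lag:] - traj[:-lag])**2, axis=1))
                           for lag in lags])

def diffusion_exponent(traj, dt=1.0):
    lags, m = msd(traj)
    alpha, _ = np.polyfit(np.log(lags * dt), np.log(m), 1)
    return alpha                  # ~1 normal, <1 sub-, >1 super-diffusion

# Example: a 2-D Brownian trajectory should give alpha close to 1.
steps = np.random.default_rng(1).normal(size=(10_000, 2))
print(diffusion_exponent(np.cumsum(steps, axis=0)))
```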
From the perspective of trajectory analysis in molecular dynamics (MD) simulations, among the many available software packages we highlight three: MDAnalysis (Michaud-Agrawal _et al._, 2011), PTraj/CPPTraj (Roe and Cheatham, 2013) and freud (Ramasubramani _et al._, 2020). These packages are the state-of-the-art for MD analysis, but they require mastering either low-level programming or command line interfaces (CLI), which may pose a challenge for the end-user. Moreover, they are not suitable for general-purpose applications. The main focus of well-established methods such as trackpy ([http://soft-matter.github.io/trackpy/](http://soft-matter.github.io/trackpy/)) is on image processing, and these codes are heavily oriented towards specific, field-dependent needs. They provide a limited number of quantitative descriptors, often focusing only on the net displacement or the mean squared displacement (MSD) for estimating the diffusion exponent (Allan _et al._, 2023). However, these measures lack sensitivity for the characterization of different kinds of trajectories in biophysics (Burnecki _et al._, 2015). Therefore, there is a demand for specialized packages that aim to improve and democratize feature engineering for general trajectory analysis. We propose TrajPy as a framework for tackling these challenges, aiming at broad applications across domains. The package can be integrated at the tip of image analysis pipelines, used for postprocessing of _in silico_ simulations or for longitudinal clinical data. Successful modern methods perform the computation of physical properties related to the kinematics and/or morphology of the curves to build a multidimensional space of attributes. This step makes it possible to unveil hidden information about the trajectories by applying multivariate machine learning (ML) methods. For instance, in (Wagner _et al._, 2017) the authors propose a set of features to quantify single-cell dynamics and draw conclusions regarding the classification of cell movement. They provide a random forest classifier, TraJClassifier, as a plugin for the image analysis software Fiji (Schindelin _et al._, 2012). The attributes selected in this work are known in physics as good predictors for classifying movements into different types (sub-diffusion, normal diffusion, super-diffusion and anomalous diffusion), but beyond these classifications they are also helpful for identifying key differences between trajectories, even under the same diffusion regime. We expanded the set of features proposed by Wagner et al. with Fourier analysis and improved the estimation of the diffusion coefficient by implementing the Green-Kubo method. TrajPy currently offers 17 features that can be computed for any generic trajectory. It is important to note that TrajPy is not intended to replace specialized software; it was developed as a building block that can work in synergy with other field-specific methods. In addition, it comes with a user-friendly graphical user interface (GUI) that requires no programming skills, making it accessible for experts in different fields to empower data analyses. ## 2 Methods Figure 1: Applications and functionalities. In A) we present TrajPy's graphical user interface (GUI). In B) we show TrajPy's capabilities for trajectory classification using principal component analysis.
In C) a neuroblastoma-derived cell line used for the study of mitochondrial motility, and in D) a system of bead-spring polymeric entities as an _in silico_ model for biological cells; both dynamics were quantified with TrajPy's features. In E) the change in a set of four features from TrajPy between two distinguished trajectories is depicted. B and E, C, and D were adapted from (Simões et al., 2021), (Soares, 2020), and (Mossmann, 2022), respectively. TrajPy is an open-source Python package in continuous development on GitHub which welcomes external contributions. We employ continuous integration/continuous deployment (CI/CD) with automated unit tests to ensure code quality and reliability. All releases are published automatically to the PyPI repository, offering a simple method for installation and dependency management. The package development is driven by the aim of long-term maintainability and, as such, the number of external dependencies is kept at a bare minimum. The core engine of the package requires only the standard packages for scientific computing, _scipy_ and _numpy_. In addition, to run the graphical user interface (GUI) the packages _ttkthemes_ and _Pillow_ are needed. Furthermore, _PyYAML_ provides support for parsing molecular dynamics simulation data from LAMMPS (Plimpton _et al._, 2021). We provide online documentation with _readthedocs_. TrajPy consists of 3 main units of code, as described below. The heart of the package lies in _trajpy.py_ and contains the class _Trajectory_, which can be initialized either as a dummy object for calling its functions, or by loading a trajectory array or a CSV trajectory file. This primary code allows the user to compute the various physical and statistical attributes such as the mean squared displacement, diffusion coefficient and velocity of any given trajectory (see Supplementary Table 1 for the extensive list of features). The second unit is _traj_generator.py_, which consists of a collection of methods implemented to simulate different diffusion modes: confined, normal, anomalous, and direct motion. Lastly, but not least important, _gui.py_ contains the code for running the GUI, which provides a friendly interface that requires no knowledge of programming from the user (see **Figure 1**A). We propose two independent and complementary workflows for data analysis with TrajPy. The first approach encompasses the development of a classification model for the diffusion modes aforementioned (see the code sketch below). We generate synthetic trajectories by employing 4 independent simulation engines, one for each of the 4 labels (sub-diffusion, normal diffusion, super-diffusion and anomalous diffusion). The space of parameters for these simulations can be explored to obtain different trajectories that obey the same diffusion regime. Then we apply feature engineering to quantify these trajectories with the proposed features in TrajPy. The data generated with the features and the labels are used to train a classifier that can be used later for classifying unseen data generated from simulations or experiments. We provide a dataset of synthetic data that can be used to train new models (Moreira-Soares, 2020). **Figure 1**B depicts the principal components for the synthetic data and the diffusion-mode clusters (Soares, 2020). The second workflow regards the statistical analysis of experimental (unlabeled) raw trajectory data.
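To make the first workflow concrete, here is a minimal, self-contained sketch of the pipeline. Note that it does not use TrajPy's actual API: the generator and the two features below are simplified stand-ins for _traj_generator.py_ and for the descriptors computed by the _Trajectory_ class, and only three of the four diffusion labels are simulated.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def toy_walk(alpha, n=256):
    """Crude stand-in for traj_generator.py: a 2D walk whose MSD grows
    roughly as t**alpha (alpha < 1 sub-, = 1 normal, > 1 super-diffusive)."""
    steps = rng.normal(size=(n, 2))
    t = np.arange(1, n + 1)[:, None]
    return np.cumsum(steps, axis=0) * t ** ((alpha - 1.0) / 2.0)

def features(r):
    """Two simple descriptors: the fitted MSD exponent and the
    straightness (net displacement over total path length)."""
    lags = np.arange(1, len(r) // 4)
    msd = [np.mean(np.sum((r[lag:] - r[:-lag]) ** 2, axis=1)) for lag in lags]
    exponent = np.polyfit(np.log(lags), np.log(msd), 1)[0]
    path_length = np.sum(np.linalg.norm(np.diff(r, axis=0), axis=1))
    straightness = np.linalg.norm(r[-1] - r[0]) / path_length
    return [exponent, straightness]

# Label 0: sub-diffusion, 1: normal diffusion, 2: super-diffusion
X, y = [], []
for label, alpha in enumerate([0.5, 1.0, 1.6]):
    for _ in range(200):
        X.append(features(toy_walk(alpha)))
        y.append(label)

X_tr, X_te, y_tr, y_te = train_test_split(np.array(X), np.array(y), random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```

The trained classifier can then be applied to features extracted from unlabeled experimental trajectories, which is the bridge between the two workflows.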
We perform the same feature engineering process on the experimental data, obtaining the same attributes used to train our classifier. Therefore, if deemed relevant, the analyst can apply the classifier to the experimental data and obtain the diffusion modes. The features can be useful for quantifying different systems of interest across many areas, and statistical inference can be performed to draw novel insights about the systems' nature. In addition, new classifiers can be trained based on other labels that may be interesting in other fields, based on domain knowledge. For example, water quality affects fish trajectories, so classifying these trajectories between "normal water quality" and "polluted/abnormal water quality" is more relevant than diffusion-mode classification in this context (Cheng _et al._, 2019). ## 3 Validation and Results The package was applied to study neuronal mitochondrial trafficking in neuroblastoma cell lines (Simões _et al._, 2021). In this study, the researchers exposed the cells to mitochondrial toxins and recorded the mitochondrial trajectories using TIRF microscopy (see **Figure 1**C). By characterizing the dynamic trajectories, they analyzed how mitochondrial motility was affected. The application of TrajPy's feature engineering facilitated a deeper understanding of the underlying biological process. The findings revealed a novel quantitative approach describing how mitochondria behave in both healthy and diseased neuronal cells, demonstrating the valuable potential of TrajPy for applications in the study of subcellular dynamics. In another study, TrajPy was employed to quantify and analyze the migration behavior of self-propelled droplets in dense fibrous media modelled _in silico_ (Moreira-Soares _et al._, 2020). By using TrajPy's feature engineering capabilities, the velocity of the cells and the morphology of the cell trajectories were measured as a function of fiber density and adhesiveness between the cell and the matrix fibers. TrajPy enabled the comparison of simulation results with _in vitro_ migration assay data of fibrosarcoma cells in fibrous matrices, demonstrating good agreement between the two methodologies. This study shed light on the critical role of adhesiveness in cell migration within crowded environments. Moreover, TrajPy has been used to explore the behavior of a simplified drop-like model representing biological cells as it undergoes the jamming transition (Mossmann, 2022), the physical process by which viscosity increases with increasing particle density (Mongera _et al._, 2018). In **Figure 1**D we can see the cell model with deformable boundaries and the system under two conditions, with low and high rigidity. Jamming transitions have recently been recognized as key in various biological processes, including cell migration, embryo development, tissue homeostasis, and disease progression (Sadati _et al._, 2014; Lenne and Trivedi, 2022; Oswald _et al._, 2017; Gotthieil _et al._, 2023). By utilizing TrajPy, the behavior of the drop-like model was quantified and analyzed as pressure was increased, leading to the change in fluid viscosity. In addition, the cell trajectories were classified into the 4 diffusion modes using a classifier built with TrajPy's synthetic data.
Through the application of TrajPy, the project gained valuable insights into how cell populations rapidly and significantly change their material properties during the jamming transition, revealing the physiological relevance of these transitions and permitting the exploration of potential regulatory mechanisms. **Figure 1**E gives an intuition of how a set of 4 features implemented in TrajPy changes between a trajectory with high persistence time (upper) and another with higher stochasticity (lower). More examples are provided in the supplementary information, in the package's documentation and in the code repository. ## Acknowledgements We thank all contributors from the open-source community who keep this project alive. ## Funding MM-S received funding from the National Council for Scientific and Technological Development (CNPq - Brazil), through the proc. 235101/2014-1, and the European Union's Horizon 2020 Research and Innovation program under the Marie Sklodowska-Curie Actions Grant, agreement No. 80113 (Scientia fellowship). EHM received funding from Te Pūnaha Matatini - Centre of Excellence for Complex Systems and the Brazilian Coordination for the Improvement of Higher Education Personnel (CAPES, financing Code 001). RDMT thanks the support of FEDER funds through the Operational Programme Competitiveness Factors - COMPETE and Fundação para a Ciência e a Tecnologia through the strategic projects UIDB/04564/2020 and UIDP/04564/2020. JRB is grateful to the CNPq, proc. 403427/2021-5 and 304958/2022-0, and to the Research Support Foundation of the State of Rio Grande do Sul (FAPERGS), TO 21/2551-0002024-5, for the funding support. ## Conflict of Interest None declared.
2303.04310
Aharonov-Bohm magnetism in open Fermi surfaces
Orbital diamagnetism requires closed orbits according to the Lifshitz-Kosevich theory. Therefore, one might expect that open Fermi surfaces do not have a diamagnetic response. Contrary to this expectation, we show that open orbits in finite systems do contribute a magnetic response which oscillates between diamagnetism and paramagnetism. The oscillations are similar to the Aharonov-Bohm effect, because the oscillation phase is set by the number of flux quanta through the area defined by the width of the sample and the distance between adjacent atomic layers. The magnetic response originates from the closed trajectories formed by counter-propagating open orbits coupled via specular boundary reflections. The phenomenon acts as a probe of the phase coherence of open electron trajectories.
Kostas Vilkelis, Ady Stern, Anton Akhmerov
2023-03-08T01:17:31Z
http://arxiv.org/abs/2303.04310v1
# Aharonov-Bohm magnetism in open Fermi surfaces ###### Abstract Orbital diamagnetism requires closed orbits according to the Lifshitz-Kosevich theory. Therefore, one might expect that open Fermi surfaces do not have a diamagnetic response. Contrary to this expectation, we show that open orbits in finite systems do contribute a magnetic response which oscillates between diamagnetism and paramagnetism. The oscillations are similar to the Aharonov-Bohm effect, because the oscillation phase is set by the number of flux quanta through the area defined by the width of the sample and the distance between adjacent atomic layers. The magnetic response originates from the closed trajectories formed by counter-propagating open orbits coupled via specular boundary reflections. The phenomenon acts as a probe of the phase coherence of open electron trajectories. ## I Introduction According to the classical theory by Langevin, diamagnetism is a result of the cyclotron motion of electrons in a magnetic field [1]. While this explanation provides an intuitive picture, it is incorrect due to the Bohr-van Leeuwen theorem [2; 3] that proves the absence of magnetic response in classical mechanics. On the other hand, a more modern interpretation by Lifshitz-Kosevich [4] explains diamagnetism as a result of quantized closed orbits along the Fermi surface. The picture by Lifshitz-Kosevich is simple yet incredibly successful at explaining phenomena like de Haas-van Alphen (dHVA) diamagnetic oscillations [5] through the Fermi surface shape of metallic systems [6; 7]. Because ballistic orbits in the magnetic field are rotated and rescaled cuts of the Fermi surface, a Fermi surface that spans the whole Brillouin zone results in an open cyclotron orbit. An example of an open orbit is shown in Fig. 1 by the black curve. These orbits appear in metals such as copper [8] and gallium [9] or in highly anisotropic materials like delafossites [10]. The Lifshitz-Kosevich theory [4] states that open orbits do not have a magnetic response. However, in multi-band materials with magnetic breakdown regions, it is possible to couple several open orbits into an effective closed orbit [11]. Such effective closed orbits have a magnetic response, but the contribution is exponentially small. On the other hand, open orbits in single-band materials do not close. As a result, the open orbits cannot be quantized and thus do not have a magnetic response according to the Bohr-van Leeuwen theorem [2; 3]. That raises the question of whether it is possible to observe quantum interference phenomena in open orbits without magnetic breakdown. In this paper, we develop a theory of the orbital magnetic response of open orbits in finite samples and predict magnetic oscillations alternating between diamagnetism and paramagnetism. Similar to the \(h/e\) magnetoresistance oscillations in layered materials [12], these oscillations have the frequency of the Aharonov-Bohm effect [13] through the loop defined between the adjacent conducting atomic layers and the width of the sample. In addition to requiring ballistic phase-coherent propagation, we find that these magnetic oscillations are sensitive to boundary quality: diffusive boundaries destroy the effect. With these conditions fulfilled, we predict that this phenomenon has a strength comparable to Landau diamagnetism. Figure 1: An example of closed (blue curve) and open (black curve) orbits.
The right-moving (solid black line) and the left-moving (dashed black line) open trajectories are connected through boundary reflections. The inset at the bottom right shows the corresponding Fermi surface, which is closed (open) for the blue (black) curve. ## II Open Orbit Quantization via Boundary Reflections We begin by considering an open Fermi surface in layered materials with weak interlayer coupling; however, our theory equally applies to any other open Fermi surfaces. The dispersion of such a layered system is \[\varepsilon(\kappa_{x},k_{z})=\varepsilon_{\parallel}(\kappa_{x})+2t_{\perp}\cos \left(k_{z}c\right)\!, \tag{1}\] where \(c\) is the unit cell spacing along the \(z\)-direction, \(t_{\perp}\) is the interlayer coupling and \(\varepsilon_{\parallel}(\kappa_{x})\) is a general dispersion. For brevity, we omit the \(y\)-dimension here and will introduce it later on. We linearize the dependence of \(\varepsilon_{\parallel}\) on \(\kappa_{x}\) at energy \(E\): \[\begin{split}&\varepsilon_{\parallel}(\kappa_{x})\approx E+\hbar v _{x}(E)\left[\kappa_{x}-k_{x}(E)\right]\\ &\varepsilon_{\parallel}(k_{x})=E,\quad\hbar v_{x}(E)=\frac{ \partial\varepsilon_{\parallel}(k_{x}(E))}{\partial k_{x}},\end{split} \tag{2}\] where \(k_{x}(E)\) and \(v_{x}(E)\) are the momentum and velocity along the \(x\)-direction at energy \(E\) when \(k_{z}=\pi/(2c)\), such that the out-of-plane energy is zero. Note that we require the Fermi surface of Eq. (1) to be open at the Fermi level \(\varepsilon=\mu\) along the \(k_{z}\) direction. The addition of a homogeneous magnetic field perpendicular to the interlayer coupling \(t_{\perp}\) introduces open orbits that run along the open direction of the Fermi surface. The magnetic field points along the \(y\)-direction \(\mathbf{B}=(0,B,0)\), and introduces a vector potential \(\mathbf{A}=(0,0,-Bx)\) in the Landau gauge. It enters the Hamiltonian Eq. (1) via the Peierls substitution [14] \(k_{z}\to k_{z}-\frac{e}{\hbar}Bx\). In this case, \(\kappa_{x}\) is not conserved anymore and therefore we substitute Eq. (2) into Eq. (1) at fixed energy \(\varepsilon(\kappa_{x},k_{z})=E\) in order to define the local momentum along \(x\): \[\kappa_{x}(E,k_{z},x)=\pm\left[k_{x}(E)-\frac{2t_{\perp}}{\hbar v_{x}(E)}\cos \left(k_{z}c-\frac{e}{\hbar}cBx\right)\right] \tag{3}\] Whenever \(k_{x}(E)>2t_{\perp}/(\hbar v_{x}(E))\), the trajectory in Eq. (3) is open because \(\kappa_{x}\) stays strictly positive/negative. The semiclassical motion in a magnetic field follows constant energy lines in momentum space and a real-space trajectory that is perpendicular to the momentum-space one. For the open Fermi surface we consider, this implies periodic motion in the \(z\)-direction, but not in the \(x\)-direction. In the latter, the electron flips its direction of motion only when it scatters off a boundary. For specular scattering off the boundary, the allowed trajectories are those that fulfil the Bohr-Sommerfeld quantization rule [15] given via WKB theory [16]: \[S(E,k_{z})=\oint\kappa_{x}(E,k_{z})dx=2\pi\left(n+\gamma\right), \tag{4}\] where \(n\in\mathbb{Z}\) and \(\gamma\) is the Maslov index, \(\gamma=1/2\) for soft potential turning points and \(\gamma=0\) for hard-wall boundaries. To calculate the quantized spectrum, we substitute Eq. (3) into Eq.
(4): \[S(E,k_{z})=2k_{x}(E)W-\frac{2\Gamma(\phi)\cos\left(k_{z}c\right) }{\Delta E_{x}}, \tag{5}\] \[\Delta E_{x}(E)=\frac{\hbar v_{x}(E)}{W},\quad\phi=\frac{e}{\hbar }cWB,\quad\Gamma=2t_{\perp}\operatorname{sinc}\left(\phi/2\right)\!, \tag{6}\] with \(\Delta E_{x}\) the energy spacing along \(x\) and \(\phi\) the number of magnetic flux quanta (in units of \(2\pi\)) passing through the loop of area \(cW\). The above equation for the action is quantized in terms of \(2\pi n\) (\(\gamma=0\) for hard-wall boundaries). The Bohr-Sommerfeld quantization in Eq. (6) defines the allowed energies by the relation: \[\hbar v_{x}(E_{n})\left[\frac{\pi n}{W}-k_{x}(E_{n})\right]+\Gamma\cos\left( k_{z}c\right)=0. \tag{7}\] The solution to Eq. (7) is \[E_{n}=\varepsilon_{\parallel}\Big{(}\frac{\pi n}{W}\Big{)}+\Gamma\cos\left(k_{ z}c\right)\!, \tag{8}\] where one identifies \(\Gamma\) as the effective bandwidth of the \(k_{z}\) band, as defined in Eq. (6). The bandwidth oscillates with the number of flux quanta \(\phi\) threading a rectangle of size \(Wc\). The oscillations decay in a Fraunhofer-type way, and their periodicity is that of the Aharonov-Bohm effect, as shown in Fig. 2. Furthermore, when the number of flux quanta is an integer, the bandwidth \(\Gamma\) collapses to zero, in which case the different \(k_{z}\) channels decouple. In the opposite limit, we see that if we take \(B=0\) in Eq. (8), we recover the original dispersion given by Eq. (1). Figure 2: Plot of multiple displaced and overlapping \(k_{z}\) bands (blue curves) as a function of flux quanta \(\phi\) passing through the system. The orange curve highlights one such band and its variation of bandwidth \(\Gamma\) with \(\phi\). The blue filling illustrates the occupation of bands below the chemical potential \(\mu\) (dashed line), with the intensity indicating the number of overlapping bands at that point. As the bandwidth changes with respect to \(\mu\), the occupation of the bands changes, which leads to a magnetic response. When an integer number of flux quanta pass through the system, different \(k_{z}\) trajectories are identical since they lead to the same energy, as shown in the top right inset, and therefore the bandwidth collapses. ## III Diamagnetic response of an open Fermi surface To find the total magnetisation of the system, we reintroduce the \(y\)-dimension. We start with the zero-temperature case and consider finite temperature later. In this case, the magnetisation of the system at fixed \(k_{y}\) is: \[M(\mu,k_{y})=\frac{d}{dB}\int_{-\infty}^{\mu}E\rho(E,k_{y})dE, \tag{9}\] where \(\rho(E,k_{y})\) is the density of states along \(x\) at energy \(E\) and wavevector \(k_{y}\). We express the density of states \(\rho\) through the action of a trajectory in Eq. (6), similar to the work by Doron and Smilansky [17]: \[\rho(E,k_{y})=-\frac{1}{W\pi^{2}}\int_{-\pi/c}^{\pi/c}dk_{z}\frac{d}{dE}\text{ Im}\ln\Big{(}1-e^{iS\left(E+i0^{+}\right)}\Big{)}, \tag{10}\] with Im the imaginary part. The integral in Eq. (9) is difficult to compute because it contains a highly oscillatory integrand along energy \(E\), given by Eq. (10). Therefore we perform an analytic continuation of Eq. (9) into the complex energy \(E+i\mathcal{E}\) (see the supplementary material). The analytic continuation converts the highly oscillatory terms in Eq. (10) into terms exponentially decaying away from \(\mathcal{E}=0\) and fixes the convergence of the integral. That allows us to linearise the dispersion in Eq.
(2) around the Fermi level \(\mu\) and compute the magnetisation at \(k_{y}\): \[M(k_{y},\mu) =\frac{1}{Wc\pi}\frac{d\Gamma}{dB}\sum_{n=1}^{\infty}\frac{\sin \left(2nk_{F}W\right)}{n}J_{1}\left(\frac{2n\Gamma}{\Delta E_{x}}\right), \tag{11}\] \[k_{F}(k_{y}) =k_{x}(\mu,k_{y}),\quad\Delta E_{x}(k_{y})=\frac{\hbar v_{F}(k_{ y})}{W},\] \[v_{F}(k_{y}) =v_{x}(E,k_{y}),\] where \(J_{1}\) is a Bessel function of the first kind and \(k_{F}\), \(v_{F}\) and \(\Delta E_{x}\) are the Fermi momentum, velocity and energy spacing along the \(x\)-direction. For \(k_{y}=0\), Eq. (11) is the magnetisation of a 2D system with an open Fermi surface. Equation (11) presents three types of oscillations. The first is the oscillatory Aharonov-Bohm dependence of \(\Gamma\) on the flux \(\phi\). The second is oscillations with \(\Gamma/\Delta E_{x}\) that originate from the commensuration of the energy separation \(\Delta E_{x}\) between quantized states in the \(x\)-direction with the bandwidth \(\Gamma\). These two types depend on the flux \(\phi\). The third type is oscillations with \(k_{F}W\) that originate from the position of the chemical potential with respect to the centre of a \(k_{z}\) band. The sum over \(n\) represents the different Fourier components of the oscillations with respect to the chemical potential \(\mu\). In 3D, the total magnetisation per unit volume is: \[\mathcal{M}(\mu)=\frac{1}{\pi}\int_{FS}M(k_{y},\mu)dk_{y}, \tag{12}\] where the integral over \(k_{y}\) is along the Fermi surface. We utilize the steepest-descent method to evaluate the leading-order contributions to this integral, originating from its behavior near the maxima of \(k_{F}(k_{y})\). To do so, we define the maxima of the Fermi wavevector along \(x\) as \(K_{F}\) and the corresponding Fermi surface curvature at these points: \[K_{F}=k_{F}(k_{y,0}),\quad\frac{dk_{F}(k_{y,0})}{dk_{y}}=0, \tag{13}\] \[-\frac{d^{2}k_{F}(k_{y,0})}{dk_{y}^{2}}=-\frac{\partial^{2} \varepsilon_{\parallel}}{\partial k_{y}^{2}}\frac{\partial k_{F}}{\partial \varepsilon_{\parallel}}=\left(\frac{m_{y}V_{F}}{\hbar}\right)^{-1}>0,\] where \(V_{F}=v_{F}(k_{y,0})\) and \(\Delta E_{x}=\Delta E_{x}(k_{y,0})\) are the Fermi velocity and energy spacing along the \(x\)-direction at \(K_{F}\). We substitute Eq. (13) into Eq. (12), deform the integration contour along the steepest descent and obtain the total magnetisation: \[\mathcal{M}(\mu) =\mathcal{M}_{0}\frac{d\text{sinc}(\phi/2)}{d\phi}\sum_{n=1}^{ \infty}\frac{\sin\left(2nK_{F}W\right)}{n^{3/2}}J_{1}\left(\frac{2n\Gamma}{ \Delta E_{x}}\right), \tag{14}\] \[\mathcal{M}_{0} =\frac{2et_{\perp}}{W\pi^{3/2}}\sqrt{k_{y,\text{eff}}W},\quad k _{y,\text{eff}}=\frac{m_{y}V_{F}}{\hbar},\] where \(k_{y,\text{eff}}\) is the effective Fermi \(y\)-momentum below which all the trajectories are orientated along the \(x\)-direction. The magnetisation in Eq. (14) is a complex oscillatory function of the Fermi wavevector and the magnetic field \(B\). To simplify the expression, we consider thermal broadening of the order of the miniband spacing, \(k_{B}T\approx\Delta E_{x}\), which suppresses the terms with \(n>1\) (see the supplementary material for details). \[\mathcal{M}(\mu,k_{B}T\approx\Delta E_{x})\approx \tag{15}\] \[\mathcal{M}_{0}\frac{d\text{sinc}(\phi/2)}{d\phi}\sin\left(2K_{F}W \right)J_{1}\left(\frac{2\Gamma}{\Delta E_{x}}\right).\] In Eq. (15), there are two distinct regimes: the narrow sample, where \(t_{\perp}/\Delta E_{x}\ll 1\), and the wide sample, where \(t_{\perp}/\Delta E_{x}\gg 1\).
These two limits correspond to the presence of either a single \(k_{z}\) energy band (given by Eq. (8)) at the Fermi level or multiple overlapping bands. In the narrow sample limit \(t_{\perp}/\Delta E_{x}\ll 1\), we expand the Bessel function \(J_{1}\) in Eq. (15) for small arguments and find a simplified form of the magnetisation: \[\mathcal{M}(\mu,k_{B}T \approx\Delta E_{x})\approx \tag{16}\] \[-\frac{2\mathcal{M}_{0}}{\phi^{3}}\frac{t_{\perp}}{\Delta E_{x}} \sin\left(2K_{F}W\right)\left(2-\phi\sin\phi-2\cos\phi\right).\] The magnetisation in Eq. (16) oscillates with \(\phi\), the number of flux quanta passing through an area \(cW\), similar to the Aharonov-Bohm effect; however, the oscillations decay with \(\phi^{3}\). Figure 2 provides a qualitative explanation of this behavior as the response of a partially occupied band whose bandwidth both oscillates and decays with \(\phi\). The oscillations are distinct from the dHVA diamagnetism [5] that oscillates with the inverse magnetic field \(1/B\) because the cyclotron orbit shrinks with magnetic field. Due to their similarity with the Aharonov-Bohm effect, we name the magnetisation oscillations of open Fermi surfaces Aharonov-Bohm magnetism. In the wide sample limit \(t_{\perp}/\Delta E_{x}\gg 1\), Eq. (15) exhibits a combination of multiple-frequency oscillations combined with an overall decay, as shown in Fig. 3. However, Aharonov-Bohm magnetism is still evident in this regime because, regardless of the chemical potential, the amplitude of the magnetisation oscillations reaches its maximum whenever an integer number of flux quanta passes through the area \(cW\). ## IV Practical considerations In order to estimate the magnitude of the magnetic susceptibility \(\chi\), we expand the magnetisation in Eq. (15) to first order in the flux \(\phi\): \[\chi=\mu_{0}\frac{d\mathcal{M}}{dB}\approx-\mu_{0}\frac{\mathcal{M}_{0}}{12} \frac{d\phi}{dB}\sin\left(2K_{F}W\right)J_{1}\left(\frac{4t_{\perp}}{\Delta E_ {x}}\right), \tag{17}\] where \(\mu_{0}\) is the vacuum permeability. To make the interpretation clearer, we consider Landau diamagnetism [18] of an isotropic dispersion (\(m_{y}/m_{x}=1\)) and Fermi wavevector \(K_{F}\): \[\chi_{L}=-\mu_{0}\frac{e^{2}K_{F}}{12\pi^{2}m_{\parallel}}. \tag{18}\] The ratios between the Aharonov-Bohm diamagnetism in Eq. (17) and the Landau diamagnetism in Eq. (18) in the narrow and wide sample limits are: \[\frac{\max(\chi)}{\chi_{L}}\approx \tag{19}\] \[\begin{cases}\sqrt{\frac{m_{\parallel}}{m_{\perp}}}\left(K_{F}W \right)^{-1/2}\sqrt{k_{y,\text{eff}}W}&4t_{\perp}\gg\Delta E_{x}\\ \sqrt{\pi}\left(\frac{m_{\parallel}}{m_{\perp}}\right)^{2}\left(\frac{W}{c} \right)^{3}\left(K_{F}W\right)^{-3/2}\sqrt{k_{y,\text{eff}}W}&4t_{\perp}\ll \Delta E_{x}\end{cases}\] where we substituted \(t_{\perp}=\hbar^{2}/(2m_{\perp}c^{2})\) and \(V_{F}=\hbar K_{F}/m_{\parallel}\), with \(m_{\perp}\) the mass along the \(z\)-direction and \(m_{\parallel}\) the mass along the in-plane direction. We see from Eq. (19) that in both limits AB diamagnetism favors large mass anisotropy \(m_{\parallel}/m_{\perp}\) and flat in-plane Fermi surfaces that maximize \(k_{y,\text{eff}}\). However, the narrow sample limit susceptibility scales much better with mass anisotropy and further scales with the width of the sample \(W\).
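To make the structure of Eq. (15) concrete, the following short numerical sketch (not from the paper; the parameter values are purely illustrative, in units where \(t_{\perp}=1\)) evaluates the thermally smoothed magnetisation as a function of the flux \(\phi\), using the unnormalized convention \(\mathrm{sinc}(x)=\sin(x)/x\) of Eq. (6):

```python
import numpy as np
from scipy.special import j1  # Bessel function of the first kind, J_1

def sinc(x):
    """Unnormalized sinc(x) = sin(x)/x, as used in Eq. (6)."""
    return np.where(np.abs(x) < 1e-8, 1.0 - x**2 / 6.0, np.sin(x) / x)

def dsinc(x):
    """Derivative of sin(x)/x."""
    return np.where(np.abs(x) < 1e-8, -x / 3.0, (x * np.cos(x) - np.sin(x)) / x**2)

def magnetisation(phi, t_perp=1.0, dE_x=0.2, KFW=1e4, M0=1.0):
    """Eq. (15): M = M0 * [d sinc(phi/2)/dphi] * sin(2 K_F W) * J1(2 Gamma/dE_x),
    with the bandwidth Gamma = 2 t_perp sinc(phi/2) from Eq. (6)."""
    gamma = 2.0 * t_perp * sinc(phi / 2.0)
    # chain rule: d/dphi sinc(phi/2) = dsinc(phi/2) / 2
    return M0 * 0.5 * dsinc(phi / 2.0) * np.sin(2.0 * KFW) * j1(2.0 * gamma / dE_x)

# phi counts flux through the area c*W; one flux quantum corresponds to phi = 2*pi.
phi = np.linspace(0.01, 12 * np.pi, 4000)
M = magnetisation(phi)
# The oscillation amplitude is largest near phi = 2*pi*n, i.e. whenever an
# integer number of flux quanta threads c*W and the bandwidth Gamma collapses.
```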
As a check, we use typical parameters of microscopic delafossite samples [19], and observe that a sample with Fermi momentum \(K_{F}W\approx 10^{4}\) and \(\sqrt{k_{y,\text{eff}}W}\approx 1\), mass anisotropy \(m_{\parallel}/m_{\perp}=10^{-3}\), lattice spacing \(c=1\,\text{\AA}\) and sample width \(W=1\,\mu\text{m}\) generates diamagnetism of the same order as Landau diamagnetism, \(\chi/\chi_{L}\approx 1\). The Bohr-Sommerfeld quantization condition in Eq. (4) relies on specular boundary reflections at the ends of the sample to close the trajectory. Figure 3: The magnetisation as a function of magnetic flux quanta passing through the system for different \(t_{\perp}/\Delta E_{x}\) ratios. The main (thick) curves are evaluated at \(K_{F}W=\pi/8\) whereas the secondary (thin) curves are evaluated at other \(K_{F}W\) values. To examine the role of diffusive boundary scattering with specular reflection probability \(r\), we evaluate the magnetisation numerically in finite samples. We observe that the amplitude of the magnetisation is proportional to \(r^{2}\), consistent with the closed trajectory requiring two specular reflections. Furthermore, we remark that random bulk scattering and dephasing work in the same way as diffusive boundary reflection: the probability of avoiding a random scattering/dephasing event in a sample of width \(2W\) with mean-free path/phase coherence length \(l_{0/\phi}\) is \(\exp(-2W/l_{0/\phi})\). Therefore, the strength of Aharonov-Bohm magnetism depends on both the mean-free path and the boundary quality: \[\mathcal{M}\propto r^{2}\left[1-\exp\left(-\frac{l_{0/\phi}}{2W}\right)\right]. \tag{20}\] Finally, we summarize the necessary conditions required to observe Aharonov-Bohm magnetism: 1. Open component to the Fermi surface. 2. Phase coherence length and mean-free path larger than the sample width, \(W\leq l_{\phi},l_{0}\). 3. High-quality sample boundaries to ensure specular reflections. One candidate family of materials which fulfills conditions 1 and 2 is the delafossites [19], like PdCoO\({}_{2}\) and PtCoO\({}_{2}\). Delafossites are highly anisotropic materials with a cylindrical Fermi surface [10] and a mean-free path on the order of \(20\,\mu\text{m}\) [20]. Additionally, the hexagonal Fermi surface in delafossites allows one to align a sample in a way that does not permit trajectories along the magnetic field direction and thus maximizes \(k_{y,\mathrm{eff}}\). An alternative candidate material is elemental copper [8]. Despite not having a fully open Fermi surface, it does have small open components. Even though that reduces the number of possible open trajectories (and thus \(k_{y,\mathrm{eff}}\)), the out-of-plane mass \(m_{\perp}\) in copper is smaller and thus more favourable than in delafossites. Additionally, it is possible to engineer copper samples with a mean-free path well into the micrometre scale [21]. However, in both cases, the sample boundaries pose a significant bottleneck which should be overcome for the effect to be observed. AS thanks the Israeli Science Foundation Quantum Science and Technology grant no. 2074/19 and the CRC 183 of the Deutsche Forschungsgemeinschaft for funding. The project received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program, grant agreements No. 788715 (LEGOTOP) and No. 828948 (AndQC). The work was also supported by the NWO VIDI Grant (016.Vidi.189.180). A.S.
formulated the initial project idea. All authors derived the theory. K.V. ran numerical calculations to verify the theory with input from A.A. K.V. authored the manuscript with input from other authors.
2310.04228
Characterization of high-fidelity Raman qubit gates
Raman qubits, represented by two ground or metastable quantum states coupled via an intermediate state, hold some advantages over directly coupled qubits, most notably much longer radiative lifetimes, shorter gate duration and lower radiation intensity due to using electric-dipole allowed optical transitions. They are also relatively simple to implement and control, making them an attractive option for building quantum gates for quantum computers. In this work, we present a simple and fast tomographic method to measure the errors of Raman qubit gates possessing the Morris-Shore dynamic symmetry. The latter occurs when the qubit states are on two-photon resonance and the driving fields have the same time dependence. The method is based on repeating the same gate multiple times, which amplifies the small coherent errors to sufficiently large values, which can be measured with high accuracy and precision. Then the (small) gate errors can be determined from the amplified errors by using the analytical connections between them.
Stancho G. Stanchev, Nikolay V. Vitanov
2023-10-06T13:15:24Z
http://arxiv.org/abs/2310.04228v1
# Characterization of high-fidelity Raman qubit gates ###### Abstract Raman qubits, represented by two ground or metastable quantum states coupled via an intermediate state, hold some advantages over directly coupled qubits, most notably much longer radiative lifetimes, shorter gate duration and lower radiation intensity due to using electric-dipole allowed optical transitions. They are also relatively simple to implement and control, making them an attractive option for building quantum gates for quantum computers. In this work, we present a simple and fast tomographic method to measure the errors of Raman qubit gates possessing the Morris-Shore dynamic symmetry. The latter occurs when the qubit states are on two-photon resonance and the driving fields have the same time dependence. The method is based on repeating the same gate multiple times, which amplifies the small coherent errors to sufficiently large values, which can be measured with high accuracy and precision. Then the (small) gate errors can be determined from the amplified errors by using the analytical connections between them. ## I Introduction Raman qubits -- qubits formed of the long-lived end states \(|0\rangle\) and \(|1\rangle\) of a three-state quantum system in a chainwise-coupled Raman configuration \(|0\rangle\leftrightarrow|a\rangle\leftrightarrow|1\rangle\) -- are a popular implementation of qubits for quantum technologies [1, 2, 3, 4]. They are particularly suitable for trapped ions and ultracold atoms, wherein Raman linkage patterns are ubiquitous [5, 6, 7, 8, 9]. Compared to directly-coupled qubits they have the advantage of using the electric-dipole allowed transitions \(|0\rangle\leftrightarrow|a\rangle\) and \(|1\rangle\leftrightarrow|a\rangle\) instead of the electric-dipole forbidden transition \(|0\rangle\leftrightarrow|1\rangle\). This allows one to use convenient optical transitions with much less laser power, resulting in faster gates with negligible light shifts and unwanted couplings [10, 11, 12]. Moreover, the availability of two fields brings more control parameters and the possibility to use more sophisticated methods for quantum control, such as composite [13], optimal-control and shortcut approaches [14, 15, 16, 17, 18]. However, Raman qubits are more demanding in regard to their control, as now three, rather than two, states are involved, with the necessity to avoid population leakage to the auxiliary intermediate state \(|a\rangle\). Moreover, the characterization of the fidelity also requires dealing with three states and hence SU(3) dynamics instead of SU(2). In certain cases, the three-state dynamics can be reduced to a two-state one. Such is the case when the intermediate state \(|a\rangle\) is far off resonance with the driving fields; then it can be eliminated adiabatically [19, 20, 21], which generates an _approximate_ SU(2) dynamics involving the qubit states only, with an effective coupling between the qubit states and ac Stark (light) shifts. Another case of SU(3) \(\rightarrow\) SU(2) reduction, this time exact, occurs when the Raman system possesses the Wigner-Majorana angular-momentum symmetry [22, 23, 24]. A third case, which is the focus of this paper, takes place when the Raman-coupled system possesses the Morris-Shore symmetry [25, 26, 27, 28, 29]; then the three-state system can be exactly decomposed into a two-state system and an uncoupled (dark) state.
This symmetry requires two-photon resonance between the end states \(|0\rangle\) and \(|1\rangle\), while the middle state \(|a\rangle\) can be off single-photon resonance. Moreover, the two Raman couplings must have the same time dependence, but their magnitudes and phases can be different; indeed, the leeway in the choice of the coupling magnitudes and phases has allowed the design of accurate quantum control schemes. The Morris-Shore transformation can be generalized to drop the two-photon resonance and timing conditions, although then the SU(3) \(\rightarrow\) SU(2) reduction is only approximate [30, 31]. The objective of the present paper is to develop a tomographic method for the determination of coherent gate errors in Raman-coupled qubits obeying the Morris-Shore (MS) symmetry. The method builds upon the one presented in [32, 33, 34] for two-level systems, where a certain high-fidelity gate is repeated multiple times with subsequent measurements of the population at the end of the sequence. The method takes advantage of the constructive interference created through the repetitions, leading to the amplification of the errors to large enough values. These values can be measured reliably, from which one can determine the single-gate errors due to the availability of analytic relations between the single-pass and multi-pass probabilities. Figure 1: Reduction of a three-state Raman \(\Lambda\) system (left) to an effective two-state system (right) by the Morris-Shore transformation. The original system consists of two main states \(|\psi_{0}\rangle\) and \(|\psi_{1}\rangle\) and an auxiliary excited state \(|\psi_{A}\rangle\). The Rabi frequencies of the original system share the same time dependence \(f(t)\) and the same detuning \(\Delta(t)\). The reduced system consists of an upper state \(|\psi_{a}\rangle\) (same as the original upper state), a dark state \(|\varphi_{0}\rangle\) and a bright state \(|\varphi_{1}\rangle\). This paper is organized in the following manner. First, in Sec. II we consider in detail the case when the Raman-coupled qubit is driven by two pulses of rectangular temporal shape in order to benefit from the simplicity of the solution. After deriving the basic tomographic principle, based on error amplification (the NR approximation, see Secs. III and IV below), we proceed in Sec. V to smooth pulse shapes and show that the simple solutions based on the rectangular shapes are applicable for smooth shapes too. Finally, Sec. VI presents some discussion and conclusions. ## II Single-pass and multi-pass transitions ### Single-pass transition Consider a three-state Raman \(\Lambda\) system under the conditions of the Morris-Shore (MS) transformation [25], shown in Fig. 1, with the original system on the left and the MS-transformed system on the right. The Hamiltonian of the system has the form \[\mathbf{H}(t)=\frac{1}{2}\left[\begin{array}{ccc}0&0&\Omega_{0}f(t)\\ 0&0&\Omega_{1}f(t)\\ \Omega_{0}^{*}f(t)&\Omega_{1}^{*}f(t)&2\Delta(t)\end{array}\right], \tag{1}\] where \(\Omega_{0}f(t)\) and \(\Omega_{1}f(t)\) are the Rabi frequencies, which have the same time dependence \(f(t)\), and \(\Omega_{0}\) and \(\Omega_{1}\) are complex constants. \(\Delta(t)\) is the detuning, which is the same for both fields.
The MS transformation reduces the original Hamiltonian (1) to an effective two-state Hamiltonian [19; 26], \[\widetilde{\mathbf{H}}(t)=\mathbf{S}\mathbf{H}(t)\mathbf{S}^{\dagger}= \left[\begin{array}{ccc}0&0&0\\ 0&0&\frac{1}{2}\Omega f(t)\\ 0&\frac{1}{2}\Omega f(t)&\Delta(t)\end{array}\right], \tag{2}\] where \(\mathbf{S}\) is the transforming complex-valued time-independent matrix \[\mathbf{S}=\left[\begin{array}{ccc}\frac{\Omega_{1}^{*}}{\Omega}&\frac {\Omega_{0}}{\Omega}&0\\ -\frac{\Omega_{0}^{*}}{\Omega}&\frac{\Omega_{1}}{\Omega}&0\\ 0&0&1\end{array}\right], \tag{3}\] and \(\Omega\) is the root-mean-square (RMS) Rabi frequency, which is a real constant, \[\Omega=\sqrt{|\Omega_{0}|^{2}+|\Omega_{1}|^{2}}. \tag{4}\] Note that the MS Hamiltonian \(\widetilde{\mathbf{H}}(t)\) is real and the complexity of the original Hamiltonian \(\mathbf{H}(t)\) is mapped onto the transformation matrix \(\mathbf{S}\). In the MS basis, the upper state \(\left|\psi_{a}\right\rangle\) is the same as in the original system, whereas the two MS lower states are superpositions of the original lower states, \[\left|\varphi_{0}\right\rangle =\frac{\Omega_{1}^{*}\left|\psi_{0}\right\rangle-\Omega_{0}^{*} \left|\psi_{1}\right\rangle}{\Omega}, \tag{5a}\] \[\left|\varphi_{1}\right\rangle =\frac{\Omega_{0}\left|\psi_{0}\right\rangle+\Omega_{1}\left|\psi _{1}\right\rangle}{\Omega}. \tag{5b}\] One of these -- the bright state \(\left|\varphi_{1}\right\rangle\) -- is coupled to the upper state \(\left|\psi_{a}\right\rangle\) with the RMS coupling \(\Omega f(t)\). The other -- the dark state \(\left|\varphi_{0}\right\rangle\) -- is uncoupled and hence the original three-state system reduces to a two-state one, \(\left|\varphi_{1}\right\rangle\leftrightarrow\left|\psi_{a}\right\rangle\). This reduction casts the original U(3) dynamics to an effective U(2) dynamics, which greatly facilitates the analysis. Without loss of generality, consider the initial time to be \(t_{i}=0\) and denote the final time by \(T\). The propagator in the MS basis can be written as \[\widetilde{\mathbf{U}}(T)=\left[\begin{array}{ccc}1&0&0\\ 0&a&b\\ 0&-b^{*}e^{-i\delta}&a^{*}e^{-i\delta}\end{array}\right], \tag{6}\] where \(a\) and \(b\) are complex-valued Cayley-Klein (CK) parameters, restricted by the relation \[|a|^{2}+|b|^{2}=1, \tag{7}\] and \(\delta\) is a phase defined by \[\delta=\int_{0}^{T}\Delta(t)\,dt. \tag{8}\] By using the inverse of the transformation (3), the original propagator takes the form \[\mathbf{U} =\mathbf{S}^{\dagger}\widetilde{\mathbf{U}}\mathbf{S}=\] \[=\left[\begin{array}{ccc}1+(a-1)\frac{|\Omega_{0}|^{2}}{\Omega ^{2}}&(a-1)\frac{\Omega_{0}\Omega_{1}^{*}}{\Omega^{2}}&b\frac{\Omega_{0}}{ \Omega}\\ (a-1)\frac{\Omega_{0}^{*}\Omega_{1}}{\Omega^{2}}&1+(a-1)\frac{|\Omega_{1}|^{2}}{\Omega^ {2}}&b\frac{\Omega_{1}}{\Omega}\\ -b^{*}\frac{\Omega_{0}^{*}}{\Omega}e^{-i\delta}&-b^{*}\frac{\Omega_{1}^{*}}{ \Omega}e^{-i\delta}&a^{*}e^{-i\delta}\end{array}\right]. \tag{9}\] If the system starts in state \(\left|\psi_{0}\right\rangle\), Eq. (9) dictates the following populations at the end, \[P_{0} =\left|1+(a-1)\,\frac{|\Omega_{0}|^{2}}{\Omega^{2}}\right|^{2} \tag{10a}\] \[P_{1} =\left|(a-1)\,\frac{\Omega_{0}\Omega_{1}}{\Omega^{2}}\right|^{2}\] (10b) \[P_{a} =\left|b\,\frac{\Omega_{0}}{\Omega}\right|^{2}. \tag{10c}\] Hereafter we shall refer to the propagator (9) and the probabilities (10) as the _single-pass propagator_ and _single-pass probabilities_. Let us assume that the system in Fig.
1 is a qubit with qubit states \(\left|\psi_{0}\right\rangle=\left|0\right\rangle\) and \(\left|\psi_{1}\right\rangle=\left|1\right\rangle\). Then we must have all population in the qubit subspace, which means that the CK parameter \(b\) must be zero, \(b=0\). Then, due to the probability conservation condition (7), the other CK parameter \(a\) will be a phase factor, i.e. \(a=e^{i\varphi}\). In fact, its phase \(\varphi\) is an important control parameter. The other control parameter is the ratio \(\Omega_{0}/\Omega_{1}\), which determines which quantum gate is created. The condition \(b=0\), viewed in the MS basis, implies no transition between the bright MS state \(|\varphi_{1}\rangle\) and the upper state \(|\psi_{a}\rangle\). Obviously, we are not interested in the trivial case of no interaction because then \(a=1\) and the propagator is the identity matrix. The condition \(b=0\) in the presence of interaction can be achieved in two scenarios. The simplest one is by a resonant pulse of temporal area \(2\pi\). Then \(a=-1\), \(b=0\) and the propagator (9) reduces to \[\mathbf{U}(T)=\left[\begin{array}{ccc}1-2\frac{|\Omega_{0}|^{2}}{\Omega^{2 }}&-2\frac{\Omega_{0}\Omega_{1}^{*}}{\Omega^{2}}&0\\ -2\frac{\Omega_{0}^{*}\Omega_{1}}{\Omega^{2}}&1-2\frac{|\Omega_{1}|^{2}}{\Omega^ {2}}&0\\ 0&0&-e^{-i\delta}\end{array}\right]. \tag{11}\] The second possibility is far off resonance, when \(|\Delta|\gg\Omega\) and the three-state problem can be reduced to a two-state one. In this case, the phase \(\varphi\) can be expressed approximately as \[\varphi\approx\frac{\Omega^{2}}{\Delta}\int_{0}^{T}f^{2}(t)\,dt. \tag{12}\] Equation (12) shows that the far-off-resonance case is suitable for constructing phase gates. In this paper, we consider only the resonance case. The reason is that in the far-off-resonance case, due to the significant increase in detuning and Rabi frequencies, the gates require much larger pulse area and hence are much slower. Moreover, probabilities for transitions to higher energy levels outside the three-state Raman system become prominent. This would compromise the quantum gates due to detrimental leakage errors. ### Target gate parameters and errors In the resonance case, we have \(\varphi=\pi\), hence \(a=-1\). The target gates have the following general form \[U_{tar}=\left[\begin{array}{ccc}\cos\zeta&e^{-i\phi}\sin\zeta&0\\ e^{i\phi}\sin\zeta&-\cos\zeta&0\\ 0&0&-1\end{array}\right], \tag{13}\] where the phase factor \(e^{i\phi}\) comes from the complexity of \(\Omega_{0}\) and \(\Omega_{1}\), while \(\zeta\) is the mixing angle defined as \[|\Omega_{0}|/\Omega=\sin(\zeta/2),\quad|\Omega_{1}|/\Omega=\cos(\zeta/2). \tag{14}\] In order to construct the X gate, we must have \(\zeta=\pi/2\), i.e., \(|\Omega_{0}|/\Omega=|\Omega_{1}|/\Omega=1/\sqrt{2}\). For the Hadamard gate, we need \(\zeta=\pi/4\), i.e., \(|\Omega_{0}|/\Omega=\sin(\pi/8)\) and \(|\Omega_{1}|/\Omega=\cos(\pi/8)\). In order to quantify the gate errors, stemming from imprecise resonance (nonzero \(\Delta\)) and inaccurate pulse area, it is convenient to express the complex-valued Cayley-Klein parameters \(a\) and \(b\), restricted by Eq. (7), by three real parameters as \[a =-e^{-i\alpha}\cos\gamma \tag{15a}\] \[b =-ie^{-i\beta}\sin\gamma, \tag{15b}\] where \(\alpha\), \(\beta\) and \(\gamma\) all have target values of \(0\), in order to retrieve the values of \(a\) and \(b\) in the ideal case. Therefore they are measures of _coherent gate errors_. From the resonance requirement \(\Delta\to 0\) and Eq.
(8), it follows that the target value of the phase \(\delta\) is also zero, \(\delta\to 0\), i.e. it is also an error measure. For high-fidelity quantum gates these errors are very small and their determination is challenging. The concept of this paper is to amplify these errors by gate repetitions to sufficiently large values which can be measured reliably, with high accuracy and precision. The parameter \(\zeta\) is considered as known. Indeed, it can be determined from a single-pass measurement of the probabilities. For example, it follows from Eq. (13) that \(P_{1}\approx\sin^{2}(\zeta)\), hence the parameter \(\zeta\) can be found from here. Then by substituting \(\zeta\) into Eq. (14), both \(|\Omega_{0}|/\Omega\) and \(|\Omega_{1}|/\Omega\) can be found as well. ### Multi-pass transition In our previous work [22], we found the \(N\)-pass propagator of a three-state Raman system. In Schrodinger's representation, the \(N\)-pass propagator is the \(N\)th power of the single-pass propagator \(\mathbf{U}\) (9); it reads \[\mathbf{U}^{N}\!\!=\!\!\left[\begin{array}{ccc}1+(a_{N}-1)\frac{|\Omega_{0} |^{2}}{\Omega^{2}}&(a_{N}-1)\frac{\Omega_{0}\Omega_{1}^{*}}{\Omega^{2}}&b_{N} \frac{\Omega_{0}}{\Omega}\\ (a_{N}-1)\frac{\Omega_{0}^{*}\Omega_{1}}{\Omega^{2}}&1+(a_{N}-1)\frac{|\Omega _{1}|^{2}}{\Omega^{2}}&b_{N}\frac{\Omega_{1}}{\Omega}\\ -b_{N}^{*}\frac{\Omega_{0}^{*}}{\Omega}e^{-iN\delta}&-b_{N}^{*}\frac{\Omega_{1} ^{*}}{\Omega}e^{-iN\delta}&a_{N}^{*}e^{-iN\delta}\end{array}\right], \tag{16}\] where the \(N\)-pass Cayley-Klein parameters \(a_{N}\) and \(b_{N}\) are connected to the single-pass ones \(a\) and \(b\) by the relations \[a_{N} =\left[\cos(N\vartheta)+i\text{Im}(a_{\delta})\frac{\sin(N \vartheta)}{\sin(\vartheta)}\right]e^{-iN\delta/2}, \tag{17a}\] \[b_{N} =b_{\delta}\frac{\sin(N\vartheta)}{\sin(\vartheta)}e^{-iN\delta/2}, \tag{17b}\] with \[a_{\delta} =a\,e^{i\delta/2}, \tag{18a}\] \[b_{\delta} =b\,e^{i\delta/2},\] (18b) \[\vartheta =\arccos(\text{Re}\,a_{\delta}). \tag{18c}\] The multi-pass probabilities are \[P_{0}^{(N)} =\left|1+(a_{N}-1)\frac{|\Omega_{0}|^{2}}{\Omega^{2}}\right|^{2}, \tag{19a}\] \[P_{1}^{(N)} =\left|(a_{N}-1)\frac{\Omega_{0}\Omega_{1}}{\Omega^{2}}\right|^{2},\] (19b) \[P_{a}^{(N)} =\left|b_{N}\frac{\Omega_{0}}{\Omega}\right|^{2}. \tag{19c}\] In Eqs. (17), the parameters \(\vartheta\) and \(\delta\) are multiplied by the factor \(N\) in some terms; therefore we are able to amplify them through the repetitions. We note that the parameters \(|\Omega_{0}|/\Omega\) and \(|\Omega_{1}|/\Omega\) in Eqs. (19) remain the same as in the single-pass propagator (9). For high gate fidelity, we must have * \(P_{a}^{(N)}\) to be very small after every pass, i.e. \[P_{a}^{(N)}\ll 1\quad(N=1,2,\ldots),\] (20) * \(P_{1}^{(N)}\) to be very small after every even pass, i.e. \[P_{1}^{(2M)}\ll 1\quad(M=1,2,\ldots),\] (21) We use these probabilities as indicators by which to determine the errors \(\alpha,\beta\) and \(\gamma\) for the X and Hadamard gates. ## III Near-Resonance (NR) approximation In this section, our objective is to find the connection between errors in the Hamiltonian and those in the propagator. To achieve this, we use the Rabi model and apply a near-resonance (NR) approximation derived from it. We find that this approximation is not only suitable for the Rabi model but also applicable to other models lacking analytical solutions.
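As a quick numerical sanity check (not part of the paper; the parameter values are illustrative), the multi-pass relations (16)-(17) can be verified by comparing the \(N\)th matrix power of the single-pass propagator (9) with the propagator assembled from \(a_{N}\) and \(b_{N}\); the last lines also show the error amplification at work:

```python
import numpy as np

# Single-pass Cayley-Klein parameters with small coherent errors, here taken
# in the near-resonance form with filling ratio r = 1 (illustrative values).
eps, delta = 0.03, 0.02
a = -np.cos(eps) * np.exp(-1j * delta / 2)
b = -1j * np.sin(eps) * np.exp(-1j * delta / 2)

# Couplings for an X gate: |Omega_0| = |Omega_1| = Omega / sqrt(2), Omega = 1.
O0, O1, Om = 1 / np.sqrt(2), 1 / np.sqrt(2), 1.0

# Single-pass propagator, Eq. (9).
U = np.array([
    [1 + (a - 1) * abs(O0)**2 / Om**2, (a - 1) * O0 * np.conj(O1) / Om**2, b * O0 / Om],
    [(a - 1) * np.conj(O0) * O1 / Om**2, 1 + (a - 1) * abs(O1)**2 / Om**2, b * O1 / Om],
    [-np.conj(b) * np.conj(O0) / Om * np.exp(-1j * delta),
     -np.conj(b) * np.conj(O1) / Om * np.exp(-1j * delta),
     np.conj(a) * np.exp(-1j * delta)],
])

# Multi-pass Cayley-Klein parameters, Eqs. (17)-(18).
N = 25
a_d, b_d = a * np.exp(1j * delta / 2), b * np.exp(1j * delta / 2)
theta = np.arccos(a_d.real)
chord = np.sin(N * theta) / np.sin(theta)
a_N = (np.cos(N * theta) + 1j * a_d.imag * chord) * np.exp(-1j * N * delta / 2)
b_N = b_d * chord * np.exp(-1j * N * delta / 2)

# U^N computed directly agrees with Eq. (16) built from a_N and b_N ...
UN = np.linalg.matrix_power(U, N)
print(abs(UN[0, 2] - b_N * O0 / Om))                           # ~1e-14
print(abs(UN[2, 2] - np.conj(a_N) * np.exp(-1j * N * delta)))  # ~1e-14

# ... and the leaked population P_a^(N) grows as sin^2(N*eps): the tiny
# single-pass error eps is amplified into an easily measurable signal.
print(abs(UN[2, 0])**2, np.sin(N * eps)**2 * abs(O0 / Om)**2)
```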
### Assumptions Because our objective is to design a protocol for determining the errors of high-fidelity Raman gates, we assume that their errors are small. Hence we make three general assumptions. * We assume that the detuning \(\Delta\) is small and constant, \[|\delta|\ll\pi\quad(\text{with }\delta=\Delta T),\] (22a) which we call the _detuning error_. * For the pulse shape \(f(t)\), we define the _filling ratio_ \[r=\frac{1}{T}\int_{0}^{T}f(t)\,dt\quad(0\leq r\leq 1),\] (22b) the role of which will be revealed below. * Because at resonance the Cayley-Klein parameter \(a\) is \(a=\cos(A/2)\), where \(A=\int_{0}^{T}\Omega f(t)\,dt\) is the RMS pulse area, and because the target value of \(a\) is \(-1\), the RMS pulse area \(A\) must be very close to \(2\pi\); hence we should have \[A=\Omega\int_{0}^{T}f(t)\,dt=\Omega rT=2(\pi-\epsilon),\] (22c) where \(|\epsilon|\ll\pi\) is the _pulse area error_. We will make these assumptions throughout the text hereafter. ### Rabi model and NR approximation The Rabi model is convenient in two aspects: first, it is an exactly solvable model and second, it allows for any filling ratio \(0\leq r\leq 1\). The periodic time dependence of the Rabi model is shown in Fig. 2 and has the following form \[f_{R}(t)=\sum_{n=0}^{N-1}R\Big{[}\frac{t}{rT}-\frac{(2n+1)}{2r}\Big{]}, \tag{23}\] where \(R\) denotes the rectangular function. Figure 2: First three pulses of the periodic time dependence \(f_{R}(t)\), related to the Rabi model and given in Eq. (23). In this case, both CK parameters are given by the exact expressions \[a =\left[\cos(\sigma/2)+i\frac{\delta}{\sigma}\sin(\sigma/2)\right] e^{-i\delta r/2}, \tag{24a}\] \[b =-i\frac{A}{\sigma}\sin(\sigma/2)e^{-i\delta/2},\] (24b) where \(\sigma=\sqrt{\delta^{2}+A^{2}/r^{2}}\). Taking into account the conditions (22), we find the following expressions, which will be referred to as the _NR approximation_, \[a \approx-\cos(\epsilon)e^{-i\delta r/2},\] (25a) \[b \approx-i\sin(\epsilon)e^{-i\delta/2},\] (25b) \[a_{\delta} \approx-\cos(\epsilon)e^{i\delta(1-r)/2},\] (25c) \[b_{\delta} \approx-i\sin(\epsilon),\] (25d) \[\vartheta \approx\pi-\sqrt{\epsilon^{2}+\delta^{2}(1-r)^{2}/4}. \tag{25e}\] From here and Eq. (15) we find \[\alpha\approx\delta r/2,\quad\beta\approx\delta/2,\quad\gamma\approx\epsilon. \tag{26}\] Note that the parameters \(\alpha,\beta\) and \(\gamma\) are propagator (gate) parameters, while \(\delta,r\) and \(\epsilon\) are Hamiltonian parameters; hence we have direct connections between them. ### Fidelity For any unitary gate \(U\) the fidelity is \[F=\frac{|\mathrm{Tr}(U_{0}U^{\dagger})|^{2}}{d^{2}} \tag{27}\] where \(U_{0}\) is the target gate and \(d\) is the Hilbert space dimension. In our case \(d=3\). * For \(r=1\), i.e. \(\alpha=\beta\), we find from Eq. (27) for the fidelity \[F = \frac{1}{9}\left[\cos^{2}\zeta^{\prime}+2\cos\alpha\cos\zeta^{ \prime}\cos\gamma(1+\cos\zeta^{\prime})\right.\] (28) \[+ \left.(1+\cos\zeta^{\prime})^{2}\cos^{2}\gamma\right],\] where \(\zeta^{\prime}\) is the error of \(\zeta\), i.e. for the X gate \(\zeta^{\prime}=\frac{\pi}{2}-\zeta\) and for the Hadamard gate \(\zeta^{\prime}=\frac{\pi}{4}-\zeta\). For small error, \(|\zeta^{\prime}|\ll 1\), we find from here \[F = \frac{1+4\cos\alpha\cos\gamma+4\cos^{2}\gamma}{9}\] (29) \[- \frac{1+3\cos\alpha\cos\gamma+2\cos^{2}\gamma}{9}\zeta^{\prime 2}.\] For \(\zeta^{\prime}=0\), only the first term survives. Obviously, if all errors vanish, \(\alpha=\gamma=\zeta^{\prime}=0\), then \(F=1\). * For \(r<1\), the result derived from Eq.
(27) for the fidelity is too cumbersome to be presented here. For \(\zeta^{\prime}=0\), the fidelity can also be expressed using the parameters \(\epsilon\), \(\delta\), and \(r\) that characterize the Hamiltonian. Thus, we have: \[F = \frac{1}{9}\left[1+2\cos\epsilon\Big{(}\cos(r\delta/2)+\cos( \delta-r\delta/2)\right.\] (30) \[+ \left.\left(1+\cos(\delta-r\delta)\right)\cos\epsilon\Big{)} \right].\] ## IV Determination of the gate errors Now we shall determine the errors \(\alpha\), \(\beta\) and \(\gamma\) specified in Eq. (26) by the multi-pass probabilities in the NR approximation. All figures use the exact solution of the Rabi model. Nevertheless, for the error range of \(0.05\), which is of interest to us, the plots are practically identical with those for the NR approximation. In Sec. V, we apply the NR approximation (25) to other models with various pulse shapes \(f(t)\), and hence various filling ratios \(r\), and compare the results with the exact (or numerical) solutions. In all figures, we choose \(|\Omega_{0}|/\Omega=|\Omega_{1}|/\Omega=1/\sqrt{2}\), which corresponds to the X (NOT) gate. ### Determination of \(\gamma\) By using the connections (17) and the exact Rabi model solution (24), we can obtain the multi-pass probabilities \(P_{a}^{(N)}\), according to Eq. (19). According to the NR approximation (25), the probability is \[P_{a}^{(N)}=\frac{|\Omega_{0}|^{2}}{\Omega^{2}}\frac{\sin^{2}\epsilon\sin^{2} N\vartheta}{\sin^{2}\vartheta}, \tag{31}\] where \(\vartheta\) is given in Eq. (25e). It is shown in Fig. 3. Equation (31) gives almost identical results in the range \(|\delta|<0.05\) and \(|\epsilon|<0.05\) as the exact one in Fig. 3, and therefore the NR approximation plot is not shown. From Fig. 3 we see that at small \(\delta\), the multi-pass probability \(P_{a}^{(N)}\) depends weakly on \(\delta\) and \(r\), and at \(\delta=0\) we simply have \[P_{a}^{(N)}=\frac{|\Omega_{0}|^{2}}{\Omega^{2}}\sin^{2}(N\epsilon), \tag{32}\] from which \(\gamma=\epsilon\) can be found as \[\gamma=\frac{1}{N}\arcsin\Big{[}\frac{\Omega}{|\Omega_{0}|}\sqrt{P_{a}^{(N)}} \Big{]}. \tag{33}\] Figure 3: Multi-pass probability \(P_{a}^{(N)}\) according to the exact Rabi model solutions (24). The plots are nearly identical for the NR approximation (31). For small \(\epsilon\) and \(\delta\), the probability \(P_{a}\) depends only slightly on both \(\delta\) and \(r\) and can be approximated according to Eq. (32). This allows the error \(\gamma=\epsilon\) to be determined by the multi-pass probability in Eq. (33). The values of \(r\) are selected for the sake of comparison because they naturally emerge for other pulse shapes in Sec. V. ### Determination of \(\alpha\) and \(\beta\) Having already determined the error \(\gamma=\epsilon\), we proceed to determine \(\alpha=r\delta/2\). Having found \(\alpha\) and knowing the value of \(r\) a priori, we can find the value of \(\beta=\delta/2\) simply as \(\beta=\alpha/r\). Hence we focus our attention on the determination of \(\alpha\). We will show that depending on the filling ratio \(r\), two approaches are required - one for \(r<0.5\) and another for \(r>0.5\). #### iv.2.1 Determination of \(\alpha\) for \(r<0.5\) For \(r<0.5\), the probability \(P_{a}\) depends very strongly on \(\delta\), \(r\) and \(N\), which is visible in Fig. 3. With \(\epsilon\) already known (measured at a smaller number of repetitions \(N\), see above), we can perform another experiment with larger \(N\). In Fig. 4 we see these two regions of the probabilities.
Returning to Fig. 4: the first region (dashed lines) corresponds to smaller \(N\), where \(\epsilon\) is determined for any value of \(\delta\). The inflection point is almost the same for all curves, and after it the curves begin to diverge for different \(\delta\). Expanding Eq. (31) in a Taylor series up to order \(\delta^{2}\), we find \[P_{a}^{(N)}=\frac{|\Omega_{0}|^{2}}{\Omega^{2}}\sin^{2}(N\epsilon)\left[1-\frac{\delta^{2}(1-r)^{2}(1-N\epsilon\cot N\epsilon)}{4\epsilon^{2}}\right], \tag{34}\] from which \(\delta\), and hence the error \(\alpha=r\delta/2\), can be found. #### iv.2.2 Determination of \(\alpha\) for \(r>0.5\) For \(r>0.5\) the curves in Fig. 4 come close to each other, and at \(r=1\) they overlap for any \(N\). In this case we need another approach to determine \(\alpha\). Proceeding as in Sec. IV.1, using the connections (17) and the exact Rabi-model solution (24), we can find the multi-pass probabilities \(P_{1}^{(2M)}\) of Eq. (19), shown in Fig. 5. In the NR approximation (25) the probability is \[P_{1}^{(2M)}=\frac{2|\Omega_{0}|^{2}|\Omega_{1}|^{2}}{\Omega^{4}}\left[1-\cos\frac{N\delta}{2}\cos N\vartheta+\frac{\sin\frac{N\delta}{2}\sin N\vartheta\sin\frac{\delta(1-r)}{2}}{\sin\vartheta}-\frac{\sin^{2}\epsilon\sin^{2}N\vartheta}{2\sin^{2}\vartheta}\right]. \tag{35}\] The probability in Eq. (35) is almost indistinguishable from the exact one in Fig. 5, and therefore the NR-approximation plot is not shown. If \(r\) is known approximately, then \(\delta\) can be found numerically from Eq. (35). For small \(\epsilon\) (\(\epsilon<0.005\)) it can be approximated by \[P_{1}^{(2M)}\approx 4\frac{|\Omega_{0}|^{2}|\Omega_{1}|^{2}}{\Omega^{4}}\sin^{2}(N\delta r/4), \tag{36}\] from which \(\alpha=\delta r/2\) can be found. Figure 4: Multi-pass probability \(P_{a}^{(N)}\) according to the exact Rabi-model solutions (24) at \(\epsilon=0.025\) and \(r=0.25\). At a smaller number of repeated gates \(N\) (the dashed-line region), all curves almost overlap at a given \(\epsilon\), which allows \(\gamma=\epsilon\) to be determined. At larger \(N\), the curves diverge, which allows \(\delta\) to be determined. Figure 5: Multi-pass probability \(P_{1}^{(2M)}\). For known \(\epsilon\) and \(r>0.5\), the detuning error \(\delta\) can be found numerically. For small \(\epsilon\leq 0.005\), \(P_{1}\) can be approximated according to Eq. (36). For small \(\epsilon\) and \(r<0.5\), the amplification is not sufficient, and the procedure described in Sec. IV.2.1 must be used. ## V Comparisons of the NR approximation with other models In Secs. III.2 and IV, we stated that the NR approximation nearly coincides with the exact Rabi model for error ranges up to \(0.05\). In this section, we present results for three additional models with various time dependencies \(f(t)\) and filling ratios \(r\) and compare the results with those of the NR approximation. We will see that for a Raman qubit driven by an MS Hamiltonian, the NR approximation is a convenient approximation also for other pulse shapes, which considerably broadens the applicability of the NR approach. ### Rosen-Zener (RZ) model The Rosen-Zener (RZ) model [35], which assumes a hyperbolic-secant pulse shape \(\mathrm{sech}(t/T)\) (running from \(-\infty\) to \(+\infty\)), is exactly solvable. Strictly speaking, even a single pass requires an infinitely long duration, meaning a filling ratio \(r\to 0\).
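The truncation numbers quoted in the next sentence can be checked with a few lines of Python, assuming a single pulse \(\mathrm{sech}(t/T)\) confined to \([-\tau,\tau]\): the filling ratio is then \(r=(1/2\tau)\int_{-\tau}^{\tau}\mathrm{sech}(t/T)\,dt\to\pi T/(2\tau)\) for large \(\tau\), so \(r=0.1\) gives \(\tau\approx 15.7\,T\) and a truncated amplitude of about \(3\times 10^{-7}\).

```python
import numpy as np
from scipy.integrate import quad

# Check of the truncation numbers quoted below, assuming a single pulse
# sech(t/T) confined to [-tau, tau]: r = (1/(2*tau)) * Int sech(t/T) dt,
# which tends to pi*T/(2*tau) for large tau.
T, r_target = 1.0, 0.1
tau = np.pi * T / (2 * r_target)                         # -> 15.708 T
r = quad(lambda t: 1 / np.cosh(t / T), -tau, tau)[0] / (2 * tau)
print(tau / T, r, 1 / np.cosh(tau / T))                  # ~15.71, ~0.100, ~3.0e-7
```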
However, for a pulse of finite duration \([-\tau,\tau]\), truncated sufficiently far from its maximum such that \(r\leq 0.1\) (meaning \(\tau\geq 15.7T\), which corresponds to a truncated amplitude of less than \(3\times 10^{-7}\) of the maximum value), the RZ model is essentially exact. Following the assumptions of Sec. III.1, the periodic time dependence is \[f_{RZ}(t)=\sum_{n=0}^{N-1}\mathrm{sech}\,\Big[\frac{\pi}{r}\Big(\frac{t}{T}-\frac{2n+1}{2}\Big)\Big], \tag{37}\] shown in Fig. 6. In this example we choose the filling ratio \[r=\frac{1}{T}\int_{0}^{T}f_{RZ}(t)\,dt=0.1. \tag{38}\] Under the conditions of Sec. III.1, the CK parameters have the exact solution \[a=\frac{\Gamma^{2}\big(\frac{1}{2}+i\frac{\delta r}{2\pi}\big)}{\Gamma\big(\frac{1}{2}+\frac{A}{2\pi}+i\frac{\delta r}{2\pi}\big)\Gamma\big(\frac{1}{2}-\frac{A}{2\pi}+i\frac{\delta r}{2\pi}\big)}, \tag{39a}\] \[b=-i\frac{\sin(A/2)}{\cosh(\delta r/2)}e^{-i\delta/2}. \tag{39b}\] The multi-pass probabilities (19) can be found from Eqs. (39) and (17). In Fig. 7 we show the comparison between the exact (RZ) probabilities \(P_{a}^{(N)}\) and the NR-approximated ones (31). Both plots are practically the same over the error range of \(0.05\). The plots for the populations \(P_{1}^{(2M)}\) are not shown, but they are practically indistinguishable from the ones shown in Fig. 5 for \(r=0.1\). Based on these findings we conclude that the NR approximation and the determination method presented in the preceding section apply equally well to sech pulses. ### \(\sin^{2}\) model Now we present an example where the time dependence is \[f_{S}(t)=\sin^{2}(\pi t/T). \tag{40}\] The benefit of this pulse shape is that it has a well-defined finite duration (contrary to the sech shape) and a smooth profile (contrary to the rectangular pulse); unfortunately, the Schrödinger equation cannot be solved analytically for it, but it is easily integrated numerically. The filling ratio for this pulse shape is \[r=\frac{1}{T}\int_{0}^{T}f_{S}(t)\,dt=0.5. \tag{41}\] The probability map for \(P_{a}^{(N)}\) is shown in Fig. 8. The NR approximation for the same filling ratio \(r=0.5\) (top row) is almost identical to the numerical results (bottom row). The plots for the populations \(P_{1}^{(2M)}\) are not shown because they are very similar to the ones shown in Fig. 5 for \(r=0.5\). Figure 6: Illustration of the pulses of the time dependence \(f_{RZ}(t)\) for the RZ model (37) with \(r=0.1\). Figure 7: Comparison between the multi-pass probabilities \(P_{a}^{(N)}\) obtained from the exact RZ-model solutions (39) and the NR approximation (31), for \(r=0.1\). ### Second trigonometric model We now proceed to another numerically solved pulse shape, with the time dependence \[f_{C}(t)=1-\cos^{10}[\pi t/T], \tag{42}\] shown in Fig. 9. Compared to the \(\sin^{2}\) model, it features a larger filling ratio, \(r=\frac{1}{T}\int_{0}^{T}f_{C}(t)\,dt=0.754\); hence the choice of this value of \(r\) in the corresponding frames of Figs. 4 and 5. For this model, to provide additional information, we compare the probability maps for \(P_{1}^{(2M)}\) instead of \(P_{a}^{(N)}\), as depicted in Fig. 10. The NR approximation for the same filling ratio, \(r=0.754\) (top row), is nearly identical to the numerical results (bottom row). The plots for the populations \(P_{a}^{(N)}\) are not shown because they are very similar to the ones shown in Fig. 3 for \(r=0.754\); a minimal numerical cross-check of the NR predictions for such smooth pulses is sketched below.
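As a sanity check of such numerical comparisons, the sketch below integrates the effective two-state problem for the \(\sin^{2}\) pulse (40) and compares the transition probability with the NR prediction (25b). The two-state Hamiltonian convention \(H(t)=\tfrac{1}{2}\begin{pmatrix}-\Delta&\Omega f(t)\\ \Omega f(t)&\Delta\end{pmatrix}\) (with \(\hbar=1\)) and the parameter values are assumptions for illustration, not taken from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed two-state convention (hbar = 1):
# H(t) = 0.5 * [[-Delta, W*f(t)], [W*f(t), Delta]] -- an illustrative assumption.
T, r = 1.0, 0.5                       # sin^2 pulse, Eq. (41)
eps, delta = 0.02, 0.03               # assumed small errors
Delta = delta / T                     # Eq. (22a): delta = Delta*T
A = 2 * (np.pi - eps)                 # Eq. (22c)
W = A / (r * T)                       # RMS Rabi frequency

f = lambda t: np.sin(np.pi * t / T)**2            # Eq. (40)

def rhs(t, y):
    c = y[:2] + 1j * y[2:]            # split real/imag for the real-valued solver
    H = 0.5 * np.array([[-Delta, W * f(t)], [W * f(t), Delta]])
    dc = -1j * H @ c
    return np.concatenate([dc.real, dc.imag])

sol = solve_ivp(rhs, [0, T], [1.0, 0.0, 0.0, 0.0], rtol=1e-10, atol=1e-12)
b2 = sol.y[1, -1]**2 + sol.y[3, -1]**2
print(b2, np.sin(eps)**2)             # numerical |b|^2 vs NR prediction (25b)
```

At \(\Delta=0\) the transition probability is exactly \(\sin^{2}(A/2)=\sin^{2}\epsilon\), so the check is self-validating; for small nonzero \(\Delta\) the agreement with the NR value remains close, as the figures in this section illustrate.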
## VI Conclusions We presented a tomographic method for the characterization of high-fidelity Raman qubit gates that obey the Morris-Shore transformation, the most important condition for which is two-photon resonance between the qubit states. The proposed method makes use of coherent amplification of the gate errors by repeating the same gate numerous times. By examining the multi-pass probabilities, we establish their dependence on four key parameters: the pulse area error \(\epsilon\), the detuning error \(\delta\), the filling ratio \(r\), and the number of pulses (passes) \(N\). From these expressions it is feasible to directly calculate the errors \(\epsilon\) and \(\delta\), which determine the gate errors \(\alpha\), \(\beta\), and \(\gamma\). Since the Raman system is reduced to an effective two-state system in the near-resonance regime, employing the NR approximation with a filling ratio \(r\) serves as a convenient and practical approach. Additionally, this approximation can be extended to other pulse shapes, thereby removing the restriction to the rectangular shape. ###### Acknowledgements. This research is supported by the Bulgarian national plan for recovery and resilience, contract BG-RRP-2.004-0008-C01 (SUMMIT), project number 3.1.4.
2308.08880
Continuous optical generation of microwave signals for fountain clocks
For the optical generation of ultrastable microwave signals for fountain clocks we developed a setup, which is based on a cavity stabilized laser and a commercial frequency comb. The robust system, in operation since 2020, is locked to a 100 MHz output frequency of a hydrogen maser and provides an ultrastable 9.6 GHz signal for the interrogation of atoms in two caesium fountain clocks, acting as primary frequency standards. Measurements reveal that the system provides a phase noise level which enables quantum projection noise limited fountain frequency instabilities at the low $10^{-14} (\tau /\mathrm{s})^{-1/2}$ level. At the same time it offers largely maintenance-free operation.
Burghard Lipphardt, Patrick Walkemeyer, Michael Kazda, Johannes Rahm, Stefan Weyers
2023-08-17T09:27:09Z
http://arxiv.org/abs/2308.08880v1
# Continuous optical generation of microwave signals for fountain clocks ###### Abstract For the optical generation of ultrastable microwave signals for fountain clocks we developed a setup, which is based on a cavity stabilized laser and a commercial frequency comb. The robust system, in operation since 2020, is locked to a 100 MHz output frequency of a hydrogen maser and provides an ultrastable 9.6 GHz signal for the interrogation of atoms in two caesium fountain clocks, acting as primary frequency standards. Measurements reveal that the system provides a phase noise level which enables quantum projection noise limited fountain frequency instabilities at the low \(10^{-14}(\tau/\mathrm{s})^{-1/2}\) level. At the same time it offers largely maintenance-free operation. Physikalisch-Technische Bundesanstalt (PTB), Bundesallee 100, 38116 Braunschweig, Germany [email protected] ## 1 Introduction The performance of atomic clocks is characterized by their systematic uncertainty and frequency instability. Depending on the actual measurement time, the latter determines the statistical measurement uncertainty. Today the most accurate realization of the SI second is obtained from caesium fountain clocks [1]. In the best case, systematic and statistical measurement uncertainties at the low \(10^{-16}\) level have been reached in measurement campaigns lasting a number of days [2]. Here, advanced microwave generation methods making use of cryogenic oscillators or ultrastable lasers are advantageous [3, 4, 5, 6], as they make it possible to overcome the otherwise limiting phase noise of microwave syntheses based on commercially available quartz oscillators. Fountain clock applications which benefit from the lowest overall uncertainties are calibrations of International Atomic Time (TAI) [2], absolute frequency measurements of optical clock transitions [7, 8, 9], steering of local time scales [10, 11] and fundamental physics, like the search for changes in fundamental constants [7, 8, 12], violations of local position or Lorentz invariance (LPI and LLI) [13, 14] or dark matter [15, 16, 17]. Moreover, overcoming phase noise limitations through advanced microwave signal generation techniques enables more accurate investigations of systematic effects, which may lead to improved systematic uncertainties. For ultrastable microwave generation for the PTB caesium fountain clocks, the frequency stability of a cavity stabilized laser is transferred to the microwave spectral range via a frequency comb. For several years, an optically stabilized microwave oscillator (OSMW) at 9.6 GHz was in continuous operation in a former setup [18, 19], using the frequency comb as a transfer oscillator [20]. As a replacement for this system, here we describe a new, more robust setup for providing an optically generated microwave signal (OGMW), where the 9.6 GHz microwave signal is obtained directly from a commercial frequency comb, which is locked to a cavity-stabilized fiber laser via high bandwidth actuators [21]. In the following Section 2 we describe our setup in detail and then present our results in Section 3. ## 2 Setup As in the previous setup, the source for the short-term frequency stability of the microwave signal is a 1.5 \(\mu\)m fiber laser (Koheras BASIK E15) locked by means of the Pound-Drever-Hall technique to an optical cavity made of ultralow expansion (ULE) glass with highly reflecting ULE mirrors [18]. The resulting laser frequency stability is \(\sim\)\(10^{-15}\) for averaging times in the range of 1 to 10 s.
The output of this cavity-stabilized laser (CSL) is now split and reaches the former and the new frequency comb system via path length stabilized optical fibers. As a central part of our new setup, we utilise a commercial frequency comb system from Menlo (FC1500-250-ULN). Its femtosecond laser (FSL), with a repetition rate \(f_{\mathrm{rep}}\) of 240 MHz and a pulse length of 100 fs, generates an optical comb at a wavelength of 1.5 \(\mu\)m with an equidistant mode distribution of about 30 nm width. Two fast actuators with a control bandwidth of about 1 MHz and appropriate control electronics allow the stability of the CSL to be transferred to all modes of the comb. The actuators adjust the length and dispersion of the FSL cavity, respectively, to independently control \(f_{\mathrm{rep}}\) and the carrier envelope offset frequency \(f_{\mathrm{ceo}}\). In Fig. 1 both control loops are depicted (beige-colored and peach-colored blocks). To lock \(f_{\mathrm{rep}}\) to the CSL frequency, the comb spectrum is optically pre-filtered (\(\pm\)0.11 nm) with a fiber Bragg grating in a beat detection unit (BDU), where a beat frequency \(f_{\mathrm{x}}\) between one mode frequency of the comb and the frequency of the CSL is generated on an InGaAs photodiode (SNR = 40 dB at 1 MHz resolution bandwidth). This beat frequency of about 60 MHz is constantly locked to the frequency of a direct digital synthesizer (DDS1) using a phase discriminator and PI controller. For the stabilization of the second degree of freedom of the comb, the offset frequency \(f_{\mathrm{ceo}}\) is extracted by using a highly nonlinear fiber (HNLF), an f-2f interferometer [22, 23] and an InGaAs photodiode (SNR = 37 dB at 1 MHz resolution bandwidth). Using again a phase discriminator and PI controller (Syncro CEO), \(f_{\mathrm{ceo}}\) is locked to the 40 MHz DDS2-frequency, referenced to the output frequency of a hydrogen maser. In this "synthesizer mode" of the comb, all optical modes and thus its repetition rate are in a fixed frequency relation to the CSL frequency and ideally have the same relative stability. The femtosecond pulses of frequency combs generate a comb of microwave frequencies at multiples of the repetition rate on an optical detector. Thereby, the harmonics theoretically extend up to the Fourier limit of about 4 THz (given by a pulse length of \(\sim\)100 fs). In our setup, before detection the pulse train first passes through an interleaver [24] consisting of a fiber network that duplicates the repetition rate of 240 MHz four times (green blocks). This technique amplifies the modes that are at a multiple of the frequency 1.92 GHz and attenuates the shot noise contribution of the others. After detection with a fast InGaAs photodiode (DSC 40S, bandwidth 12 GHz), the 5th harmonic (9.6 GHz) is filtered out with a bandpass filter (9.4-9.8 GHz) and amplified from about -9 dBm to 13 dBm by phase noise specified amplifiers (HMC-C050 and CMD245, not shown in Fig. 1). From the 9.6 GHz output signal a subsequent homemade synthesis generates the interrogation signal for the fountain clocks [25]. To prevent the comb frequencies, and thus the generated 9.6 GHz signal, from following the drift of the optical cavity, the repetition rate \(f_{\mathrm{rep}}\) with its harmonics is locked to a hydrogen maser with a time constant of \(\sim\)50 s; the frequency bookkeeping of this scheme is sketched below.
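A quick arithmetic check of the frequency bookkeeping above (comb modes obey \(f_{n}=n\,f_{\mathrm{rep}}+f_{\mathrm{ceo}}\); all values below are taken from the text, and the asserts are merely a consistency check, not code from the paper):

```python
# Comb-mode relation: f_n = n * f_rep + f_ceo (f_ceo locked to 40 MHz here).
f_rep = 240_000_000                          # repetition rate [Hz]
spacing = 1_920_000_000                      # mode spacing after the interleaver
assert spacing == 8 * f_rep
assert 5 * spacing == 9_600_000_000 == 40 * f_rep       # the 9.6 GHz signal
# Maser-lock chain (described in the next paragraph):
f_41 = 41 * f_rep                            # 41st harmonic: 9.84 GHz
assert f_41 - 9_600_000_000 - 235_000_000 == 5_000_000  # mixed down to 5 MHz
assert 8 * f_41 == 78_720_000_000            # 8th harmonic: "virtually @ 78.72 GHz"
```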
To realize this time constant, the difference frequency of a harmonic of the repetition rate and a reference frequency from the hydrogen maser is measured by a counter and integrated by a PC (light-blue blocks). The frequency comparison is performed in the microwave range to increase the counter resolution. For this purpose, the 41st harmonic (9.84 GHz) is detected with another fast InGaAs photodiode, filtered and mixed down to 5 MHz first with a reference frequency of 9.6 GHz and then with a synthesized frequency of 235 MHz. The 8th harmonic of the 5 MHz signal (40 MHz, virtually @ 78.72 GHz) is then counted and constantly controlled. For this purpose the drift rate of DDS1 is correspondingly readjusted. The necessary low-noise reference frequencies are obtained from the 100 MHz output signal of the hydrogen maser. First this signal is filtered by a low-noise 5 MHz BVA quartz oscillator, from which the 9.6 GHz reference signal is generated by multiplication. Next the 235 MHz signal is synthesized using a divider chain, which is similarly employed in our frequency synthesis for the fountain clock interrogation signal [25]. All counters, those for locking \(f_{\mathrm{rep}}\) to the hydrogen maser and those for monitoring \(f_{\mathrm{x}}\) and \(f_{\mathrm{ceo}}\), are synchronous multichannel counters (K+K FXE [26]), which count dead-time free with a resolution of 12 ps and which are operated in lambda-counting mode with 1 ms gate time. Similar to the former OSMW system, the new OGMW system also offers the possibility to measure the optical clock transition frequencies of PTB's quadrupole (\(E2\)) and octupole (\(E3\)) \({}^{171}\)Yb\({}^{+}\) frequency standards [27, 28] with respect to each other or the hydrogen maser frequency simultaneously. ## 3 Results First, in Fig. 2 the effect of locking \(f_{\mathrm{rep}}\), and thus the generated 9.6 GHz signal, to the hydrogen maser is visualized by depicting the Allan standard deviations of the microwave signal and the frequencies of the hydrogen maser and the free-running CSL. The data is obtained with another Figure 1: Setup for the optical generation of the 9.6 GHz microwave signal. The commercial frequency comb system comprises the femtosecond laser (FSL) and its optical accessories (box outlined with dashed line) and two phase discriminators/PI controllers (Syncro RRE and Syncro CEO). CSL: cavity-stabilized laser, BDU: beat detection unit, HNLF: highly nonlinear fibre, DDS: direct digital synthesizer, \(f_{\mathrm{rep}}\): repetition rate, \(f_{\mathrm{ceo}}\): carrier envelope offset frequency, \(f_{\mathrm{x}}\): beat frequency. frequency comb by measurements of the ratios of the microwave, the hydrogen maser and the CSL frequencies to the output frequency of an \({}^{171}\)Yb\({}^{+}\) single-ion frequency standard [28], which makes use of a cryogenic silicon cavity [29] and does not contribute to the measured instability for all Fourier frequencies. The control system is designed to maintain short-term stability on the one hand and to achieve long-term stability given by the maser frequency without control overshoot on the other. To characterize the noise performance of the new OGMW setup, two different measurements were performed with respect to the previous OSMW setup. First, a single-sideband phase noise power spectral density measurement was performed to characterize the short-term noise level. 
Since a direct phase noise measurement close to the carrier is not possible at the present phase noise level, we measured the summed phase noise of the two 9.6 GHz microwave signals from the OGMW and OSMW setups by subtracting both signals from each other with a mixer, setting a phase difference of the signals of \(\pi/2\) for phase-sensitive detection using a phase shifter. The resulting signal is amplified with a homemade low-noise amplifier (42 dB, 0.7 nV/\(\sqrt{\mathrm{Hz}}\)) and its phase noise is measured by a phase analyzer (Rohde & Schwarz FSWP26) in the high-resolution baseband (red line in Fig. 3). Since the stability of both microwave signals is provided by the same optical reference signal (CSL), this measurement result demonstrates the quality of the frequency stability transfer from the CSL to the microwave signals, using the different techniques of the OGMW and OSMW setups. In our systems, the phase noise of the RF amplifiers and the detection processes of the photodiodes are limiting factors. To measure the overall phase noise of the microwave signals from the OGMW and OSMW setups, two independent optical reference signals are needed. Therefore, via its frequency comb the OSMW signal was locked to another remote laser, stabilized to a cryogenic silicon cavity [29]. Figure 2: Allan standard deviation \(\sigma_{y}(\tau)\) of the frequencies of the generated 9.6 GHz signal, the hydrogen maser (HM) and the free-running cavity-stabilized laser (CSL). The frequency stability of the latter laser is about one order of magnitude better than the stability of the CSL. The summed phase noise of both independent microwave signals is depicted as blue line in Fig. 3. The single-sideband phase noise power spectral density \(L(f)\) reaches -103 dBc/Hz at 1 Hz from the carrier. The additional fluctuations around 10 Hz are caused by an acoustic sensitivity of the setup for transferring the remote laser light stabilized to the cryogenic silicon cavity between different buildings and do therefore not impact the OGMW signal. For comparison and to demonstrate the benefits of the optical microwave signal generation, in Fig. 3 the summed phase noise of a quartz based 9.6 GHz synthesis and the OGMW signal is shown as well. By taking into account the measured single-sideband phase noise power spectral density depicted by the blue line in Fig. 3 as an upper limit for the phase noise level of the 9.6 GHz OGMW signal, we extract the spectral density of the relative frequency fluctuations as \(S_{y}^{f}(f)=2.2\times 10^{-30}/f+8.6\times 10^{-31}/\mathrm{Hz}+1.7\times 10^{-3 1}f/\mathrm{Hz}^{2}+8.6\times 10^{-34}f^{2}/\mathrm{Hz}^{3}\). From this the upper limit for the frequency instability contribution caused by the Dick effect [30] is calculated for the two caesium fountain clocks CSF1 and CSF2 [19]. For the typical operation parameters of the fountains this calculation yields Allan standard deviation contributions \(\sigma_{\mathrm{y,Dick}}(\tau)=1.2\times 10^{-15}(\tau/1\,\mathrm{s})^{-1/2}\) for CSF1 and \(\sigma_{\mathrm{y,Dick}}(\tau)=1.4\times 10^{-15}(\tau/1\,\mathrm{s})^{-1/2}\) for CSF2. These noise contributions are negligible for both fountains for normal quantum projection noise limited operation in the Figure 3: Single-sideband phase noise power spectral density \(L(f)\) from comparing 9.6 GHz microwave signals from the new OGMW and the former OSMW setup. 
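For orientation, the Allan standard deviations plotted in Fig. 2 (and later in Fig. 4) can be estimated from a record of fractional-frequency values with a few lines of code. The sketch below is a minimal textbook non-overlapping estimator for illustration only; the measurements described here use dead-time-free lambda counting, for which the appropriate (modified) variants differ in detail.

```python
import numpy as np

def adev(y, m):
    """Non-overlapping Allan deviation of fractional-frequency data y at
    averaging time tau = m * tau0, where tau0 is the sample spacing."""
    ybar = y[: (len(y) // m) * m].reshape(-1, m).mean(axis=1)
    return np.sqrt(0.5 * np.mean(np.diff(ybar)**2))

# white frequency noise reproduces the tau^(-1/2) slope seen in Fig. 2
y = 1e-13 * np.random.default_rng(0).standard_normal(100_000)
print([adev(y, m) for m in (1, 10, 100)])    # ~1e-13, ~3e-14, ~1e-14
```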
Red: summed phase noise of OGMW and OSMW signals using the same optical reference signal (CSL); blue: summed phase noise of OGMW and OSMW signals using the CSL and a remote cavity stabilized laser (see text), respectively; black: summed phase noise of the signal from a quartz based 9.6 GHz synthesis and the OGMW signal (for comparison). \(\sigma_{\rm y}(\tau)\geq 10^{-14}(\tau/1\,{\rm s})^{-1/2}\) Allan standard deviation range [18, 19]. In another characterizing measurement, frequency data from the previously utilized OSMW setup, incorporating a 9.6 GHz dielectric resonator oscillator (DRO), and the new OGMW setup were acquired simultaneously for a measurement period of 10 months. The two associated frequency combs were locked to the same CSL and hydrogen maser. Part of the OSMW setup is an in-loop signal consisting of the frequency difference between the DRO frequency and the 9.595 GHz comb mode frequency (\(f_{\rm rep}=252.5\) MHz) [18]. For the OSMW and OGMW comparison, this in-loop signal and also the frequency difference between the 9.6 GHz signal of the OGMW and the same 9.595 GHz comb mode frequency are registered. From these two intermediate frequency signals the eighth harmonic is generated, again to measure their frequency difference (as 10 s averages) at high resolution. The resulting data track is shown in the inset of Fig. 4. The gaps in the recording (\(\sim\)19%) are mainly due to failures of the OSMW signal. The data show agreement of the two microwave signals as the measured frequency difference is \(0.5\times 10^{-18}\) with a standard error of the mean of \(1.6\times 10^{-18}\). For the same data the Allan standard deviation is plotted for long-term frequency stability analysis (blue graph in Fig. 4). It has been checked that the Allan deviation at 10 s measurement time corresponds to the result of the single-sideband phase noise power spectral density measurement (Fig. 3). For comparison, in Fig. 4 the Allan standard deviations of the hydrogen maser and the caesium fountain CSF2 at high atomic density operation [19] are shown in green and red, respectively. Figure 4: Allan standard deviation \(\sigma_{\rm y}(\tau)\) from measured frequency difference data from the new OGMW and the former OSMW setup (blue data). Indicated in green and red are also the Allan standard deviations of the frequencies of the hydrogen maser and the caesium fountain CSF2 at high atomic density operation. Inset: frequency difference data from the OGMW and the OSMW setup, averaged for 10 s during a 10 month period, and used for the calculation of the blue data. Conclusion As an alternative for cryogenic oscillators, systems of cavity stabilized lasers and frequency combs (for the optical stabilization of microwave oscillators or the direct generation of microwave signals) have proven to be reliable tools for providing ultrastable microwave signals for the benefit of the frequency stability of caesium fountain clocks. After years of operating an optically stabilized oscillator to generate the microwave signal required by PTB's cesium fountain clocks, this signal is now obtained directly from the femtosecond pulses of a frequency comb. The new setup for the optical generation of an ultrastable 9.6 GHz microwave signal provides even more robust continuous operation with a phase noise level that is fully compatible with the requirements of fountain clocks. 
**Acknowledgments.** We would like to thank Melina Filzinger and Nils Huntemann for providing the reference signal from the \({}^{171}\)Yb\({}^{+}\) single-ion frequency standard.
2302.12883
3D Surface Reconstruction in the Wild by Deforming Shape Priors from Synthetic Data
Reconstructing the underlying 3D surface of an object from a single image is a challenging problem that has received extensive attention from the computer vision community. Many learning-based approaches tackle this problem by learning a 3D shape prior from either ground truth 3D data or multi-view observations. To achieve state-of-the-art results, these methods assume that the objects are specified with respect to a fixed canonical coordinate frame, where instances of the same category are perfectly aligned. In this work, we present a new method for joint category-specific 3D reconstruction and object pose estimation from a single image. We show that one can leverage shape priors learned on purely synthetic 3D data together with a point cloud pose canonicalization method to achieve high-quality 3D reconstruction in the wild. Given a single depth image at test time, we first transform this partial point cloud into a learned canonical frame. Then, we use a neural deformation field to reconstruct the 3D surface of the object. Finally, we jointly optimize object pose and 3D shape to fit the partial depth observation. Our approach achieves state-of-the-art reconstruction performance across several real-world datasets, even when trained only on synthetic data. We further show that our method generalizes to different input modalities, from dense depth images to sparse and noisy LIDAR scans.
Nicolai Häni, Jun-Jee Chao, Volkan Isler
2023-02-24T20:37:27Z
http://arxiv.org/abs/2302.12883v1
# 3D Surface Reconstruction in the Wild by Deforming Shape Priors from Synthetic Data ###### Abstract Reconstructing the underlying 3D surface of an object from a single image is a challenging problem that has received extensive attention from the computer vision community. Many learning-based approaches tackle this problem by learning a 3D shape prior from either ground truth 3D data or multi-view observations. To achieve state-of-the-art results, these methods assume that the objects are specified with respect to a fixed canonical coordinate frame, where instances of the same category are perfectly aligned. In this work, we present a new method for joint category-specific 3D reconstruction and object pose estimation from a single image. We show that one can leverage shape priors learned on purely synthetic 3D data together with a point cloud pose canonicalization method to achieve high-quality 3D reconstruction in the wild. Given a single depth image at test time, we first transform this partial point cloud into a learned canonical frame. Then, we use a neural deformation field to reconstruct the 3D surface of the object. Finally, we jointly optimize object pose and 3D shape to fit the partial depth observation. Our approach achieves state-of-the-art reconstruction performance across several real-world datasets, even when trained only on synthetic data. We further show that our method generalizes to different input modalities, from dense depth images to sparse and noisy LIDAR scans. ## I Introduction Surface reconstruction of a 3D object from a partial observation, such as a depth image or a LIDAR scan, is a longstanding problem in computer vision [61, 65, 38, 46]. Discovering the full shape of an object from a partial input has many applications, including in visual servoing [28], robotic manipulation [3, 46, 39], autonomous driving [4, 67] and content creation [20]. Every computational approach aimed at 3D reconstruction must choose a representation for the 3D model. An increasingly popular choice is to use neural fields [42, 35] for this task. These neural fields, trained on 3D ground truth data, represent the de facto gold standard regarding reconstruction quality. At inference time, the learned 3D shape prior is adapted to the partial observation. However, these methods suffer from two major limitations: i) they require 3D ground truth data in the form of occupancy values or signed distance functions, and ii) these models expect shapes to be aligned and normalized in a fixed canonical coordinate frame - a frame of reference that is shared between all instances in the shape category. These two limitations have, for now, limited these approaches to synthetic data, such as Shapenet [5]. To remove the reliance on 3D data, the community has shifted to dense [36] or sparse [68] multi-view supervision with known camera poses, which can be estimated using Structure from Motion (SfM). Similarly, single-view 3D reconstruction methods have also made considerable progress by using neural fields as their shape representation [31, 13]. While these single-view methods can be trained from unconstrained image collections, they have not achieved the high quality of multi-view or 3D ground truth supervised models.
In this work, we aim to answer the question: _How can we achieve the reconstruction quality of 3D supervised methods from single view observations in the wild?_ We propose to use a single depth image from a calibrated camera together with a pretrained canonicalization network to register the partial point clouds to the canonical coordinate space. We reduce the effect of errors in the canonicalization process by jointly fine-tuning the latent shape descriptor and object pose using only the partial observation as input (Figure 1). We achieve 3D reconstruction results on synthetic data close to or better than the state-of-the-art. Furthermore, we show that using depth images as input allows for generalization across various datasets, from dense depth in synthetic and natural images to sparse depth inputs from LIDAR scans. Fig. 1: We pretrain a 3D shape prior on synthetic 3D data. Using the Equi-pose [29] canonicalization algorithm, we register the depth image to the canonical coordinate frame and use a finetuning scheme to jointly estimate the surface reconstruction and object pose from a single observation, leading to a model that generates diverse 3D reconstructions and object poses. ## II Related Work 3D object reconstruction based on a conditional input, such as images or depth, is an active research area [7, 35, 36, 47, 52, 22]. The de facto gold standard in terms of reconstruction quality uses 3D ground truth data [42, 35]. These approaches are largely limited to synthetic data, such as Shapenet [5], as they require shapes that are aligned in a common canonical coordinate frame. Reconstruction of real-world shapes has been performed by transferring the learned representation across domains [14, 1] or with the use of special depth sensors [40, 9]. However, collecting 3D ground truth data in the real world can be difficult. With the development of neural rendering and inverse graphics methods, the requirement for 3D ground truth has been relaxed in favor of dense multi-view supervision [59, 36, 18, 68] or single view methods that require ground truth camera poses for training [31, 13]. However, not all applications allow for the collection of multi-view images, and estimating camera poses from images remains challenging. With the advent of generative models for 3D shapes [17], using 3D supervision has become an interesting prospect once more. Our work shows how we can leverage shape canonicalization for shape reconstruction in the wild. ### _Pose Registration and 3D Shape Canonicalization_ Reliance on camera poses is an issue for many real-world datasets but a necessary step for neural rendering or deformation-based models. Point cloud registration can estimate the object pose directly and has achieved good performance when matching point clouds of the same object; however, these methods are unsuitable for single view pose estimation without a ground truth 3D model [23, 54, 43]. Category-level object pose estimation methods achieve impressive results, both with supervised training [44, 41, 53, 6] and using only self-supervision [49, 50, 29, 45, 26]. For example, Canonical Capsules [50] learn to represent object parts with pose-invariant capsules by training a Siamese network in a self-supervised manner. Although the learned capsules can reconstruct the input point cloud in the learned canonical frame, Canonical Capsules only works on complete point clouds. In contrast, Equi-pose [29] can canonicalize both complete and partial point clouds.
By leveraging an SE(3)-equivariant network, Equi-pose simultaneously learns to estimate object pose and canonical point cloud completion. Our work shows that one can leverage Equi-pose with test-time pose refinement to get accurate shape reconstructions in canonical space. ### _Pointcloud Completion_ Instead of relying on ground truth camera poses, we use depth images to register the partial 3D point cloud into a canonical frame. As we use depth images as input, our method closely relates to point cloud completion algorithms. Early work on point cloud completion used 3D convolutions to learn shape completion [10, 21]. However, 3D convolutions are costly and operate on a canonical voxel grid. More recently, PointNet encoders were used for shape completion [32, 66]. Transformers have also been shown to work well on this task [65, 61]. However, these methods rely on points already in a canonical coordinate frame. Further, these methods do not reconstruct the underlying surface of the object but output a limited number of points. In contrast, our method does not rely on canonical input points and reconstructs the underlying object surface with high fidelity. ### _Surface Reconstruction from a Single View_ There have been extensive studies on 3D reconstruction from single view images using various 3D representations, such as voxels [60, 52, 56, 62, 55, 56], points [16, 63], primitives [11, 8] or meshes [24, 18]. Most of the methods above use explicit representations, which suffer from limited resolution or fixed topology. Neural rendering and neural fields provide an alternative representation to overcome these limitations. Recent methods showed how to learn Signed Distance Functions (SDFs) [59, 31, 13] or volumetric representations such as occupancy [64], which have shown great promise in learning category-specific 3D reconstructions from unstructured image collections. However, these methods usually require additional information, such as ground truth camera poses or aligned 3D shapes, which limits their applicability. In our work, we propose a method that does not require ground truth camera poses or aligned 3D data and leverages widely available synthetic data to learn a category-specific 3D prior model. ### _Learning Shape Reconstruction through Deformation_ Learning a generalizable model that maps a low-dimensional latent code to 3D surfaces can suffer from low-quality reconstructions. Category-specific deformable shape priors are useful to improve the quality of the reconstruction [2, 15, 24, 25, 33, 37]. These methods generally learn the deformation to an initial base shape. More recent work has used neural rendering together with SDFs [31, 13] to learn 3D shape priors from image collections and their associated camera poses. Other methods [12, 69] jointly learn the deformation and the template shape in a canonical frame. In this work, we go one step further and show how we can leverage template shape and deformation models for incomplete observations registered to the template coordinate frame. ## III Method Given a single segmented RGB-D image of an object, our goal is to jointly estimate the object pose and reconstruct the underlying 3D surface. To do so, we first learn a category-specific 3D template in the canonical coordinate frame together with an instance-specific deformation field by leveraging synthetic 3D data.
During test time, rather than directly reconstructing the shape in the camera coordinate frame, we use recent advances in point cloud canonicalization to transform a partial depth scan to the canonical space for surface reconstruction. Next, we describe first how we learn the 3D shape prior purely on synthetic data. Then we discuss how we reconstruct the surface of an observed depth image by deforming the learned canonical template shape. ### _3D Shape Prior_ Given a set of 3D objects \(\mathcal{O}_{i}\), our goal is to learn a category-specific 3D shape prior together with a latent space describing the variation in shapes. Instead of directly mapping a low dimensional latent code \(z_{i}\in\mathbb{R}^{n}\) to the 3D shape, we follow recent advances in learning 3D shape priors through deformation of a canonical template [69, 12]. We jointly learn our 3D shape prior, represented as a neural network, and latent codes \(z_{i}\) through the auto-decoder framework presented in [42]. To generate high-quality 3D reconstructions, we use signed distance fields (SDFs). SDF is a function that assigns each point \(x_{j}\in\mathbb{R}^{3}\) a scalar value \(s_{j}\in\mathbb{R}\) \[SDF(x_{j})=s_{j}, \tag{1}\] representing the distance to the closest object surface. The sign of \(s_{j}\) indicates whether a point is inside (negative) or outside (positive) of the object, and the surface can be extracted as a mesh through marching cubes [34]. We use a DIF-Net as our 3D shape prior network [12]. The 3D representation network consists of a neural template field and a deformation field. We use the template field to capture common structures among a category of shapes by keeping the weights shared across all instances in the training set. The template field takes a 3D coordinate \(x_{j}\) as input and predicts the signed distance to the closest surface \(\hat{s}\): \[T:x_{j}\in\mathbb{R}^{3}\rightarrow\hat{s}\in\mathbb{R} \tag{2}\] To deform the template to a specific object instance, we use a deformation field together with a structural correction field \[D:x_{j}\in\mathbb{R}^{3}\rightarrow(v,\Delta s)\in\mathbb{R}^{4}. \tag{3}\] The vector \(v\) deforms a point in the instance space to the template space, and the correction factor \(\Delta s\) modifies the SDF value of point \(x_{j}\) if it still differs from the ground truth value. The correction factor has been shown to be beneficial for categories with significant shape variations. For example, for chairs, there exist instances with and without armrests. We use a Hyper-Network [47, 37] to condition the deformation field on a latent code. With a learned template field \(T\) and deformation field \(D\), the SDF value of a point \(x_{j}\) can be obtained with \[s_{j}=T(x_{j}+v)+\Delta s=T(x_{j}+D_{v}(x_{j}))+D_{\Delta}(x_{j}). \tag{4}\] ### _Training DIF_ During training we use the auto-decoder framework [47, 42] to jointly learn latent codes \(z_{i}\) and the weights of the DIF network that predicts SDF values \(\hat{s}=\Psi(x)\). 
Given a collection of shapes with ground truth SDF values on the object surface and in free space, we first apply an SDF regression loss from [48] as \[\mathcal{L}_{sdf}=\sum_{i}\Big(\sum_{x\in\Omega}|\Psi_{i}(x)-s|+\sum_{x\in S_{i}}(1-\langle\nabla\Psi_{i}(x),n\rangle)+\sum_{x\in\Omega}\big||\nabla\Psi_{i}(x)|-1\big|+\sum_{x\in\Omega\setminus S_{i}}\rho(\Psi_{i}(x))\Big), \tag{5}\] where \(s\) and \(n\) denote the ground truth SDF value and normal, \(\nabla\) is the spatial gradient of the neural field, \(\Omega\) is the 3D space in which values are sampled, and \(S_{i}\) is the shape surface. We select an equal number of surface and free space points uniformly at random to compute this loss. The first term in Eq. (5) regresses the SDF value; the second term learns consistent normals on the shape surface; the third term is the eikonal term that enforces unit norm of the spatial gradients; and the last term penalizes SDF values close to \(0\) at points far away from the object surface, with \(\rho(s)=\exp(-\delta\cdot|s|)\), \(\delta\gg 1\). For more details on this loss, see [47]. We further apply multiple regularization terms to help learn smooth deformations and a consistent latent space. The first regularization term applies \(L_{2}\) regularization on the embeddings as \(\mathcal{L}_{z}=\sum_{i}||z_{i}||_{2}\). Prior work by [12] showed that learning a template shape that captures common attributes across a category is improved by enforcing normal consistency across all shapes, regularizing the normals of the template network with \[\mathcal{L}_{normal}=\sum_{i}\sum_{x\in S_{i}}(1-\langle\nabla T(x+D_{v}(x)),n\rangle). \tag{6}\] We further want deformations to be smooth and the optional corrections to the SDF field to be small, which is enforced with the following two loss terms \(\mathcal{L}_{smooth}=\sum_{i}\sum_{x\in\Omega}||\nabla D_{v}(x)||_{2}\) and \(\mathcal{L}_{c}=\sum_{i}\sum_{x\in\Omega}|D_{\Delta s}(x)|\). The overall loss for training the 3D shape prior is \[\mathcal{L}=\mathcal{L}_{sdf}+\lambda_{1}\mathcal{L}_{normal}+\lambda_{2}\mathcal{L}_{z}+\lambda_{3}\mathcal{L}_{smooth}+\lambda_{4}\mathcal{L}_{c}, \tag{7}\] with the \(\lambda\) terms weighing the relative importance of each loss term. ### _Point Cloud Lifting and Canonicalization during Testing_ During inference, we use a single RGB-D image and known camera intrinsic parameters to lift the depth image to a partial 3D point cloud. In order to predict the deformation field, we first transform the partial point cloud to the canonical coordinate frame using Equi-pose [29] as our pose estimation module. Equi-pose is an SE(3)-equivariant network that learns category-specific canonical shape reconstruction and pose estimation in a self-supervised manner. By enforcing consistency between the invariant shape reconstruction and the input point cloud transformed by the estimated pose, Equi-pose can estimate the pose of the input point cloud with respect to the learned canonical frame. Therefore, we first input a complete template shape in our canonical frame to Equi-pose, such that the transformation between our canonical frame and Equi-pose's canonical frame can be obtained. This way, we can transform any observed partial point cloud to our canonical frame using Equi-pose as a pose estimator. However, the estimated pose from Equi-pose can only serve as a noisy initialization. We show in Section III-D how our method further finetunes the pose to achieve accurate shape reconstruction; a minimal sketch of the depth-lifting step is given below.
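As an illustration of the lifting step, the following sketch back-projects a depth image to a camera-frame point cloud with pinhole intrinsics; the intrinsic values `fx, fy, cx, cy` and the toy depth patch are placeholders (the paper only states that the intrinsics are known):

```python
import numpy as np

# Back-projection of a depth image to a camera-frame point cloud with pinhole
# intrinsics K = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]].
def lift_depth(depth, fx, fy, cx, cy):
    v, u = np.nonzero(depth > 0)              # rows/cols of valid depth pixels
    z = depth[v, u]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)       # (N, 3) partial point cloud

depth = np.zeros((480, 640)); depth[200:280, 300:340] = 1.5   # toy depth patch
pts = lift_depth(depth, fx=600.0, fy=600.0, cx=320.0, cy=240.0)
```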
### _Jointly Optimizing Shape and Pose_ Once we train the 3D shape prior network and the partial input point cloud is roughly aligned in the canonical space, we reconstruct the object surface by optimizing the latent code and the object pose while keeping the SDF network weights fixed. As canonical 3D reconstruction methods are sensitive to minor deviations between estimated and canonical coordinate frames, we jointly optimize the latent code \(z_{i}\) and the initial transformation by minimizing the SDF values at the observed depth points. At the same time, we sample random points in free space for the Eikonal term to ensure that the neural field is an SDF. We represent the translation as a three-dimensional vector initialized to zero and use the continuous 6D rotation parametrization from [70] for rotations. We choose a random latent code from the learned latent space as our initialization \(z_{i}\) and optimize \[\min_{z_{i},R,t}\mathcal{L}_{sdf}+\lambda_{2}\mathcal{L}_{z} \tag{8}\] (a toy schematic of this optimization is sketched below). ## IV Experiments **Datasets** In accordance with other works in the literature, we include three categories in our experiments: car, chair, and airplane. These categories are present across multiple datasets, facilitating comparison between approaches and enabling evaluation of a method's generalization capabilities. We use synthetic data from the ShapeNet dataset [5] to train our deformation and template networks using 3D ground truth. Then our method trained on Shapenet is directly evaluated on the following datasets: ShapeNet [5], Pix3D chairs [30], Pascal3D+ [58] and DDAD [19], without retraining. Since Pascal3D+ and Pix3D do not contain depth scans, we generate the partial point clouds by removing invisible points of the CAD models using the ground truth camera poses. As DDAD does not provide reconstruction ground truth, we show the performance of our method on real-world noisy scans qualitatively only. See the appendix for additional information on datasets, baselines, and implementation details. **Implementation Details** In line with prior work, we train the 3D shape network on three categories in the Shapenet [5] dataset, namely _car, chair_ and _plane_. The networks are trained using the Adam optimizer [27]. We use a batch size of \(128\) shapes per iteration, with \(4000\) points on the surface and \(4000\) randomly sampled points in free space per object. Training takes \(10\) hours on four NVIDIA V100 GPUs. **Baselines** We compare against the state-of-the-art in single view, category-specific 3D object reconstruction: i) SDF-SRN [31], a neural field method that represents the object in the camera coordinate frame and uses a neural renderer with silhouette supervision. ii) TARS-3D [13], a method that uses ground truth camera poses to render a deformed template shape in canonical space to the image coordinate frame. Note that TARS-3D does not require ground truth camera poses during inference, but it also does not estimate the object pose: both TARS-3D and SDF-SRN output the estimated 3D surface reconstruction in the canonical coordinate frame. As our method is closely related to point cloud completion, we further compare our method against a transformer-based point cloud completion method, PoinTr [65]. In contrast to the baselines, our model is only trained on Shapenet and does not require camera pose information during either training or testing. Only a partial depth scan is needed during inference.
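As referenced in Sec. III-D, here is a toy schematic of the joint fit of Eq. (8). The SDF is a closed-form stand-in for the frozen network (a sphere whose radius is controlled by a one-dimensional "latent code"), rotation is omitted, and the optimizer is a crude coordinate descent — all assumptions for illustration only, not the paper's implementation:

```python
import numpy as np

# Toy schematic of the test-time fit of Eq. (8): jointly adjust a latent code z
# and a translation t so the observed points lie on the SDF zero level set.
def sdf(x, z):
    return np.linalg.norm(x, axis=-1) - (1.0 + z)   # sphere of radius 1 + z

def loss(p, pts):                 # p = [z, tx, ty, tz]
    return np.mean(np.abs(sdf(pts + p[1:], p[0]))) + 1e-4 * p[0]**2

rng = np.random.default_rng(0)
u = rng.normal(size=(200, 3)); u /= np.linalg.norm(u, axis=1, keepdims=True)
pts = 1.2 * u + np.array([0.1, 0.0, 0.0])     # "observation": shifted sphere
p, step = np.zeros(4), 0.05
for _ in range(100):                           # crude coordinate descent
    for i in range(4):
        for s in (step, -step):
            q = p.copy(); q[i] += s
            if loss(q, pts) < loss(p, pts):
                p = q
print(p)   # expect z -> ~0.2 and t -> ~(-0.1, 0, 0)
```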
**Evaluation Metrics** In this work, we follow [51] and report the F1-score at threshold \(1\%\) as our primary evaluation metric. Tatarchenko et al. [51] showed that common metrics, such as chamfer distance and IoU, allow for large variations from the ground truth model. In contrast, the F1 score with a tight threshold requires the prediction to closely follow the ground truth to achieve a high score. We additionally report the bidirectional chamfer distance (CD), multiplied by a factor of \(10^{4}\) for readability. Fig. 2: We train a DIF-Net as our 3D representation on purely synthetic data in an auto-decoder fashion. During inference, we lift the depth image into 3D using the known camera intrinsics and estimate an initial transformation between the camera frame and the canonical frame. We jointly optimize the object pose and 3D shape to fit the partial observation. Trainable parameters/network parts are marked in red. ### _3D Reconstruction on synthetic Shapenet data_ Table I shows quantitative results of testing all approaches on the holdout test set of Shapenet. Our method outperforms the baselines in the car and plane categories with ground truth camera poses and is competitive in the chair category. We investigate cases where no ground truth camera poses are available and initialize our method and PoinTr with the Equi-pose estimates. Even without access to ground truth camera poses, our method performs comparably to or better than the baseline methods. We can see that PoinTr suffers greatly when the coordinates are not in the canonical coordinate system, showing that our approach of combining canonicalization with shape reconstruction is necessary for 3D shape reconstruction on real-world depth data. Our method's 3D reconstructions are more faithful to the underlying ground truth mesh, shown by the fact that we outperform all other methods on the F1 metric. PoinTr outputs only a limited number of points and does not reconstruct the underlying surface, nor does it give us correspondences between shapes in a category. ### _3D reconstruction on Shapenet with Occlusion_ This section investigates how our method performs when the observed object is occluded. For this experiment, we generate occluded areas on the RGB-D images and remove the occluded points from the Shapenet test data, as presented in Figure 3. To study the influence of a wide variety of occlusions, we use rectangular overlap regions and generate overlap ratios from 5% to 85%. For all methods, we apply the models trained on Shapenet to the occlusion task. We remove the occluded region from the RGB-D images and canonicalize the occluded, partial point clouds, before finetuning the surface reconstruction to fit the available points. As shown in Table II, our method outperforms all the baselines without access to ground truth camera poses, despite not seeing any occluded data during training. Compared to the other baselines, PoinTr has a higher tolerance to occlusion. However, point cloud completion methods cannot predict the object's surface. Moreover, these methods usually predict the complete point cloud by adding predicted points to the input. Fig. 3: Qualitative result on the occluded Shapenet dataset. Our model outputs more accurate 3D shapes, as we do not condition on the input image, but can finetune the latent shape based on the partial depth observation.
Therefore, the predicted point clouds are not guaranteed to be uniformly distributed and can have a higher density around the input points, resulting in a lower chamfer distance, as shown in Figure 3. In contrast, our method does not have these drawbacks and can reconstruct the occluded surface with high fidelity in terms of F-score by leveraging the learned category-level prior. ### _3D reconstruction on Pascal3D+ and Pix3D_ For this experiment, we test the generalization capabilities of our approach. We directly apply our model trained on Shapenet to reconstruct Pascal3D+ objects. TARS-3D and PoinTr also apply network weights trained on Shapenet to this task, while SDF-SRN is trained directly on the Pascal3D+ dataset. As shown in Table III, our method again outperforms the other surface reconstruction methods without access to ground truth camera poses. Figure 4 shows that our method generates reasonable outputs. We further test our method on another chair dataset, namely the chair category of the Pix3D dataset. As shown in Table III, our method outperforms the baselines. PoinTr again achieves a lower chamfer distance, which does not fully represent the reconstruction quality. As shown in Figure 4, PoinTr generates point clouds that are not uniformly distributed, while our method predicts smooth surfaces. Fig. 4: Qualitative results on the Pascal3D+ (top) and Pix3D (bottom) datasets. ### _Ablation Study and Failure Cases_ In this section, we conduct an ablation study to assess the importance of optimizing the pose during inference. The main results are shown in Table IV. On the left, we show the ground truth mesh overlaid with the partial input points with pose optimization (blue) and without (green). We can see a noticeable reduction in F1 scores and a significant reduction in the reconstruction quality. The chamfer score difference between reconstruction methods across the dataset is slight, though, confirming the finding of [51] that chamfer distance is not an ideal metric for 3D reconstruction. This ablation study shows that canonical reconstruction methods are sensitive to deviations in the estimated canonical coordinate frame. This result is also confirmed by the poor performance of PoinTr on input points where the estimated canonical coordinate frame is off (see Tables I and III). ### _3D reconstruction on real-world noisy scans_ Finally, we apply our method trained on Shapenet directly to real-world noisy LIDAR scans. To demonstrate our tolerance to noise in the point clouds, we test our method on the autonomous driving benchmark DDAD [19]. DDAD contains urban scenes scanned using LiDARs mounted on self-driving cars. To showcase our method, we extract frames that include other driving cars and crop the LiDAR scans of other cars with masked images. Finally, these noisy LiDAR scans are fed to the pose estimation module and our deformation field to reconstruct the surfaces. Since DDAD does not contain ground truth CAD models, we present the qualitative results in Figure 5. Note that our method does not have access to the image but only the noisy LiDAR point clouds. Despite large portions of missing parts and the noise in the LiDAR scans, our method can still reconstruct reasonable car surfaces without access to ground truth camera poses. ## V Conclusion and future work We introduced a new method for complete 3D surface reconstruction of an object from real-world depth images.
Our method relies on a representation obtained solely by training on synthetic data, which allows for extracting high-quality, category-specific geometry. We showed that even small errors in pose estimation lead to significant errors in 3D reconstruction. Therefore, a simple method that uses an independently trained pose estimator followed by reconstruction in the object frame does not yield good reconstruction results. Instead, we presented a finetuning scheme to optimize the object surface and pose jointly during inference. We also showed that learning strong 3D priors benefits the 3D reconstruction of occluded objects. Fig. 5: Qualitative results on the DDAD dataset. Our method generalizes across datasets and input modalities, from dense depth images to sparse LIDAR point clouds. While our process still exhibits failure modes when the error in the estimated pose is large, this could be alleviated by combining pose estimation and 3D reconstruction in an end-to-end trainable manner. We hope our work will inspire further work in this direction.
2301.02118
Origin of Multifractality in Solar Wind Turbulence: the Role of Current Sheets
In this work, a multifractal framework is proposed to investigate the effects of current sheets in solar wind turbulence. By using multifractal detrended fluctuation analysis coupled with surrogate methods and volatility, two solar wind magnetic field time series are investigated, one with current sheets and one without current sheets. Despite the lack of extreme-events intermittent bursts in the current sheet-free series, both series are shown to be strongly multifractal, although the current sheet-free series displays an almost linear behavior for the scaling exponent of structure functions. Long-range correlations are shown to be the main source of multifractality for the series without current sheets, while a combination of heavy-tail distribution and nonlinear correlations are responsible for multifractality in the series with current sheets. The multifractality in both time series is formally shown to be associated with an energy-cascade process using the p-model.
Leonardo F. Gomes, Tiago F. P. Gomes, Erico L. Rempel, Silvio Gama
2023-01-05T15:53:03Z
http://arxiv.org/abs/2301.02118v1
# Origin of Multifractality in Solar Wind Turbulence: the Role of Current Sheets ###### Abstract In this work, a multifractal framework is proposed to investigate the effects of current sheets in solar wind turbulence. By using multifractal detrended fluctuation analysis coupled with surrogate methods and volatility, two solar wind magnetic field time series are investigated, one with current sheets and one without current sheets. Despite the lack of extreme-event intermittent bursts in the current sheet-free series, both series are shown to be strongly multifractal, although the current sheet-free series displays an almost linear behavior for the scaling exponent of structure functions. Long-range correlations are shown to be the main source of multifractality for the series without current sheets, while a combination of heavy-tailed distributions and nonlinear correlations is responsible for multifractality in the series with current sheets. The multifractality in both time series is formally shown to be associated with an energy-cascade process using the \(p\)-model. keywords: multifractals - turbulence - data analysis - statistical - solar wind ## 1 Introduction Fractals have been widely employed in nonlinear analysis along the past decades as a form of representing the complex topological structures produced by dynamical systems. These topological structures are subsets of the phase space that may represent chaotic attractors, stable or unstable manifolds, boundaries between basins of attraction, etc. Thus, when dynamical systems are investigated through nonlinear time series analysis, the fractal indices computed from the time series somehow represent the complexity of the structure of an underlying set on which the solution lies. Additionally, the dynamical structure could be represented either by a monofractal or a multifractal process. A monofractal process has a scaling law for a fluctuation function which is a linear function of statistical moments with a single scaling exponent. A multifractal process has a power-law scaling which is a nonlinear function of statistical moments with a range of scaling exponents (Salat et al., 2017). A monofractal scaling is to be expected from dynamical processes behind perfectly self-similar fractal sets, like deterministically generated Cantor sets (Cantor, 1883), or even from white noise time series (Ihlen, 2012); multifractals, on the other hand, are observed in inhomogeneous systems, such as strongly intermittent turbulence, where the presence of strong fluctuations related to coherent structures localized in space generates a departure from Gaussianity in probability distribution functions (PDFs) of small-scale structure functions (Carbone et al., 2004), as seen in several analyses of observational magnetohydrodynamic data (see, e.g., Marsch & Tu (1998), Burlaga (2001), and Bruno (2019) for reviews on turbulence, intermittency and multifractal scalings in the solar wind). A series of recent works have confirmed the complex and multifractal nature of solar wind fluctuations. Chang et al. (2004) studied the origin of complexity in space plasmas using MHD simulations, dynamic renormalization group and wavelet analysis, arguing that the turbulent plasmas in the solar wind and auroral regions are dominated by a combination of propagating modes and nonpropagating intermittent nonlinear structures, whose interactions with charged particles may lead to the energization of plasma populations such as auroral ions. 
Macek (2007) employed Voyager magnetic field data in the outer heliosphere and Helios plasma data in the inner heliosphere to show that multifractal spectra of intermittent solar wind fluctuations are consistent with that of the generalized two-scale weighted Cantor set. Bolzan & Rosa (2012) analyzed magnetic field data from the ACE satellite and conjectured that the presence of large scale coherent structures during coronal mass ejections (CME) decreases the multifractality, when compared with periods after the CME events. Wavelet-leader multifractal analysis of magnetospheric dissipations, as measured by the AL index, reveals that the magnetosphere is a multi-scale, complex, turbulent system, driven into a non-equilibrium self-organized state, which may explain the observations of repeatable and coherent substorm phenomena with underlying complex multifractal behavior in the plasma sheet (Valdivia et al., 2013). The interaction of the solar wind with the Earth's magnetosphere also contributes to multifractality in measurements of the geomagnetic activity, such as the geomagnetic induced current (Wirsing & Mili, 2020) and the Dst index (Ogunjo et al., 2021), although internal sources of multifractality must also be considered, as Gopinath (2016) suggests that multifractality of the auroral electrojet index is fairly independent of the solar activity cycle. Wawrzaszek et al. (2019) characterized multifractality in intermittent turbulence of heliospheric magnetic field fluctuations from Ulysses spacecraft, concluding that intermittency/multifractality decreases with heliospheric distance, a result that was confirmed by Kiran et al. (2021). Recent analysis of electron density fluctuations in the E-F valley region of the ionosphere performed with the multifractal detrended fluctuation analysis (MF-DFA) method shows that irregularities are multifractal, asymmetric, intermittent and non-homogeneous (Neelakshi et al., 2022). The direct link between intermittency and multifractality of magnetic and velocity field fluctuations in the solar wind was made clear in Salem et al. (2009). Using data from the Wind spacecraft, they applied the Haar wavelet transform to filter out intermittency from the time series and showed that the scaling exponents for the structure functions behave as a linear function of statistical moments, as in monofractal processes, therefore attributing multifractality in the solar wind to intermittency. Gomes et al. (2019) obtained a similar linear scaling after filtering out the current sheets from Cluster-1 intermittent magnetic field data, suggesting that the current sheets are the coherent structures responsible for the nonlinear scaling of the structure functions in the solar wind. This was confirmed after inspection of time series of days when current sheets were absent, which also showed a linear scaling. A question remained on whether the linear scalings found by Salem et al. (2009) and Gomes et al. (2019) indeed imply that the filtered time series are monofractal or not, i.e., is the nonlinearity of the distribution of scaling exponents of structure functions a general measure of multifractality or is it just an indication of intermittency, one among different possible sources of multifractality? One of the goals of the current work is to answer this question. 
In this sense, it is important to stress that the origin of multifractality is not always related to fat-tailed PDFs, as it may also be caused by different correlations in small and large fluctuations, such as linear or nonlinear correlations (Kantelhardt et al., 2002; Wu et al., 2018). The source of multifractality can be investigated by producing surrogates from the original time series. Two types of surrogates are useful in this context (Theiler et al., 1992; Lancaster et al., 2018). First, shuffling the amplitudes of the original signal breaks all long-range correlations, while keeping the PDF unchanged. Therefore, if the multifractality is due to fat-tailed PDFs, it cannot be removed by shuffling the series. If it is due solely to time correlations, the corresponding shuffled series will be monofractal. If both fat-tailed PDF and linear/nonlinear correlations are present, the multifractality of the shuffled series should be smaller than that of the original series (Barunik et al., 2012). The second type of surrogate is produced by randomizing the phases of the Fourier modes of the original time series, producing a new series with Gaussian PDF, but preserving the linear correlations of the original series. If the random phases time series becomes monofractal, then nonlinear correlations and/or non-Gaussian PDFs are the source of multifractality. If the multifractality is preserved in the random phases time series, then linear correlations are its source. Studies of surrogate time series have been conducted to probe the origin of multifractality in a wide range of contexts, including financial markets (Barunik et al., 2012), human gait diseases (Dutta et al., 2013), near-fault earthquake ground motions (Yang et al., 2015), solar irradiance fluctuations (Madanchi et al., 2017), air pollutants (Dong et al., 2017), meteorological time series of air pressure, air temperature and wind speed (Gos et al., 2021) and rainfall records (Sarker and Mali, 2021). The surrogate method was also employed in time series of CME linear speed during solar cycle 23 to conclude that the multifractality is due to both the broad PDF and long-range time correlations (Chattopadhyay et al., 2018). In the present paper, we use the method to reveal the role of current sheets in the origin of multifractality in the solar wind. By analyzing two qualitatively different magnetic field time series from Cluster-1, one filled with current sheets and another one void of current sheets, we develop a nonlinear methodology based on the MF-DFA method coupled with the volatility and surrogate time series. Thus, the contribution of small- and large-scale magnetic fluctuations can be quantified in different types of multifractal solar wind series. It is revealed that when the multifractality is not mainly due to the PDF, the scaling exponents display an almost linear behavior as a function of the moments of the structure function, despite the presence of strong multifractality in the series. In addition, we employ the \(p\)-model (Halsey et al., 1986; Meneveau and Sreenivasan, 1987) to confirm that the multifractality in both types of solar wind time series can be attributed to a turbulent energy cascade process. This paper is organized as follows. 
In section II, the MF-DFA methodology is briefly described; in section III, the multifractal analysis of two solar wind time series is conducted, including their volatility time series; section IV analyses the surrogate of the original and volatility time series, to determine if the source of the multifractality in the solar wind is due to PDF or correlations; section V presents the scaling exponent analysis of the original and surrogate time series; section VI describes the \(p\)-model analysis. Finally, section VII presents the conclusions. ## 2 MF-DFA The MF-DFA method is a generalization of the detrended fluctuation analysis (DFA) method for quantifying long-range correlations in non-stationary time series (Kantelhardt et al., 2002). The method identifies the scaling of \(q\)th-order moments of the time series (Norouzzadeh et al., 2007). The MF-DFA method consists of five steps: 1. The time series \(x_{k}\) (\(k=1,2,\cdots,N\)) is integrated: \[Y(i)=\sum_{k=1}^{i}\left[x_{k}-\langle x\rangle\right],\qquad i=1,...,N\] (1) where \(\langle x\rangle\) is the average value of the data set. 2. The series \(Y(i)\) is divided into \(N_{s}\equiv\mathrm{int}(N/s)\) non-overlapping segments with equal lengths \(s\). Since \(N\) is usually not a multiple of \(s\), some of the data points in the time series may be left out of the last segment. To fix this, the procedure is repeated starting from the opposite end of the time series and going backwards. Consequently, \(2N_{s}\) segments are obtained. 3. The local trend for each of the \(2N_{s}\) segments is calculated. Then the variance is given by \[F^{2}(s,\nu)=\frac{1}{s}\sum_{i=1}^{s}\left\{Y\left[(\nu-1)\,s+i\right]-y_{\nu}(i)\right\}^{2},\] (2) for each segment indexed by \(\nu=1,\ldots,N_{s}\) and \[F^{2}(s,\nu)=\frac{1}{s}\sum_{i=1}^{s}\left\{Y\left[N-(\nu-N_{s})\,s+i\right]-y_{\nu}(i)\right\}^{2}\] (3) for \(\nu=N_{s}+1,\ldots,2N_{s}\), where \(y_{\nu}\) is the \(m\)-th degree fitting polynomial of each segment \(\nu\). This polynomial detrending of order \(m\) in the \(Y\) profile eliminates trends up to order \(m-1\) in the original time series and specifies the type of MF-DFA applied. 4. The average over all segments is calculated to obtain the \(q\)th-order fluctuation function: \[F_{q}(s)=\left\{\frac{1}{2N_{s}}\sum_{\nu=1}^{2N_{s}}[F^{2}(s,\nu)]^{\frac{q}{2}}\right\}^{\frac{1}{q}}\,,\] (4) where, in general, the \(q\) parameter can take any real value except zero. For \(q=2\), the equation returns the DFA method. Steps 2 to 4 are repeated for different time scales \(s\). 5. The scaling behavior of the fluctuation function is defined by the log-log plot of \(F_{q}(s)\times s\) for each value of \(q\). If \(x_{k}\) has long-range correlations, for large values of \(s\), \(F_{q}(s)\) increases as a power-law, \[F_{q}(s)\sim s^{h(q)}\,.\] (5) The scaling exponents \(h(q)\) are the generalized Hurst exponents, defined as the slope of the \(\log F_{q}(s)\times\log(s)\) graph, where for \(h(2)\) we have the standard Hurst exponent (Hurst et al., 1965). For positive values of \(q\), \(h(q)\) describes the scaling behavior of segments with large fluctuations and for negative values of \(q\), \(h(q)\) describes the scaling behavior of segments with small fluctuations. For monofractal series, \(h(q)\) is independent of \(q\), but for multifractal series \(h(q)\) depends on \(q\). The generalized Hurst exponent is directly related to the Renyi exponent (Renyi, 1976) \(\tau(q)\) by \[\tau(q)=q\,h(q)-1\,. \tag{6}\]
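Steps 1-5 translate directly into code. The following is a minimal Python/numpy sketch of our own (function names are ours, not the authors'; it omits the special case \(q=0\) and any input validation) computing the fluctuation function of Eq. (4) and the exponents \(h(q)\) of Eq. (5):

```python
import numpy as np

def mfdfa(x, scales, qs, m=3):
    """Fluctuation function F_q(s) of Eq. (4) for a 1-D series x."""
    Y = np.cumsum(x - np.mean(x))              # step 1: profile
    F = np.zeros((len(qs), len(scales)))
    for j, s in enumerate(scales):
        Ns = len(Y) // s
        # step 2: forward and backward segmentation -> 2*Ns windows
        segs = [Y[v * s:(v + 1) * s] for v in range(Ns)]
        segs += [Y[len(Y) - (v + 1) * s:len(Y) - v * s] for v in range(Ns)]
        # step 3: variance about an order-m polynomial trend, Eqs. (2)-(3)
        t = np.arange(s)
        F2 = np.array([np.mean((g - np.polyval(np.polyfit(t, g, m), t)) ** 2)
                       for g in segs])
        # step 4: q-th order average, Eq. (4)
        for i, q in enumerate(qs):
            F[i, j] = np.mean(F2 ** (q / 2.0)) ** (1.0 / q)
    return F

def hurst_exponents(F, scales, qs):
    """Step 5: h(q) as the slope of log F_q(s) versus log s, Eq. (5)."""
    return np.array([np.polyfit(np.log(scales), np.log(F[i]), 1)[0]
                     for i in range(len(qs))])
```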
Besides \(h(q)\), another way to characterize the multifractality of a time series is by the singularity spectrum \(f(\alpha)\), which is related to \(\tau(q)\) via a Legendre transform, \[\alpha=\tau^{\prime}(q)\quad\text{and}\quad f(\alpha)=q\,\alpha-\tau(q)\,, \tag{7}\] where \(\alpha\) is the singularity exponent. This \(f(\alpha)\times\alpha\) relation represents the multifractal spectrum and has a concave parabolic shape. From the multifractal spectrum, it is possible to obtain a set of parameters to characterize each series: (i) the \(\alpha\) value where \(f(\alpha)\) is maximum, \(\alpha_{0}\); (ii) the \(\alpha\) width, \(\Delta\alpha=\alpha_{max}-\alpha_{min}\), where \(\alpha_{min}\) and \(\alpha_{max}\) are, respectively, the minimum and maximum values of \(\alpha\) that mark the base of the concave parabola in the multifractal spectrum (\(\Delta\alpha\) is a measure of multifractal strength); (iii) the asymmetry parameter: \[A=\frac{\alpha_{max}-\alpha_{0}}{\alpha_{0}-\alpha_{min}}\,, \tag{8}\] where \(A=1\) means the spectrum is symmetric, for \(A>1\) the spectrum is right-skewed asymmetric, and for \(A<1\) the spectrum is left-skewed asymmetric (Shimizu et al., 2002; de Freitas et al., 2016). A multifractal spectrum with a long right tail has a greater contribution from small fluctuations. By contrast, a multifractal spectrum with left asymmetry is influenced more by local fluctuations with large values (Ihlen, 2012). Another useful multifractal parameter can be extracted from the \(\tau(q)\times q\) relation. As can be seen from Eq. (6), \(\tau(q)\) has a linear dependence with \(q\) for monofractal series, where \(h(q)\) is constant. In contrast, for multifractal series, this dependence is nonlinear. The \(q\)-dependency of the Renyi exponent can be quantified by the coefficient of determination, \(R^{2}\). \(R^{2}\) measures the proportion of the variance for a dependent variable that is predictable by an independent variable in a linear regression model (Barrett, 1974). The coefficient of determination is given by: \[R^{2}=1-\frac{\sum_{i=1}^{n}(\tau_{i}-\hat{\tau}_{i})^{2}}{\sum_{i=1}^{n}(\tau_{i}-\bar{\tau})^{2}}\,, \tag{9}\] where \(\tau_{i}=\tau(q_{i})\) is the observed dependent variable, \(\hat{\tau}_{i}\) is the corresponding predicted value and \(\bar{\tau}\) is the mean of the observed data. \(R^{2}\) varies from 0 to 1, where in our case 1 represents a perfect fit to the linear dependence model. In other words, the measure of \(R^{2}\) for the \(\tau(q)\times q\) relation will be closer to 0 for multifractal series and closer to 1 for monofractal series. The MF-DFA method gives best results if the time series are reasonably stationary, i.e., if they have a noise-like structure. As suggested by Eke et al. (2002), it is possible to determine if the time series have noise-like structure by computing a monofractal detrended fluctuation analysis prior to conducting the MF-DFA analysis. Time series are noise-like if their Hurst exponent \(h(2)\) is between 0 and 1, and they are random-walk-like (nonstationary) if \(h(2)\) is above 1. Ihlen (2012) suggests that time series with \(h(2)\) above 1.2 should be differentiated before application of the MF-DFA analysis.
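The summary parameters of Eqs. (6)-(9) follow from \(h(q)\) by a finite-difference Legendre transform. Continuing the sketch above (again our own illustration, assuming the `qs` and `h` arrays from the previous snippet):

```python
import numpy as np

def multifractal_parameters(qs, h):
    """Delta-alpha, asymmetry A and R^2 of the linear fit to tau(q)."""
    tau = qs * h - 1.0                      # Renyi exponent, Eq. (6)
    alpha = np.gradient(tau, qs)            # alpha = dtau/dq, Eq. (7)
    f = qs * alpha - tau                    # singularity spectrum, Eq. (7)
    a0 = alpha[np.argmax(f)]                # alpha at the maximum of f
    width = alpha.max() - alpha.min()       # multifractal strength
    A = (alpha.max() - a0) / (a0 - alpha.min())          # asymmetry, Eq. (8)
    resid = tau - np.polyval(np.polyfit(qs, tau, 1), qs)
    R2 = 1.0 - np.sum(resid ** 2) / np.sum((tau - tau.mean()) ** 2)  # Eq. (9)
    return width, A, R2
```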
## 3 Multifractal analysis of solar wind data We analyze solar wind magnetic field data detected with the Fluxgate Magnetometer (FGM) onboard Cluster-1, with 22 Hz sampling frequency. Two time series of 24 hours each are investigated, one from 2008 March 9 and one from 2016 January 25. To reduce the computational time of the analysis, the data length has been reduced by using a decimation process. The low-pass Chebyshev Type I infinite impulse response filter was used with a reduction factor \(M=10\), order 8 and \(0.8/M\) cut-off frequency. This decimation process is described in Gomes et al. (2019). After decimating the time series, we apply the MF-DFA method with four input parameters: minimum scale \(s_{i}\), maximum scale \(s_{f}\), order of fluctuation function \(q\) and polynomial order \(m\). The scale refers to multiple segment sizes of the cumulative series and varies from a minimum segment size \(s_{i}\) to a maximum \(s_{f}\). In this work, we use \(s_{i}=10\) and \(s_{f}=N\), where \(N\) is the length of the time series; \(q\) varies between \(-20\) and \(20\) with an increment of \(\Delta q=0.25\), and \(m=3\). This choice of parameters was supported by several tests. The recommendation for large time series is to use a polynomial trend order around \(m=3\); \(s_{f}=N\) was chosen to avoid deformations in the shape of the multifractal spectra. Meanwhile, for the \(q\) parameter the use of values larger than 20 does not change the shape of the spectra significantly. ### MF-DFA analysis of the \(|B|\) time series Figure 1 shows the solar wind magnetic field time series studied in this section for days 2008 March 9 and 2016 January 25. In the upper panel, the time series for 2008 March 9 (red) and its first order differencing (black) are shown. As explained in the previous section, time-differencing is necessary in this case due to the high nonstationarity of this series (\(h(2)=1.23\)). Throughout the remainder of this section, only the differenced time series will be used for March 9. This time series was characterized by Gomes et al. (2019) as being permeated by large-scale current sheets. The green regions in the original time series denote current sheets found with Li's method (Li, 2008). The lower panel shows the time series for 2016 January 25, which is characterized by a higher degree of stationarity and the absence of current sheets (Gomes et al., 2019). Due to its higher stationarity (\(h(2)=0.96\)), there is no need to perform a differencing in this series. Figure 2 shows different multifractal measures of the two magnetic field time series. Figure 2(a) shows the multifractal spectra, which reveal a left asymmetry for the March 09 time series (red) and a right asymmetry for the January 25 series (blue). The left asymmetry indicates the stronger contribution to multifractality coming from large fluctuations associated with values of \(q>0\) in the current sheet-filled time series of March 09; the right asymmetry found for the current sheet-free time series of January 25 points to the greater contribution of small fluctuations to the multifractality (Ihlen, 2012). The width of the spectrum can be used as a measure of the degree of multifractality of the series (Shimizu et al., 2002). Comparing both spectra, it can be seen that they have almost the same width (\(\Delta\alpha\approx 0.541\) for March 9 and \(\Delta\alpha\approx 0.555\) for January 25), which may be surprising, since the time series of March 9 is visibly more intermittent, with strong bursts randomly interspersed in time. In this case, the difference in multifractality can be better quantified by the Renyi exponent \(\tau(q)\), shown in Fig. 2(b). 
It reveals a nonlinear behavior for both series, but with \(R^{2}\approx 0.804\) for March 9 and \(R^{2}\approx 0.986\) for January 25; thus, March 9 displays higher multifractality. ### MF-DFA analysis of the volatility time series In the previous section, the degree of multifractality, as provided by the width of the multifractal spectra, could not properly distinguish between the two time series under investigation, which is unexpected, given that the original series are not only visually very different, but one of them is known to be permeated by coherent structures (current sheets) and the other is not. This is probably because although the differenced time series of 2008 March 9 is apparently more intermittent than the series of 2016 January 25, most of the abrupt changes in \(|B|\) caused by the current sheets in the March 9 series have a small amplitude and, therefore, do not produce strong bursts in the time-differenced series. Such abrupt changes in \(|B|\) can be enhanced by employing the volatility, thus providing a way to investigate the role of current sheets in the multifractality. In the present section, we employ the volatility to enhance the distinct features of each series due to current sheets before repeating the MF-DFA analysis. The magnetic volatility, \(\mathrm{vol}_{mag}\), can be calculated from the standard deviations of the log magnetic return \(\Delta r_{\mathrm{mag}}(t)\) in a moving window of length \(\omega\) along \(N\) sample points (Tsay, 2010) \[\Delta r_{\mathrm{mag}}(t)=\log\left(\frac{|\mathbf{B}(t+\tau)|}{|\mathbf{B}(t)|}\right)\,, \tag{10}\] \[\mathrm{vol}_{\mathrm{mag}}(j)=\sqrt{\frac{1}{\omega-1}\sum_{i=j}^{\omega+j-1}(\Delta r_{\mathrm{mag}}(i)-\mu(j))^{2}}\,, \tag{11}\] where \(\tau\) is a time-lag, \(j=1,\ldots,N-\omega+1\) and \(\mu(j)\) is the mean \(\Delta r_{\mathrm{mag}}\) inside the window (Gomes et al., 2019). Note that since \(\Delta r_{\mathrm{mag}}\) involves computing a time difference with lag \(\tau\), there is no need to difference the original time series to remove nonstationarities prior to computation of the volatility. The \(\omega\) and \(\tau\) values are estimated from the Power Spectrum Density (PSD). Figure 3(a) shows the PSD for the March 9 time series, where the inertial range is the blue region between the dashed lines. This region was chosen as the frequency interval where the slope of the fitted line is -5/3, following Kolmogorov's K41 theory (Kolmogorov, 1941) for fully developed turbulence (Frisch, 1995). The frequency in the middle of the inertial range marks the scale used to define both \(\tau\) and \(\omega\). It is also the scale used in Li's method to detect the current sheets, shown in Fig. 1. In this way, we define \(\tau=\omega=50\) s. Figure 3(b) shows the PSD for the January 25 series. Figure 4 exhibits the volatility time series for 2008 March 9 (upper panel, red) and for 2016 January 25 (lower panel, blue) from the decimated magnetic field data. Recall that the upper series has many current sheets while the lower one has none. Note that, unlike the January 25 series, the March 9 volatility series has several extreme events. Most of these high peaks are due to the abrupt changes in the magnetic field that take place when the satellite crosses a current sheet in the solar wind, as evidenced by the coincidence between extreme events in the volatility and current sheets detected by Li's method (see Fig. 2(a),(b) in Gomes et al. (2019)). 
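As an illustration (ours, not the authors' code), the volatility of Eqs. (10)-(11) amounts to a moving standard deviation of log returns; `tau` and `w` below are expressed in samples, so the 50 s scale must first be converted with the sampling rate of the decimated series:

```python
import numpy as np

def magnetic_volatility(B, tau, w):
    """Volatility of |B| per Eqs. (10)-(11); ddof=1 gives the
    1/(w-1) normalization of Eq. (11)."""
    r = np.log(B[tau:] / B[:-tau])                 # log returns, Eq. (10)
    return np.array([np.std(r[j:j + w], ddof=1)    # moving std, Eq. (11)
                     for j in range(len(r) - w + 1)])
```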
As a consequence, the multifractal spectra obtained from the volatility of both series are very different, as seen in Fig. 5(a). Now, the spectrum of the intermittent time series of March 9 is much broader than the one from January 25. The \(\alpha-\)width is \(\Delta\alpha=0.94134\) for March 9 and \(\Delta\alpha=0.74921\) for January 25. The volatility has enhanced the contribution of the extreme events due to current sheets, thus showing the signature of coherent structures present in the solar wind that were partially hidden in the multifractal analysis of the original time series. The Renyi exponents are shown in Fig. 5(b); once again, the curve for March 9 is more concave than for January 25, reflecting its higher level of multifractality. The coefficient of determination for the Renyi exponents is \(R^{2}=0.97464\) for the volatility of March 9 and \(R^{2}=0.98125\) for the volatility of January 25. It is clear that the volatility has highlighted the role of current sheets in the multifractal singularity spectrum. ## 4 MF-DFA of surrogate time series According to Madanchi et al. (2017), there are two features in a time series that can lead to its multifractality: (i) the presence of heavy-tailed PDFs, as in highly intermittent series, and (ii) the existence of linear and non-linear correlations. In this section, we try to identify the origin of the multifractality in the solar wind by means of two surrogate time series derived from the original \(|B|\) data. As mentioned in the introduction, the shuffled time series is a random permutation of the original time series in the real space that destroys all temporal correlations, while keeping the same PDF for the amplitudes of \(|B|\). On the other hand, the random phases surrogate is generated from the Fourier Transform of the original \(|B|\) series. A new Fourier series is generated by shuffling the phases of the Fourier modes while keeping their power spectrum (Maiwald et al., 2008). The inverse Fourier transform of this new frequency spectrum is the random phases surrogate, which keeps the power spectrum and linear autocorrelation of the original series, but has a Gaussian PDF and breaks the nonlinear correlations. After generating these two surrogates, we repeat the multifractal analysis described in the previous section; if the shuffled surrogate has a multifractal spectrum which is considerably narrower than the spectrum of the original series, it means that time correlations are an important source of multifractality in the original time series. If the random phases surrogate has a multifractal spectrum which is considerably narrower than the spectrum of the original series, it means that fat-tailed PDFs and/or nonlinear correlations are important for the multifractality. Note that both kinds of multifractality mentioned above can be simultaneously present in a time series (Norouzzadeh et al., 2007; Madanchi et al., 2017). If both the shuffled and random phases surrogates produce monofractal spectra, then nonlinear correlations (but not fat-tailed PDFs) are the source of multifractality. In the following subsections, we perform this analysis for both the \(|B|\) and volatility time series of 2008 March 9 and 2016 January 25. ### Magnetic Field time series, 2008 March 9 Figure 6 shows the differenced time series of \(|B|\) for March 9 (red) with its shuffled (green) and random phases (magenta) surrogates. 
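The two surrogates just described can be generated in a few lines of numpy (a sketch of ours; for even-length series the Nyquist-bin phase is pinned so the inverse transform stays real):

```python
import numpy as np

rng = np.random.default_rng(0)

def shuffled_surrogate(x):
    """Random permutation: destroys all temporal correlations,
    keeps the amplitude PDF."""
    return rng.permutation(x)

def random_phases_surrogate(x):
    """Keeps the power spectrum (linear correlations), gaussianizes
    the PDF by randomizing the Fourier phases."""
    X = np.fft.rfft(x)
    ph = rng.uniform(0.0, 2.0 * np.pi, size=X.size)
    ph[0] = 0.0                    # keep the mean real
    if len(x) % 2 == 0:
        ph[-1] = 0.0               # keep the Nyquist bin real
    return np.fft.irfft(np.abs(X) * np.exp(1j * ph), n=len(x))
```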
Clearly, the shuffled surrogate keeps the extreme events of the differenced \(|B|\) series, but the same events are absent from the random phases surrogate. Figure 7(a) displays the multifractal spectra for the March 9 original and surrogate time series. For the shuffled spectrum (green) we see a small reduction in the width when compared with the original one (red). This means that there is a contribution from correlations to multifractality, along with the contribution of the PDF. Considering the random phases spectrum (magenta), its width reduces drastically (the \(\Delta\alpha\) variation is about 0.32), which points to a significant contribution to multifractality coming from a non-Gaussian PDF and/or nonlinear correlations. Figure 1: Solar wind time series of \(|B|\) measured by Cluster-1. (a) For 2008 March 9 (red), containing current sheets (green), and its first order differencing (black); (b) time series of \(|B|\) for 2016 January 25 (blue), without current sheets. Figure 2: (a) Multifractal spectrum of \(|B|\) for 2008 March 9 (red), and 2016 January 25 (blue). (b) Renyi exponents for 2008 March 9 (red), and 2016 January 25 (blue). Figure 3: Power spectral density for solar wind magnetic field of (a) 2008 March 9, and (b) 2016 January 25. The blue region is the inertial range and the red line is the linear fit for this interval, with a slope equal to -5/3 for March 9 and slope -3/2 for January 25. The conclusion from both spectra is that the PDF has the strongest contribution to multifractality. The contribution of the PDF is due to the presence of strong intermittent bursts (extreme events) in the March 9 time series. Since these bursts have been shown to be related to large current sheets (see Gomes et al. (2019)), the current sheets can be seen as the origin of most of the multifractality in this time series. Figure 7(b) confirms this conclusion by showing the Renyi exponent as a function of \(q\), where the random phases surrogate has a smaller concavity than the shuffled surrogate. ### Magnetic field time series, 2016 January 25 Figure 8 shows the time series for January 25 (blue) with its shuffled (green) and random phases (magenta) surrogates. Figure 9(a) shows a significant width reduction in both surrogate spectra in comparison with the original spectrum (blue). The spectrum of the shuffled series (green) has a width \(\Delta\alpha=0.194\), indicating a difference of 0.36 with respect to the spectrum of \(|B|\). Similarly, the spectrum for the random phases series has a small width, about \(\Delta\alpha=0.32\), a difference of 0.23 with respect to the spectrum of \(|B|\). So, there is a strong influence from long-range correlations as well as non-Gaussianity on the January 25 magnetic field multifractality, but the contribution of the correlations is preponderant, since the shuffled spectrum is considerably narrower than the random phases spectrum. ### Volatility time series, 2008 March 9 We proceed with the analysis of the origin of the multifractality for March 9 using the volatility, as shown in Fig. 10 for the original (red), shuffled (green) and random phases (magenta) time series. Figure 4: Volatility of solar wind magnetic field time series for (a) 2008 March 9, and (b) 2016 January 25. Figure 5: (a) Multifractal spectra for the volatility in 2008 March 9 (red), and 2016 January 25 (blue). (b) Renyi exponents for the volatility in 2008 March 9 (red), and 2016 January 25 (blue). 
Figure 6: Differenced time series for 2008 March 9 (red) and the respective surrogates: shuffled (green), and random phases (magenta). Figure 7: (a) Multifractal spectrum of \(|B|\) for 2008 March 9 (red) and the respective surrogates: shuffled (green), and random phases (magenta). (b) Renyi exponents for 2008 March 9 (red) and the respective surrogates: shuffled (green), and random phases (magenta). The corresponding multifractal spectra in Fig. 11(a) show a wide parabola for the original volatility series (red) and two narrower parabolas related to its shuffled (green) and random phases (magenta) series. The random phases spectrum has a width of about \(\Delta\alpha=0.39\) and the shuffled spectrum has a width of \(\Delta\alpha=0.35\). Since both spectra have approximately the same width, it shows an important feature that was not so clear from the multifractal spectra of the \(|B|\) surrogate series (Fig. 7), that is, the importance of the nonlinear correlations, which play a key role, together with the PDF, in the origin of the multifractality for the March 9 series. Since the volatility is computed with a lag-time of \(\tau=50\) s, it is better suited for measuring the relevance of long-range nonlinear correlations than the time-differenced \(|B|\) series. Figure 11(b) confirms that the shuffled and random phases series have almost linear Renyi exponents; thus, the series are closer to monofractal. ### Volatility time series, 2016 January 25 Figure 12 shows the volatility of the January 25 time series (blue) and its shuffled (green) and random phases (magenta) surrogates. Figure 13(a) shows the corresponding multifractal spectra. Once again, the reduction in the width for both surrogate spectra means that a mutual contribution to multifractality coming from long-range correlations and non-Gaussianity is present, with a clear predominance of the long-range correlation effects, since the shuffled spectrum is much narrower than the random phases spectrum. A quantitative comparison of all the results for the \(|B|\) time series and volatility time series of March 9 and January 25 is provided by Tables 1 to 3. Table 1 shows \(R^{2}\) for the Renyi exponent of \(|B|\) and its volatility for March 9 and January 25; Table 2 shows the width of the multifractal spectra, \(\Delta\alpha\); Table 3 shows the asymmetry of the spectra, \(A\). Figure 8: Time series for 2016 January 25 (blue) and the respective surrogates: shuffled (green), and random phases (magenta). Figure 9: (a) Multifractal spectrum for 2016 January 25 (blue) and the respective surrogates: shuffled (green), and random phases (magenta). (b) Renyi exponents for 2016 January 25 (blue) and the respective surrogates: shuffled (green), and random phases (magenta). Figure 10: Time series of the volatility for 2008 March 9 (red) and the respective surrogates: shuffled (green), and random phases (magenta). Figure 11: (a) Multifractal spectrum for the volatility of 2008 March 9 (red) and the respective surrogates: shuffled (green), and random phases (magenta). (b) Renyi exponents for the volatility of 2008 March 9 (red) and the respective surrogates: shuffled (green), and random phases (magenta). In general, all spectra for January 25 are right-asymmetric due to the importance of small scale fluctuations; for March 9, some spectra are left-asymmetric due to the importance of large-scale fluctuations, but the random phases spectra show right asymmetry, since in the random phases surrogate the effects of non-Gaussian PDFs are destroyed. 
## 5 Zeta Function Another function typically employed in multifractal analyses of time series is the zeta function. Consider the structure function for \(|B|\) (Frisch, 1995): \[S_{p}(\tau)=\left\langle\left|\,|B(t+\tau)|-|B(t)|\,\right|^{p}\right\rangle, \tag{12}\] where \(\left\langle\cdot\right\rangle\) is the time average, \(\tau\) is the time lag and \(p\) denotes the statistical moments of the time series of \(B\). Assuming scale invariance inside the inertial range, \(S_{p}\) follows a power law \[S_{p}(\tau)\sim\tau^{\zeta(p)}\,, \tag{13}\] where \(\zeta(\cdot)\) is the zeta function or scaling exponent of the structure function. So, \(\zeta(p)\) is obtained as the slope of the \(\log S_{p}(\tau)\times\log\tau\) plot. The importance of this parameter comes from Kolmogorov's K41 theory (Kolmogorov, 1941) and the IK (Iroshnikov-Kraichnan) theory (Iroshnikov, 1964; Kraichnan, 1965) of self-similarity and scale invariance inside the inertial range for a homogeneous and isotropic turbulence, where the \(\zeta\) function was shown to be a linear function of \(p\), with \(\zeta(p)=p/3\) for K41 and \(\zeta(p)=p/4\) for IK. In Fig. 14(a), the linear K41 theoretical zeta scaling exponent function is shown by the black dashed line while the IK scaling exponent is denoted by a dotted line. The top panel (a) also shows the zeta scaling exponent computed from the time series of \(|B|\) for the intermittent series of March 09 (red line with circles) and for the current sheet-free series of January 25 (blue line with diamonds). The zeta function for the March 09 series clearly departs from the linear behavior, as expected for multifractal intermittent series, but, surprisingly, the zeta function exhibits an almost linear relation with \(p\) in the case of January 25, despite the fact that both series have multifractal spectra with similar widths (see Fig. 2(a)). Thus, one should be cautious before using the behavior of the scaling exponent as a definite measure of multifractality, although it is a good measure of intermittency. To confirm this result, Fig. 14(b) compares the zeta scaling exponents of the March 09 \(|B|\) series (red line with circles) with the zeta scaling exponents of its random phases series (magenta line with triangles). Since the random phases series has a Gaussian PDF, it removes from the original series the intermittent extreme events responsible for the fat-tailed PDF and the zeta scaling exponent becomes linear, following the K41 line. This result confirms the importance of the contribution from a fat-tailed PDF to the multifractality of the March 09 series. \begin{table} \begin{tabular}{l c c c c} \hline & \multicolumn{2}{c}{March 9} & \multicolumn{2}{c}{January 25} \\ & \(|B|\) & Volatility & \(|B|\) & Volatility \\ \hline Original & 0.80413 & 0.97464 & 0.98597 & 0.98125 \\ Shuffle & 0.97505 & 0.96748 & 0.99537 & 0.99573 \\ Random Phases & 0.98185 & 0.99637 & 0.99601 & 0.99424 \\ \hline \end{tabular} \end{table} Table 1: \(R^{2}\) of the Renyi exponent for magnetic field and volatilities of 2008 March 9 and 2016 January 25 \begin{table} \begin{tabular}{l c c c c} \hline & \multicolumn{2}{c}{March 9} & \multicolumn{2}{c}{January 25} \\ & \(|B|\) & Volatility & \(|B|\) & Volatility \\ \hline Original & 0.54112 & 0.94134 & 0.55568 & 0.74921 \\ Shuffle & 0.36663 & 0.40332 & 0.19468 & 0.19873 \\ Random Phases & 0.21802 & 0.39299 & 0.32181 & 0.43088 \\ \hline \end{tabular} \end{table} Table 2: Width of \(\alpha\), \(\Delta\alpha\), for magnetic field and volatilities of 2008 March 9 and 2016 January 25. 
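The scaling exponents of Eqs. (12)-(13) can be estimated with a short routine (a sketch of ours; the lags must be restricted to the inertial range identified in the PSD):

```python
import numpy as np

def zeta_exponents(absB, taus, ps):
    """zeta(p) as the log-log slope of S_p(tau), Eqs. (12)-(13);
    absB is the |B| series, taus are lags in samples."""
    S = np.array([[np.mean(np.abs(absB[t:] - absB[:-t]) ** p) for t in taus]
                  for p in ps])
    return np.array([np.polyfit(np.log(taus), np.log(S[i]), 1)[0]
                     for i in range(len(ps))])
```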
Figure 12: Time series of the volatility for 2016 January 25 (blue) and the respective surrogates: shuffled (green), and random phases (magenta). Figure 13: (a) Multifractal spectrum for the volatility of 2016 January 25 (blue) and the respective surrogates: shuffled (green), and random phases (magenta). (b) Renyi exponents for the volatility of 2016 January 25 (blue) and the respective surrogates: shuffled (green), and random phases (magenta). In Fig. 14(c), the same analysis is done for the January 25 series, where both the original series and its random phases show an IK linear behavior, since neither series has a fat-tailed PDF, although they have multifractal spectra (see the blue and magenta spectra in Fig. 9(a)). We conclude from this that the \(\zeta\)-function is a good measure of multifractality due to the PDF, but misses the contribution of long-range correlations to the multifractality. ## 6 \(P\)-Model In section 4, we showed that the multifractal spectra of the volatility of the solar wind are predominantly due to nonlinear and linear correlations in the time series of January 25 and due to the PDF and nonlinear correlations for the March 9 time series. The presence of long-range nonlinear correlations in both series is the signature of a nonlinear dynamical system (possibly with some stochastic component) governing the behavior of both series. In the present section, we employ the \(p\)-model (Halsey et al., 1986; Meneveau and Sreenivasan, 1987) to show that both the correlations and the extreme events mentioned above are actually a consequence of turbulent energy-cascade processes with different scaling laws that depend on the presence or absence of current sheets in the original time series. The \(p\)-model is a model for a non-homogeneous energy-cascading process in the inertial range of fully-developed turbulence based on the generalized Cantor set. Consider that the flux of kinetic energy from eddies of size \(L\) to smaller eddies is represented by a dissipation \(E_{L}\). In the one-dimensional version of the \(p\)-model, \(L\) is the length of an interval. Suppose that an eddy of size \(L\) is usually divided into two smaller eddies (i.e., two sub-intervals) of sizes \(l_{1}L\) and \(l_{2}L\), where \(0<l_{1}<l_{2}<1\) are the size factors, with the energy flux \(E_{L}\) being distributed onto these sub-eddies with different probabilities \(p_{1}\) and \(p_{2}\), i.e., the new dissipation values are \(p_{1}\,E_{L}\) and \(p_{2}\,E_{L}\). In practice, one can start the process with \(L=E_{L}=1\). Then, each new eddy is further sub-divided into two smaller eddies with the same size factors \(l_{1}\) and \(l_{2}\) and probabilities \(p_{1}\) and \(p_{2}\). This process may be repeated until the sub-intervals reach the Kolmogorov dissipation scale. At each cascading step \(n\), there will be \(\binom{n}{m}\) segments with length \(l_{1}^{m}\,l_{2}^{n-m}L\) and dissipation \(p_{1}^{m}\,p_{2}^{n-m}E_{L}\), for \(m=0,1,\ldots,n\). As shown by Halsey et al. (1986) for the general two-scale Cantor set, it is possible to obtain the analytic expressions for the singularity exponent \(\alpha\) and the singularity spectrum \(f\) as \[\alpha=\frac{\ln p_{1}+(n/m-1)\,\ln p_{2}}{\ln l_{1}+(n/m-1)\,\ln l_{2}}\;, \tag{14}\] \[f=\frac{(n/m-1)\,\ln(n/m-1)-(n/m)\,\ln(n/m)}{\ln l_{1}+(n/m-1)\,\ln l_{2}}\;. \tag{15}\]
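Equations (14)-(15) and the cascade itself are easy to script. The sketch below is our own illustration; it anticipates the definitions \(p_{2}=1-p_{1}-dp\) and \(l_{2}=1-l_{1}\) given next, and the cascade generator uses equal sub-lengths purely to obtain a uniformly sampled series (the paper's fit also adjusts the size factor \(l_{1}\)):

```python
import numpy as np

def pmodel_spectrum(p1, dp, l1, n=40):
    """Analytic (alpha, f) curve of the two-scale Cantor set,
    Eqs. (14)-(15); the endpoint m = n is excluded to avoid log(0)."""
    p2, l2 = 1.0 - p1 - dp, 1.0 - l1
    m = np.arange(1.0, n)
    r = n / m - 1.0                                        # r = n/m - 1
    denom = np.log(l1) + r * np.log(l2)
    alpha = (np.log(p1) + r * np.log(p2)) / denom          # Eq. (14)
    f = (r * np.log(r) - (n / m) * np.log(n / m)) / denom  # Eq. (15)
    return alpha, f

def pmodel_series(p1, dp, n_iter=15, seed=1):
    """One cascade realization: each segment passes fractions p1 and
    p2 = 1 - p1 - dp of its dissipation to its two halves in random order."""
    rng = np.random.default_rng(seed)
    E = np.array([1.0])
    for _ in range(n_iter):
        w = np.where(rng.random(E.size) < 0.5, p1, 1.0 - p1 - dp)
        E = np.column_stack((E * w, E * (1.0 - dp - w))).ravel()
    return E

# e.g., with the March 9 fit quoted below:
# alpha, f = pmodel_spectrum(0.71, 0.17, 0.68)
```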
For each \(n\) and given values of \(l_{1},l_{2},p_{1}\) and \(p_{2}\), the variation of \(m\) will provide the different values of \(\alpha\) and \(f\) for the singularity spectrum. Since \(0\leq m\leq n\) and \(m\) is an integer, larger values of \(n\) provide a better definition of the spectrum. For a cascading process with direct energy dissipation in the inertial range, we have \(p_{1}+p_{2}<1\) (Meneveau and Sreenivasan, 1987). This means that a new \(dp\) dissipation parameter must be included, where \(dp=1-p_{1}-p_{2}\). Thus, we define \(p_{2}=1-p_{1}-dp\), as well as \(l_{2}=1-l_{1}\), in Eqs. (14) and (15). Figure 15 shows the MF-DFA multifractal spectra for the volatility series of March 9 (red circles) and January 25 (blue diamonds). The \(p\)-model fits obtained from Eqs. (14) and (15) are also shown (black line with dots). The values of \(p_{1},dp\) and \(l_{1}\) were obtained with a Monte Carlo method that minimized the mean squared error between the original and fitted spectra. For March 9, we obtained \(p_{1}=0.71\), \(dp=0.17\) and \(l_{1}=0.68\). For January 25, we obtained \(p_{1}=0.51\), \(dp=0.11\) and \(l_{1}=0.66\). The agreement between the observational and theoretical curves confirms that the solar wind multifractal spectra can be obtained from a turbulence cascade process. This is a remarkable result, since the \(p\)-model was specifically elaborated to represent turbulent cascade processes, and will usually not be able to approximate the spectra of other processes. Next, we compare the turbulent time series behind the \(p\)-model spectra with the observational solar wind volatility time series in terms of their PSDs. Figure 14: (a) Zeta functions for the magnetic field time series for 2008 March 9 (red circles) and 2016 January 25 (blue diamonds). (b) Zeta functions for \(|B|\) 2008 March 9 (red circles) and its Random Phases (magenta triangles). (c) Zeta functions for \(|B|\) 2016 January 25 (blue diamonds) and its Random Phases (magenta triangles). The dashed lines represent the K41 scaling and the dotted lines, the IK scaling. Figure 15: Left: Multifractal spectrum for the volatility of 2008 March 9 (red circle) and its \(p\)-model fit (black line with dots). Right: Multifractal spectrum for the volatility of 2016 January 25 (blue diamond) and its \(p\)-model fit (black line with dots). To obtain the \(p\)-model PSDs, we use the probabilities and size factors previously obtained with the Monte Carlo method. By iterating the generalized two-scale Cantor set model, we produce two \(p\)-model time series. Figure 16 shows a comparison of the solar wind volatility time series with the \(p\)-model time series. The two upper panels depict the solar wind series for March 9 (a) and the corresponding \(p\)-model (b); the two lower panels depict the solar wind series for January 25 (c) and the corresponding \(p\)-model (d). The qualitative similarity between observational and \(p\)-model time series is apparent in both cases. A comparison of observed and simulated PSDs is shown in Fig. 17. Figure 17(upper panels) shows the PSDs for the volatility time series of 2008 March 9 (left) and 2016 January 25 (right). The blue region between the vertical dashed lines represents the inertial range and the red line is the linear regression with slope \(-5/3\) for the March 9 series and \(-3/2\) for the January 25 series. 
Thus, the highly intermittent series of March 9 (with current sheets) exhibits a K41 scaling, whereas the January 25 series (without current sheets) shows an IK scaling. This fact had been previously established by Li et al. (2011) and confirmed by Gomes et al. (2019) using PSDs computed from the time series of \(|B|\). The PSDs computed from the \(p\)-model time series are shown in Fig. 17(lower panels), and they reveal K41 scaling for the March 9 series and IK scaling for the January 25 series, just like in the original solar wind series. Note that in both cases the inertial range can be extended almost throughout the whole PSDs shown, since our \(p\)-model has small dissipation. We conclude that a K41 intermittent turbulence cascade is behind the multifractality of the current sheet-filled time series of March 9 and an IK turbulence cascade is the origin of the multifractality of the January 25 series. This result is consistent with other time series analysed by us, which show that current sheets are responsible for the K41 turbulence multifractality and the absence of current sheets results in an IK turbulence multifractality in the solar wind (see Table 4 in Gomes et al. (2019)). ## 7 Conclusions We have presented a new methodology for multifractal analysis of solar wind magnetic field data, based on MF-DFA, volatility and surrogate time series. The MF-DFA provides a standard way to generate the singularity spectrum and the Renyi exponent; the volatility enhances the extreme events, stressing the differences between series with current sheets and series without current sheets; the surrogate time series provide a way to infer the origin of multifractality. Additionally, the \(p\)-model was used to reproduce the multifractal behavior of the solar wind series, indicating that a nonlinear turbulence energy cascade dynamical system is behind the observed dynamics. A similar framework for multifractal analysis, but without the volatility and the \(p\)-model, was used by Chattopadhyay et al. (2018) in the analysis of CME linear speed data in the solar wind. In order to keep the paper reasonably short, we have limited our presentation to only two time series, but we have tested our techniques in other series and found that the conclusions presented are robust. An example of analysis with two other time series is included in the supplementary material (online). Further exploration of the methodology is left for future works. Just like in Gomes et al. (2019), we found the volatility to be very useful to highlight the role of current sheets. In our case, they increase the signature of multifractality due to the PDF in the singularity spectra. Figure 16: (a) Volatility time series for 2008 March 9 (red) and (b) generated \(p\)-model time series (black) at the \(10^{th}\) iteration. (c) Volatility time series for 2016 January 25 (blue) and (d) generated \(p\)-model time series (black) at the \(15^{th}\) iteration. Figure 17: (a) Left: power spectral density for 2008 March 9 volatility. (a) Right: power spectral density for 2016 January 25 volatility. (b) Left: power spectral density for the generated \(p\)-model time series from 2008 March 9 volatility. (b) Right: power spectral density for the generated \(p\)-model time series from 2016 January 25 volatility. The blue regions mark the inertial range and the red lines are the linear fits for those intervals. The surrogate analysis of both original and volatility series shows that for time series with current sheets, multifractality is due to 
both intermittency and nonlinear correlations; for time series without current sheets, it is predominantly produced by the long-range correlations. The \(p\)-model analysis reveals that those are mainly nonlinear correlations, since the process behind the statistics is a nonlinear turbulent energy cascade. So, turbulence is the common source of the multifractality, but current sheets are the source of the left asymmetry of the singularity spectrum, as well as the nonlinear scaling exponent for the structure functions. In the absence of current sheets, the small-amplitude fluctuations are the main source of the right asymmetry of the singularity spectrum. It is important to stress that despite being a multifractal process, the current sheet-free series exhibits an almost linear scaling exponent for the structure functions, which is sometimes confused with a monofractal process in the literature. Our results indicate that the Renyi exponent is more sensitive to multifractality due to correlations than the structure function scaling exponent (zeta function). In dealing with separate cases where the presence or absence of current sheets is considered, we are attacking one of the "nine outstanding questions of solar wind physics", posed by Viall & Borovsky (2020), namely, the origin and evolution of the mesoscale (timescales in the range of minutes up to a few hours) plasma and magnetic-field structure of the solar wind. These current sheets have been associated with the border between adjacent flux tubes (Bruno, 2019), while also being related to nonlinear turbulent interactions rather than the presence of advected pre-existing flux-tube structures (Bowen et al., 2018). In the present work, we do not focus on the origin of those coherent structures, but measure their weight on the statistics of solar wind fluctuations. We do this not only through Fourier spectral indices and the scaling of structure functions, as in Salem et al. (2009), but their contribution to multifractality is explored in depth through the MF-DFA, volatility and surrogate techniques. As we said, our results reveal that although the scaling of the structure functions may be almost linear for series without current sheets, the singularity spectra may still display broad parabolas, the signature of highly multifractal signals. Thus, the scaling exponent of structure functions is adequate to measure multifractality due to PDFs, but not for multifractality due to long-range correlations, where the Renyi exponent and singularity spectra should be adopted. Multifractal series with nearly linear behavior of the scaling exponents were also reported in Tam et al. (2010) (see their Fig. 4), where the rank-order multifractal analysis (ROMA) is employed in the description of auroral zone electric-field fluctuations. In conclusion, the basic question related to mesoscale plasma turbulence in the solar wind is not whether it is monofractal or multifractal, but if the source of the ubiquitous multifractality is the PDF or the long-range correlations. The short answer is that in the presence of current sheets, the PDF has a strong contribution to multifractality, but in their absence, it is mainly due to correlations. It would be interesting to check if the monoscaling of the structure functions reported in previous solar wind time series, as in Kiyani et al. 
(2009, 2013) and Bruno (2019) for turbulence at kinetic scales, indeed reveal monofractality or if they indicate, in fact, multifractal series due to correlations and not due to intermittency. ## Acknowledgements L.F.G. acknowledges Brazilian agency CAPES for the financial support; E.L.R. acknowledges Brazilian agencies CAPES (grant 88887.309065/2018-01) and CNPq (Grant 306920/2020-4) for their financial support, as well as FCT--Fundação para a Ciência e a Tecnologia (Portugal); S.G. was partially supported by (i) CMUP, member of LASI, which is financed by national funds through FCT - Fundação para a Ciência e a Tecnologia, I.P., under the project with reference UIDB/00144/2020, and (ii) project SNAP NORTE-01-0145-FEDER-000085, financed by ERDF through NORTE2020 under Portugal 2020 Partnership Agreement. ## Data Availability The data used for this analysis can be obtained from European Space Agency (ESA) at the Cluster Science Archive: [https://csa.esac.esa.int/csa-web/](https://csa.esac.esa.int/csa-web/) (last access: 2 December 2020, ITA, 2020).
2309.01233
An FLRW accelerating universe model in Weyl type $f(Q)$ gravity and Observational Constraints
We propose to develop a cosmological model of the universe based on Weyl type $f(Q)$ gravity which shows the transition from deceleration in the past to acceleration at present by considering a particular functional form of $f(Q)$ gravity as $f(Q) = H_0^2\,(\alpha_1 + \alpha_2 \log(H_0^{-2} Q))$. We have solved Weyl type $f(Q)$ gravity field equations numerically and have obtained numerical solutions to the Hubble and deceleration parameters, distance modulus, and apparent magnitudes of stellar objects like SNIa Supernovae. We have also obtained numerical solutions for the Weyl vector $w$, non-metricity scalar $Q$, and the Lagrangian multiplier $\lambda$ appearing in the action of $f(Q)$ gravity. We have compared our theoretical solutions with the error bar plots of the Observed Hubble data set of $77$ points, $580$ distance modulus SNIa data set, and $1048$ supernova Pantheon data sets of apparent magnitudes. It is found that our results fit well with the observed data set points. The model envisages a unique feature: although the universe is filled with a perfect fluid in the form of dust, whose pressure is zero, the Weyl-vector-dominated $f(Q)$ term creates acceleration in it.
G. K. Goswami, Rita Rani, J. K. Singh, Anirudh Pradhan
2023-09-03T17:42:57Z
http://arxiv.org/abs/2309.01233v2
# FLRW cosmology in Weyl type \(f(Q)\) gravity and observational constraints ###### Abstract We propose to develop a cosmological model of the universe based on Weyl type \(f(Q)\) gravity which shows the transition from deceleration in the past to acceleration at present by considering a particular functional form of \(f(Q)\) gravity as \(f(Q)=H_{0}^{2}\,(\alpha_{1}+\alpha_{2}\,\log(H_{0}^{-2}Q))\). We have solved Weyl type \(f(Q)\) gravity field equations numerically and have obtained numerical solutions to the Hubble and deceleration parameters, distance modulus, and apparent magnitudes of stellar objects like SNIa Supernovae. We have also obtained numerical solutions for the Weyl vector \(w\), non-metricity scalar \(Q\), and the Lagrangian multiplier \(\lambda\) appearing in the action of \(f(Q)\) gravity. We have compared our theoretical solutions with the error bar plots of the Observed Hubble data set of 77 points, 580 distance modulus SNIa data set, and 1048 supernova Pantheon data sets of apparent magnitudes. It is found that our results fit well with the observed data set points. The model envisages a unique feature: although the universe is filled with a perfect fluid in the form of dust, whose pressure is zero, the Weyl-vector-dominated \(f(Q)\) term creates acceleration in it. PACS number: 98.80 cq Keywords: Weyl-type \(f(Q)\) gravity, FLRW metric. ## I Introduction In the year 1915, Einstein completely replaced the instantaneous action-at-a-distance nature of gravitation with a field theory of general relativity (GR) [1; 2]. Gravitation was geometrized due to its permanent nature. The uniform distribution of gravitational structures in the universe over cosmic range makes it a spatially homogeneous and isotropic 4-dimensional space-time of constant curvature. These were the novel ideas of GR. Long before Einstein, Riemann [3] developed the geometry of higher dimensional curved spaces with the help of tensor algebra and calculus. It includes space-times consisting of metric and affine structures, determined by the metric tensor \(g_{ij}\) and the Christoffel symbols \(\Gamma^{\alpha}_{ij}\). Einstein used Riemannian geometry as a mathematical tool to describe the curved space-time generated by the gravitational field in the universe. The four crucial tests of GR and the FLRW cosmological model, with its unavoidable initial big bang singularity, tell the success story of GR. In the last few decades, studies have indicated that the universe is expanding and accelerating. This is confirmed by cosmological observations such as Type Ia supernovae [4; 5], cosmic microwave background observations [6] and Planck data [7]. Scientists have modified general relativity in various ways to support that the universe is expanding and accelerating. Some of the modified theories include \(f(R)\) where \(R\) is the Ricci scalar [8; 9; 10; 11; 12; 13; 14; 15], \(f(R,T)\), an extension of \(f(R)\) gravity with the trace (\(T\)) of the energy-momentum tensor [16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26], \(f(G)\) where \(G\) is the Gauss-Bonnet invariant [27; 28; 29] and \(f(R,G)\) gravity [30; 31]. In the year 1916, German mathematician Hermann Weyl [32] proposed an extension of Riemannian geometry that unified the theory of gravity and electromagnetism. Weyl introduced an intrinsic vector field \(w^{\alpha}\) and a semi-metric connection \(\tilde{\Gamma}^{\alpha}_{ij}\) to define parallel transportation of a vector from one point to another in such a way that both its direction and magnitude change. 
However, it faced withdrawal due to Einstein's criticism of the theory. After this, an extension of general relativity was proposed by Cartan, in which he introduced a torsion field [33]. This led to the new extension of general relativity known as the Einstein-Cartan theory [34; 35; 36; 37]. Around the same time, Weitzenböck introduced a theory based on Weitzenböck space, with torsion and zero Riemann curvature [38]. The idea leads to the concept of distant parallelism, which is known as teleparallelism or absolute parallelism. The primary idea used in the teleparallel formulation of gravity is to use tetrad vectors, instead of the metric \(g_{ij}\) of the spacetime, to describe the gravitational phenomenon. This led to the concept of the teleparallel equivalent of general relativity (TEGR) [39]. So in the following years, scientists like Dirac, Cartan, Weitzenböck, and many more started working on Weyl geometry-based spaces and proposed extensions of Weyl gravity such as the Weyl-Dirac Lagrangian [40; 41; 42; 43], Weyl-Cartan theory [44; 45], and Weyl-Cartan-Weitzenböck theory [46; 47]. In fact, there are two geometrically equivalent frameworks of Riemannian geometry. The first is the teleparallel framework, in which the curvature and the nonmetricity are zero, i.e. it is entirely based on the torsion. The second is the geometry that is completely described by the nonmetricity (\(Q\)), which is known as symmetric teleparallel gravity [49]. The symmetric teleparallel gravity was further extended into \(f(Q)\) theory [48]. Beltran et al. [50] studied the cosmological implications of \(f(Q)\) gravity. Mandal et al. [51; 52] analyzed the cosmography in \(f(Q)\) gravity and discussed the energy conditions of \(f(Q)\) cosmology, respectively. W. Khyllep et al. [53] investigated the cosmological behavior at the background and perturbation level of the power-law model of \(f(Q)\) theory. Recently, Kun Hu et al. [60] constructed the bounce inflation model for the early universe and calculated the tensor perturbations (namely, primordial gravitational waves) of the model. Many other recent works in \(f(Q)\) gravity include [54; 55; 56; 57; 58; 59]. We propose to develop a cosmological model of the universe based on Weyl type \(f(Q)\) gravity which carries the salient feature that in the past the universe was decelerating; after a certain epoch, it started accelerating, and the acceleration continues at present. For this, the particular functional form of \(f(Q)\) gravity is taken as \(f(Q)={H_{0}}^{2}\,(\alpha_{1}+\alpha_{2}\log(H_{0}^{-2}Q))\). We have solved numerically the Weyl type \(f(Q)\) gravity field equations and have obtained numerical solutions for the Hubble and deceleration parameters, the distance modulus, and the apparent magnitudes of stellar objects like SNIa supernovae. We have also obtained numerical solutions for the Weyl vector, the non-metricity scalar, and the Lagrangian multiplier \(\lambda\) appearing in the action of \(f(Q)\) gravity. We have compared our theoretical solutions with the error bar plots of the observed Hubble data set of 77 points, the 580-point SNIa distance modulus data set, and the 1048-point supernova Pantheon data set of apparent magnitudes. It is found that our results fit well with the observed data set points. The model envisages a unique feature: although the universe is filled with a perfect fluid as dust whose pressure is zero, the Weyl vector dominance in \(f(Q)\) creates acceleration in it. The paper is structured as follows.
In Sec. II, we present the Weyl type \(f(Q)\) gravity action and field equations. In Sec. III, we solve the field equations for the FLRW space-time by taking the energy-momentum tensor as that of a perfect fluid and obtain numerical solutions for the Hubble and deceleration parameters, the distance modulus, and the apparent magnitudes of stellar objects like SNIa supernovae. We also obtain numerical solutions for the Weyl vector \(w\) and the Lagrangian multiplier \(\lambda\) appearing in the action of \(f(Q)\) gravity. In this section, we also compare the cosmological parameters with the standard \(\Lambda\)CDM. In Sec. IV, we compare our theoretical solutions with the observed Hubble data set of 77 points, the 580-point SNIa distance modulus data set, and the 1048-point supernova Pantheon data set of apparent magnitudes. Finally, in the last Sec. V, we conclude the work.

## II Field equations of the Weyl type \(f(Q)\) theory

The action in Weyl-type \(f(Q)\) gravity is given by [61] \[S=\int\bigg{[}k^{2}f(Q)-\frac{1}{4}W_{ij}W^{ij}-\frac{1}{2}m^{2}w_{i}w^{i}+\lambda(R+6\nabla_{\alpha}w^{\alpha}-6w_{\alpha}w^{\alpha})+L_{m}\bigg{]}\sqrt{-g}d^{4}x \tag{1}\] where \(k^{2}\equiv\frac{1}{16\pi G}\), \(m\) is the mass of the particle associated with the intrinsic vector field \(w_{i}\) of Weyl geometry, \(L_{m}\) is the matter Lagrangian and \(f(Q)\) is a general function of the non-metricity scalar \(Q\). The second and third terms represent the ordinary kinetic term and the mass term of the vector field, respectively. The Lagrangian multiplier scalar \(\lambda\) is introduced to make the Weyl geometry a curved space-time. A brief introduction to Weyl geometry, which defines the intrinsic vector field \(w_{i}\), the non-metricity scalar \(Q\) and the tensor \(W^{ij}\), is given in the Appendix. We obtain the following Proca-type equation by varying the action (1) with respect to the vector field \(w\): \[\nabla^{j}W_{ij}-(m^{2}+12k^{2}f_{Q}+12\lambda)w_{i}=6\nabla_{i}\lambda. \tag{2}\] If we compare Eq. (2) with the standard Proca equation, we may define an effective dynamical mass of the vector field as follows: \[m_{eff}^{2}=m^{2}+12k^{2}f_{Q}+12\lambda \tag{3}\] By varying the action (1) with respect to the metric, we obtain the field equation \[\frac{1}{2}(T_{ij}+S_{ij})=-\frac{k^{2}}{2}g_{ij}f-6k^{2}f_{Q}w_{i}w_{j}+\lambda(R_{ij}-6w_{i}w_{j}+3g_{ij}\nabla_{\gamma}w^{\gamma})+3g_{ij}w^{\gamma}\nabla_{\gamma}\lambda-6w_{(i}\nabla_{j)}\lambda+g_{ij}\Box\lambda-\nabla_{j}\nabla_{i}\lambda, \tag{4}\] where \(f_{Q}\) is the derivative of \(f\) with respect to \(Q\), \(T_{ij}\) is the energy-momentum tensor of the content of the universe, \[T_{ij}\equiv-\frac{2}{\sqrt{-g}}\frac{\delta(\sqrt{-g}L_{m})}{\delta g^{ij}} \tag{5}\] and \(S_{ij}\) represents the re-scaled energy-momentum tensor of the free Proca field, \[S_{ij}=-\frac{1}{4}g_{ij}W_{\eta\alpha}W^{\eta\alpha}+W_{i\eta}W_{j}^{\eta}-\frac{1}{2}m^{2}g_{ij}w_{\eta}w^{\eta}+m^{2}w_{i}w_{j}. \tag{6}\]

## III Cosmological evolution in the flat FLRW metric

We consider the following spatially flat Friedmann-Lemaître-Robertson-Walker (FLRW) metric, which describes the cosmological evolution in a flat geometry: \[ds^{2}=-dt^{2}+a^{2}(t)(dx^{2}+dy^{2}+dz^{2}). \tag{7}\] where \(a(t)\) is the scale factor. The vector field \(w_{i}\) is taken as \(w_{i}=[0,0,0,\psi(t)]\). Therefore, \(w^{2}=w_{i}w^{i}=-\psi^{2}(t)\) and \(Q=-6w^{2}=6\psi^{2}(t)\). The Lagrangian of the perfect fluid is taken as \(L_{m}=p\).
Therefore, \[T_{j}^{i}=(p+\rho)u^{i}u_{j}+p\delta_{j}^{i}=diag(p,p,p,-\rho), \tag{8}\] where \(p\) and \(\rho\) are the pressure and matter-energy density of the perfect fluid. We have considered the velocity vector \(u^{i}=(0,0,0,1)\), so that \(u^{i}u_{i}=-1\). The generalized Proca equations for the metric Eq. (7) can be written as \[\dot{\psi}=\dot{H}+2H^{2}+\psi^{2}-3H\psi, \tag{9}\] \[\dot{\lambda}=\bigg{(}-\frac{1}{6}m^{2}-2k^{2}f_{Q}-2\lambda\bigg{)}\psi=-\frac{1}{6}m_{eff}^{2}\psi, \tag{10}\] \[\partial_{i}\lambda=0. \tag{11}\] The field equations Eq. (4) for the metric Eq. (7) are obtained as \[\frac{1}{2}\rho=\frac{k^{2}}{2}f-\bigg{(}6k^{2}f_{Q}+\frac{1}{4}m^{2}\bigg{)}\psi^{2}-3\lambda(\psi^{2}-H^{2})-3\dot{\lambda}(\psi-H), \tag{12}\] \[-\frac{1}{2}p=\frac{k^{2}}{2}f+\frac{m^{2}\psi^{2}}{4}+\lambda(3\psi^{2}+3H^{2}+2\dot{H})+(3\psi+2H)\dot{\lambda}+\ddot{\lambda}. \tag{13}\] Using Eqs. (9), (10) and (11), Eqs. (12) and (13) are simplified to \[\frac{1}{2}\rho=\frac{k^{2}}{2}f+\frac{m^{2}\psi^{2}}{4}+3\lambda(H^{2}+\psi^{2})-\frac{1}{2}m_{eff}^{2}H\psi, \tag{14}\] \[\frac{1}{2}(p+\rho)=-2\lambda\bigg{(}1-\frac{m_{eff}^{2}}{12\lambda}\bigg{)}\dot{H}+\frac{m_{eff}^{2}}{3}(H^{2}+\psi^{2}-2H\psi)+2k^{2}\dot{f}_{Q}\psi. \tag{15}\] We introduce the following set of dimensionless variables (\(\tau\), \(h\), \(\tilde{\rho}\), \(\tilde{\lambda}\), \(\Psi\), \(\tilde{Q}\)) to simplify the field equations: \[\tau=H_{0}t,\ \ H=H_{0}h,\ \ \rho=6k^{2}H_{0}^{2}\tilde{\rho},\ \ \lambda=k^{2}\tilde{\lambda},\ \ \psi=H_{0}\Psi,\ \ Q=H_{0}^{2}\tilde{Q},\ \ f=H_{0}^{2}F. \tag{16}\] where \(H_{0}\) represents the present value of the Hubble parameter. Eqs. (9), (10), (14) and (15) become \[\frac{d\Psi}{d\tau}=\frac{dh}{d\tau}+2h^{2}+\Psi^{2}-3h\Psi, \tag{17}\] \[\frac{d\tilde{\lambda}}{d\tau}=-\bigg{(}\frac{M^{2}}{6}+2F_{\tilde{Q}}+2\tilde{\lambda}\bigg{)}\Psi=-\frac{1}{6}M_{eff}^{2}\Psi, \tag{18}\] \[\frac{dh}{d\tau}=\frac{1}{1-M_{eff}^{2}/12\tilde{\lambda}}\bigg{(}-\frac{3}{2}\gamma\frac{\tilde{\rho}}{\tilde{\lambda}}+\frac{\Psi}{\tilde{\lambda}}\frac{dF_{\tilde{Q}}}{d\tau}+\frac{M_{eff}^{2}}{6\tilde{\lambda}}(h^{2}+\Psi^{2}-2h\Psi)\bigg{)}, \tag{19}\] \[\tilde{\rho}=\frac{1}{6}\bigg{(}F+\frac{M^{2}\Psi^{2}}{2}+6\tilde{\lambda}(h^{2}+\Psi^{2})-M_{eff}^{2}h\Psi\bigg{)}. \tag{20}\] where \[M_{eff}^{2}=M^{2}+12F_{\tilde{Q}}+12\tilde{\lambda}\ \ \ \ \ \text{with}\ \ \ \ \ M^{2}=\frac{m^{2}}{k^{2}} \tag{21}\] To solve the above field equations, we consider the following particular form of \(f(Q)\): \(f(Q)={H_{0}}^{2}\,(\alpha_{1}+\alpha_{2}\log(H_{0}^{-2}Q))\), where \(\alpha_{1}\) and \(\alpha_{2}\) are arbitrary constants. So, from Eq. (16), we get \(F(\tilde{Q})=\alpha_{1}+\alpha_{2}\log(\tilde{Q})\) and \(F_{\tilde{Q}}=\frac{\alpha_{2}}{\tilde{Q}}=\frac{\alpha_{2}}{6\Psi^{2}}\). By using the transformation \(\dot{z}=-(1+z)H\), the field Eqs.
(17), (18), (19), and (20) are expressed in terms of the red-shift \(z\) as follows: \[-(1+z)h(z)\frac{d\Psi(z)}{dz}=-(1+z)h(z)\frac{dh}{dz}+2h^{2}(z)+\Psi^{2}(z)-3h(z)\Psi(z), \tag{22}\] \[(1+z)h(z)\frac{d\tilde{\lambda}}{dz}=\frac{1}{6}M_{eff}^{2}(z)\Psi(z), \tag{23}\] \[-(1+z)h(z)\frac{dh(z)}{dz}=\frac{1}{1-M_{eff}^{2}(z)/12\tilde{\lambda}(z)}\bigg{(}-\frac{3}{2}\gamma\frac{\tilde{\rho}(z)}{\tilde{\lambda}(z)}+\frac{\Psi(z)}{\tilde{\lambda}(z)}(-(1+z)h(z))\frac{dF_{\tilde{Q}}}{dz}+\frac{M_{eff}^{2}}{6\tilde{\lambda}(z)}(h^{2}(z)+\Psi^{2}(z)-2h(z)\Psi(z))\bigg{)}, \tag{24}\] \[\tilde{\rho}(z)=\frac{1}{6}\bigg{(}F+\frac{M^{2}\Psi^{2}(z)}{2}+6\tilde{\lambda}(z)(h^{2}(z)+\Psi^{2}(z))-M_{eff}^{2}(z)h(z)\Psi(z)\bigg{)}. \tag{25}\] where \[M_{eff}^{2}(z)=M^{2}+2\frac{\alpha_{2}}{\Psi^{2}(z)}+12\tilde{\lambda}(z) \tag{26}\] We solve the above system of differential Eqs. (22)-(24) numerically by taking the initial values \(h(0)=1\), \(\tilde{\lambda}(0)=0.568\) and \(\Psi(0)=0.555\). The numerical solutions of the Hubble parameter \(h(z)\), deceleration parameter \(q(z)\), Lagrange multiplier \(\tilde{\lambda}(z)\), Weyl vector \(\Psi(z)\) and the density parameter \(\tilde{\rho}\) are described and depicted in the form of plots in Figs. 1a, 1b, 2a, 2b and 3. In each figure, we have presented five plots corresponding to the five different set values of the 3-tuple (\(\alpha_{1}\), \(\alpha_{2}\), and the mass of the Weyl field \(M\)): \((1,-1,0.95)\), \((-2.2,-5,5)\), \((2,-3,4)\), \((-1,-3,4)\) and \((-1.5,-2.5,3)\). In Fig. 1a, it is observed that the Hubble parameter is monotonically increasing with redshift (\(z\)), which means that it is decreasing over time (\(t\)) in all the cases. It is also observed that our models are close to the standard \(\Lambda\)CDM model in the redshift range \((0,2)\). However, at higher redshift, i.e. \(z>2\), there is a significant difference in the growth of the Hubble parameter between our models and the \(\Lambda\)CDM model. We recall the expressions for the Hubble parameter \(H(z)\) and the deceleration parameter \(q(z)\) in the \(\Lambda\)CDM model, \[H(z)=H_{0}\sqrt{\Omega_{DM}(1+z)^{3}+\Omega_{\Lambda}} \tag{27}\] and \[q(z)=-1+\frac{3\Omega_{DM}(1+z)^{3}}{2(\Omega_{\Lambda}+\Omega_{DM}(1+z)^{3})} \tag{28}\] where \(\Omega_{DM}\) and \(\Omega_{\Lambda}\) are the density parameters of the cold dark matter (pressureless) and the dark energy (also known as the cosmological constant), respectively. The numerical values of the density parameters are taken as \(\Omega_{DM}\equiv 0.3\) and \(\Omega_{\Lambda}\equiv 0.7\). The deceleration parameter (\(q\)) in terms of the Hubble parameter (\(H(z)\)) and the red-shift \(z\) is obtained as \[q(z)=(1+z)\frac{1}{H(z)}\frac{dH(z)}{dz}-1 \tag{29}\] Fig. 1b describes the evolution of the deceleration parameter \(q(z)\) for all five values of the model parameters (\(\alpha_{1}\), \(\alpha_{2}\), and \(M\)). It is found that the deceleration parameter \(q(z)\) increases with redshift (\(z\)) and decreases with time (\(t\)). We also observe that all the plots lie more or less near the \(\Lambda\)CDM model. There is a phase transition from deceleration in the past to acceleration at present. The values of the deceleration parameter at \(z=0\) for the different cases are \(-1.04\), \(-0.55\), \(-0.69\), \(-0.44\), and \(-0.54\) approximately, and the corresponding transition redshifts are obtained as \(0.2377\), \(0.4547\), \(0.3447\), \(0.635\) and \(0.4333\) (approximately).
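For concreteness, the following is a minimal Python sketch (names, the SciPy integrator, and the barotropic index \(\gamma=1\) for dust are our assumptions, not taken from the text) of how the system (22)-(24) can be integrated in redshift. Since \(dF_{\tilde{Q}}/d\tau\) contains \(d\Psi/d\tau\), which in turn contains \(dh/d\tau\), the sketch first solves the resulting linear equation for \(dh/d\tau\).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch only: one of the paper's parameter sets (alpha1, alpha2, M) = (1, -1, 0.95);
# gamma = 1 is an assumed barotropic index for dust (p = 0).
alpha1, alpha2, M, gamma = 1.0, -1.0, 0.95, 1.0

def rhs(z, y):
    h, Psi, lam = y
    F = alpha1 + alpha2 * np.log(6.0 * Psi**2)          # F(Q~) with Q~ = 6 Psi^2
    Meff2 = M**2 + 2.0 * alpha2 / Psi**2 + 12.0 * lam   # Eq. (26)
    rho = (F + 0.5 * M**2 * Psi**2                       # constraint, Eq. (20)
           + 6.0 * lam * (h**2 + Psi**2) - Meff2 * h * Psi) / 6.0
    # dh/dtau enters its own right-hand side through dF_Q~/dtau =
    # -(alpha2 / (3 Psi^3)) dPsi/dtau, with dPsi/dtau = dh/dtau + D;
    # solve the linear equation h' (1 - A B) = A (C + B D) for h'.
    A = 1.0 / (1.0 - Meff2 / (12.0 * lam))
    B = -alpha2 / (3.0 * lam * Psi**2)
    D = 2.0 * h**2 + Psi**2 - 3.0 * h * Psi
    C = (-1.5 * gamma * rho / lam
         + Meff2 / (6.0 * lam) * (h**2 + Psi**2 - 2.0 * h * Psi))
    dh = A * (C + B * D) / (1.0 - A * B)                 # from Eq. (19)
    dPsi = dh + D                                        # Eq. (17)
    dlam = -Meff2 * Psi / 6.0                            # Eq. (18)
    jac = -1.0 / ((1.0 + z) * h)                         # d/dtau -> d/dz
    return [jac * dh, jac * dPsi, jac * dlam]

# Initial values h(0) = 1, Psi(0) = 0.555, lambda~(0) = 0.568, as in the text.
sol = solve_ivp(rhs, (0.0, 2.5), [1.0, 0.555, 0.568], dense_output=True, rtol=1e-8)
print("h(2.5) =", sol.y[0, -1])
```

The deceleration parameter of Eq. (29) can then be recovered from the dense output by finite differences of \(h(z)\).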
Fig. 2a depicts the evolution of the Lagrangian multiplier \(\tilde{\lambda}\). It is observed that it decreases with redshift (\(z\)), i.e. increases with time (\(t\)). For the different values of the model parameters (\(\alpha_{1}\), \(\alpha_{2}\), and \(M\)), the graph behaves in a similar manner. However, \(\tilde{\lambda}(z)\) becomes negative approximately after \(z>7\). Fig. 2b describes the evolution of the Weyl vector component \(\Psi(z)\) with respect to redshift (\(z\)) for all the cases. It initially decreases and then increases with higher values of redshift \(z\) in the range \(z\in(0,2)\), becoming an increasing function after redshift \(z>0.5\). Fig. 3 depicts the evolution of the matter density \(\tilde{\rho}(z)\). The matter density is monotonically increasing with increasing values of redshift (\(z\)), which means that it is decreasing with time (\(t\)). However, the matter density depends entirely on the model parameters \(\alpha_{1}\), \(\alpha_{2}\) and \(M\).

Figure 1: The evolution of the Hubble parameter and the deceleration parameter over redshift \(z\), shown in Figs. 1a and 1b respectively. The five plots in each figure correspond to the five different set values of the 3-tuple (\(\alpha_{1}\), \(\alpha_{2}\), and the mass of the Weyl field \(M\)): \((1,-1,0.95)\) in blue, \((-2.2,-5,5)\) in cyan, \((2,-3,4)\) in brown, \((-1,-3,4)\) in purple and \((-1.5,-2.5,3)\) in orange. The sixth, red-colored plot is that of the \(\Lambda\)CDM model, included for comparison with the standard model.

Figure 3: The plot of the energy-matter density \(\tilde{\rho}(z)\) vs. redshift \(z\), with the same five parameter sets and color coding as in Figure 1.

## IV Observational data analysis

In this section, we use the three observed data sets, namely the observed Hubble data set of 77 points, the 580-point SNIa distance modulus data set, and the 1048-point supernova Pantheon data set of apparent magnitudes, to compare our theoretical results with the observed data sets with the help of error bar plots. We have also computed the Chi-square to assess the goodness of fit. Fig. 4a contains five theoretical plots of the Hubble parameter \(H(z)\) corresponding to the five different set values of the model parameters (\(\alpha_{1}\), \(\alpha_{2}\), and \(M\)) and a red-colored plot corresponding to the \(\Lambda\)CDM model, along with the observed Hubble parameter data set points and the corresponding error bars for redshifts in the range (\(0\leq z\leq 2.5\)). It is observed that our theoretical plots pass closely to the data set points as well as to the \(\Lambda\)CDM plot. We also calculated the following Chi-square to assess statistically the goodness of fit, and we have found \(\chi^{2}=77.908\), \(52.4227\), \(23.555\), \(31.6531\) and \(47.2343\) respectively, which is a good fit: \[\chi^{2}=\sum_{i=1}^{77}\frac{(H_{th}(z_{i})-H_{ob}(z_{i}))^{2}}{\sigma(z_{i})^{2}}, \tag{30}\] where \(H_{th}=H_{0}h_{th}\) and \(H_{ob}\) are the theoretical and observational values of the Hubble parameter at redshift \(z\). \(H_{0}\) is the current value of the Hubble parameter and is taken as \(70\ \mathrm{km/sec/Mpc}\).
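The statistic of Eq. (30) (and likewise Eqs. (33) and (35) below) is a sum of squared, \(\sigma\)-weighted residuals; a minimal sketch, with hypothetical data arrays, is:

```python
import numpy as np

def chi_square(theory, observed, sigma):
    """Eq. (30): sum over data points of ((theory - observed) / sigma)^2."""
    theory, observed, sigma = map(np.asarray, (theory, observed, sigma))
    return float(np.sum(((theory - observed) / sigma) ** 2))

# Hypothetical usage: z_data, H_obs, sigma_H would come from the 77-point
# observed Hubble table, and h_model from the integration sketched above.
# chi2_H = chi_square(70.0 * h_model(z_data), H_obs, sigma_H)
```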
The luminosity distance (\(d_{L}\)) plays a very important role in astronomy as it determines the distance through the luminosity of a stellar object. The (dimensionless) luminosity distance of an object is given by [62] \[D_{l}(z)=(1+z)H_{0}\int_{0}^{z}\frac{dz^{*}}{H(z^{*})}, \tag{31}\] and the distance modulus of a luminous object is related to the luminosity distance through the following equation: \[\mu(z)=m_{b}-M=5\log_{10}D_{l}(z)+\mu_{0}, \tag{32}\] where \(m_{b}\) and \(M\) are the apparent and absolute magnitudes of the object and \(\mu_{0}=25+5\log_{10}\big{(}\frac{c}{H_{0}}\big{)}\). Fig. 4b contains five theoretical plots of the distance modulus \(\mu(z)\) corresponding to the five sets of values of the model parameters (\(\alpha_{1}\), \(\alpha_{2}\), and \(M\)) and a red-colored plot corresponding to the \(\Lambda\)CDM model. It also carries the 580 Union 2.1 SNIa distance modulus data set points and error bars for redshifts in the range (\(0\leq z\leq 1.5\)). It is observed that our theoretical plots pass closely to the data set points as well as to the \(\Lambda\)CDM plot. We also calculate the following Chi-square to assess statistically the goodness of fit, and we have found \(\chi^{2}=598.321\), \(589.545\), \(575.795\), \(585.698\) and \(585.227\) respectively, which is a good fit: \[\chi^{2}_{\mu}=\sum_{i=1}^{580}\frac{(\mu_{th}(z_{i})-\mu_{ob}(z_{i}))^{2}}{\sigma(z_{i})^{2}}, \tag{33}\] where \(\mu_{th}\) and \(\mu_{ob}\) are the theoretical and observational values of the distance modulus at redshift \(z\). The apparent magnitude can be calculated from Eq. (32): \[m_{b}=M+\mu(z)=-19.07+\mu(z) \tag{34}\] where \(M\) is the absolute magnitude of the object and \(\mu\) is the distance modulus. Fig. 5 contains five theoretical plots of the apparent magnitude \(m_{b}(z)\) corresponding to the five sets of values of the model parameters (\(\alpha_{1}\), \(\alpha_{2}\), and \(M\)) and a red-colored plot corresponding to the \(\Lambda\)CDM model. The figure also carries the 1048 Pantheon data set points of apparent magnitudes and error bars for redshifts in the range (\(0\leq z\leq 2.26\)). It is observed that our theoretical plots pass closely to the data set points as well as to the \(\Lambda\)CDM plot. We also calculate the following Chi-square to assess statistically the goodness of fit, and we have found \(\chi^{2}=5855.05\), \(5123.7\), \(6720.94\), \(4915.24\) and \(5181.11\) respectively, which is a good fit: \[\chi^{2}_{m_{b}}=\sum_{i=1}^{1048}\frac{(m_{bth}(z_{i})-m_{bob}(z_{i}))^{2}}{\sigma(z_{i})^{2}}, \tag{35}\] where \(m_{bth}\) and \(m_{bob}\) are the theoretical and observational values of the apparent magnitude at redshift \(z\).
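Given a numerical \(h(z)\), the chain of Eqs. (31), (32) and (34) reduces to a single quadrature. A minimal sketch follows; the grid and the placeholder \(h\equiv 1\) are our illustrative choices, to be replaced by the ODE solution above.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

H0, c, M_abs = 70.0, 299792.458, -19.07    # km/s/Mpc, km/s, SNIa magnitude of Eq. (34)

z = np.linspace(0.0, 2.26, 2000)
h = np.ones_like(z)                         # placeholder; use sol.sol(z)[0] instead

# Eq. (31) in dimensionless form: D_l(z) = (1 + z) * int_0^z dz*/h(z*)
integral = cumulative_trapezoid(1.0 / h, z)  # aligned with z[1:], avoids log10(0)
D_l = (1.0 + z[1:]) * integral

mu0 = 25.0 + 5.0 * np.log10(c / H0)         # as defined below Eq. (32)
mu = 5.0 * np.log10(D_l) + mu0              # distance modulus, Eq. (32)
m_b = M_abs + mu                            # apparent magnitude, Eq. (34)
```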
## V Conclusion

In this paper, we have explored an FLRW accelerating universe model in the Weyl type \(f(Q)\) gravity by taking the particular functional form of \(f(Q)\) as \(f(Q)={H_{0}}^{2}\,(\alpha_{1}+\alpha_{2}\log({H_{0}^{-2}}Q))\). We solve the field equations numerically by taking the initial values \(h(0)=1\), \(\tilde{\lambda}(0)=0.568\) and \(\Psi(0)=0.555\) and five different set values of the 3-tuple parameters (\(\alpha_{1}\), \(\alpha_{2}\), and the mass of the Weyl field \(M\)): \((1,-1,0.95)\), \((-2.2,-5,5)\), \((2,-3,4)\), \((-1,-3,4)\) and \((-1.5,-2.5,3)\). The numerical solutions of the Hubble parameter \(h(z)\), deceleration parameter \(q(z)\), Lagrange multiplier \(\tilde{\lambda}(z)\), Weyl vector \(\Psi(z)\) and the density parameter \(\tilde{\rho}\) are described and depicted in the form of plots in Figs. 1a, 1b, 2a, 2b and 3. In each figure, we have presented five plots corresponding to the five different set values of the parameters (\(\alpha_{1}\), \(\alpha_{2}\), and \(M\)). The salient features of the model are described in brief as follows:

1. The model shows a transition from deceleration in the past to acceleration at present, which means that the deceleration parameter \(q\) was positive in the past and is negative at present. The values of the deceleration parameter at \(z=0\) for the different cases are \(-1.04\), \(-0.55\), \(-0.69\), \(-0.44\), and \(-0.54\) approximately, and the corresponding transition redshifts are obtained as 0.2377, 0.4547, 0.3447, 0.635 and 0.4333 (approximately).
2. We have solved the Weyl type \(f(Q)\) gravity field equations numerically and have obtained numerical solutions for the Hubble and deceleration parameters, the distance modulus, and the apparent magnitudes of stellar objects like SNIa supernovae.
3. We have also obtained numerical solutions for the Weyl vector (\(w\)), the non-metricity scalar (\(Q\)), and the Lagrangian multiplier (\(\lambda\)) appearing in the action of \(f(Q)\) gravity.
4. We have compared the theoretical results for the Hubble and deceleration parameters with those of the standard \(\Lambda\)CDM model. From Figs. 1a and 1b, it is found that our models coincide with the standard \(\Lambda\)CDM in the redshift range \(z\in(0,2)\).
5. In order to make our model compatible on observational grounds, we use three types of data sets: the observed Hubble data set of 77 points, the 580-point Union 2.1 SNIa distance modulus data set, and the 1048-point supernova Pantheon data set of apparent magnitudes. We have compared our theoretical results with the error bar plots of the three data sets described earlier, and it is found that our results fit well with the observed data set points.
6. The model envisages a unique feature: although the universe is filled with a perfect fluid as dust whose pressure is zero, the Weyl vector dominance in \(f(Q)\) creates acceleration in it.

Figure 4: The two figures contain error bar plots of the 77-point observational Hubble \(H(z)\) data set and the 580-point Union 2.1 SNIa distance modulus data set vs. redshift \(z\). The five regular plots in each figure are our theoretical plots of the Hubble parameter and the distance modulus, with the same five parameter sets and color coding as in Figure 1; the red-colored plot represents the \(\Lambda\)CDM model.

Figure 5: The figure contains error bar plots of the 1048 Pantheon data points of apparent magnitudes for redshifts in the range (\(0\leq z\leq 2.26\)). The five regular plots are our theoretical apparent magnitude plots, with the same five parameter sets and color coding as in Figure 1; the red-colored plot represents the \(\Lambda\)CDM model.

## Appendix: Weyl geometry in brief
Riemannian geometry permits parallel transportation of a vector along an infinitesimal loop in such a way that its magnitude remains constant whereas its direction may change as per the intrinsic properties of the curved space-time. We may see it as follows: the variation of the components of a vector \(v^{i}\) on parallel transportation is given by \[\delta v^{i}=v^{k}R^{i}_{klj}s^{lj} \tag{36}\] where \(s^{lj}\) is the area of the loop and \(R^{i}_{klj}\) is the Riemannian curvature tensor. It can be verified that the infinitesimal change in the magnitude of the vector \(v^{k}\) on parallel displacement through the loop is nil: \[\delta(g_{ij}v^{i}v^{j})=2v^{k}v^{j}R_{jkl\eta}s^{l\eta}=0 \tag{37}\] Weyl introduced an intrinsic vector field \(w_{i}\) and a semi-metric connection \(\tilde{\Gamma}^{\alpha}_{ij}\), which is defined as \[\tilde{\Gamma}^{\alpha}_{ij}\equiv\Gamma^{\alpha}_{ij}+g_{ij}w^{\alpha}-\delta^{\alpha}_{i}w_{j}-\delta^{\alpha}_{j}w_{i} \tag{38}\] where \(\Gamma^{\alpha}_{ij}\) is the Christoffel symbol with respect to the metric \(g_{ij}\). Semi-metric means that the connection has both metric and vector components. The curvature tensor corresponding to the newly defined semi-metric connection is denoted \(\tilde{R}_{ij\alpha\beta}\). It has both a symmetric and an anti-symmetric part, \[\tilde{R}_{ij\alpha\beta}=\tilde{R}_{(ij)\alpha\beta}+\tilde{R}_{[ij]\alpha\beta}, \tag{39}\] where \[\tilde{R}_{[ij]\alpha\beta}=R_{ij\alpha\beta}+2\nabla_{\alpha}w_{[i}g_{j]\beta}+2\nabla_{\beta}w_{[j}g_{i]\alpha}+2w_{\alpha}w_{[i}g_{j]\beta}+2w_{\beta}w_{[j}g_{i]\alpha}-2w^{2}g_{\alpha[i}g_{j]\beta}, \tag{40}\] and \[\tilde{R}_{(ij)\alpha\beta}=g_{ij}W_{\alpha\beta} \tag{41}\] respectively, with \[W_{ij}=\nabla_{j}w_{i}-\nabla_{i}w_{j}. \tag{42}\] In the Weyl geometry, the infinitesimal change in the magnitude of the vector \(v^{i}\) on parallel displacement through the loop is not zero: \[\delta|v|=|v|W_{l\eta}s^{l\eta}, \tag{43}\] where \(|v|^{2}=v_{i}v^{i}\). Moreover, the covariant derivative of the metric tensor is not zero under the semi-metric affine connection. We get the following expression for it: \[Q_{\alpha ij}\equiv\tilde{\nabla}_{\alpha}g_{ij}=\partial_{\alpha}g_{ij}-\tilde{\Gamma}^{\eta}_{\alpha i}g_{\eta j}-\tilde{\Gamma}^{\eta}_{\alpha j}g_{\eta i}=2w_{\alpha}g_{ij}. \tag{44}\] We note that in Riemannian geometry, the covariant derivative of the metric tensor is zero, i.e. \(\nabla_{\alpha}g_{ij}=0\). The tensor \(Q_{\alpha ij}\) is a three-indexed tensor. It cannot be fully contracted with the help of the metric tensor \(g_{ij}\) (only even-order tensors can be contracted to a scalar). So an alternative non-metricity scalar \(Q\) is defined as follows: \[Q\equiv-g^{ij}\bigg{(}L^{\alpha}{}_{\beta i}L^{\beta}{}_{j\alpha}-L^{\alpha}{}_{\beta\alpha}L^{\beta}{}_{ij}\bigg{)}. \tag{45}\] where \(L^{\alpha}{}_{ij}\) is defined as \[L^{\alpha}{}_{ij}=-\frac{1}{2}g^{\alpha\gamma}\bigg{(}Q_{i\gamma j}+Q_{j\gamma i}-Q_{\gamma ij}\bigg{)}. \tag{46}\] From Eqs. (44) - (46), we get the following important relation: \[Q=-6w^{2}. \tag{47}\]
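The contraction leading to Eq. (47) is easy to mistype; the following is a sketch of a brute-force symbolic check of \(Q=-6w^{2}\) from Eqs. (44)-(46). The diagonal test metric is an arbitrary choice of ours; the identity itself is metric-independent.

```python
import sympy as sp

a = sp.symbols('a', positive=True)
g = sp.diag(-1, a**2, a**2, a**2)            # test metric g_{ij}
gi = g.inv()                                 # g^{ij}
w = sp.symbols('w0:4')                       # covariant components w_i
dim = 4

Qt = lambda al, i, j: 2 * w[al] * g[i, j]    # Eq. (44): Q_{aij} = 2 w_a g_{ij}

# Eq. (46): L^a_{ij} = -1/2 g^{ag} (Q_{igj} + Q_{jgi} - Q_{gij})
L = [[[-sp.Rational(1, 2) * sum(gi[al, gm] * (Qt(i, gm, j) + Qt(j, gm, i) - Qt(gm, i, j))
                                for gm in range(dim))
       for j in range(dim)] for i in range(dim)] for al in range(dim)]

# Eq. (45): Q = -g^{ij} (L^a_{bi} L^b_{ja} - L^a_{ba} L^b_{ij})
Q = -sum(gi[i, j] * sum(L[al][be][i] * L[be][j][al] - L[al][be][al] * L[be][i][j]
                        for al in range(dim) for be in range(dim))
         for i in range(dim) for j in range(dim))

w2 = sum(gi[i, j] * w[i] * w[j] for i in range(dim) for j in range(dim))
print(sp.simplify(Q + 6 * w2))               # expected output: 0
```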
2304.09693
A model for seagrass species competition: dynamics of the symmetric case
We propose a general population dynamics model for two seagrass species growing and interacting in two spatial dimensions. The model includes spatial terms accounting for the clonal growth characteristics of seagrasses, and coupling between species through the net mortality rate. We consider both intraspecies and interspecies facilitative and competitive interactions, allowing density-dependent interaction mechanisms. Here we study the case of very similar species with reciprocal interactions, which allows reducing the number of the model parameters to just four, and whose bifurcation structure can be considered the backbone of the complete system. We find that the parameter space can be divided into ten regions with qualitatively different bifurcation diagrams. These regimes can be further grouped into just five regimes with different ecological interpretations. Our analysis allows the classifying of all possible density distributions and dynamical behaviors of meadows with two coexisting species.
Pablo Moreno-Spiegelberg, Damià Gomila
2023-04-19T14:35:11Z
http://arxiv.org/abs/2304.09693v1
# A model for seagrass species competition: dynamics of the symmetric case

###### Abstract

We propose a general population dynamics model for two seagrass species growing and interacting in two spatial dimensions. The model includes spatial terms accounting for the clonal growth characteristics of seagrasses, and coupling between species through the net mortality rate. We consider both intraspecies and interspecies facilitative and competitive interactions, allowing density-dependent interaction mechanisms. Here we study the case of very similar species with reciprocal interactions, which allows reducing the number of model parameters to just four, and whose bifurcation structure can be considered the backbone of the complete system. We find that the parameter space can be divided into ten regions with qualitatively different bifurcation diagrams. These regimes can be further grouped into just five regimes with different ecological interpretations. Our analysis allows the classification of all possible density distributions and dynamical behaviors of meadows with two coexisting species.

**Mathematics Subject Classification.** 92D25,35B36,35B32,35K55.

## 1 Introduction

Seagrass meadows are key to marine coastal ecosystems [1]. They provide food, protection, and structural support to many marine species [2]. Moreover, seagrass meadows are an important sink of carbon dioxide [3], protect the coastline against strong waves [4, 5], and contribute to nutrient sedimentation. From a socioeconomic point of view, seagrass ecosystems support fishing and human development. During the last decades, a decline in seagrass beds associated with trawling, pollution, global warming, or competition with invasive species, among other anthropogenic effects, has been observed [6, 7, 8]. Preventive, palliative, and restoration measures must be taken to reduce the consequences of this decline [9, 10]. Not only are seagrasses in danger; about half of all marine ecosystems have also been identified as strongly affected by multiple anthropogenic drivers [11]. No wonder the UN has urgently declared 2021-2030 the "Decade of Ocean Science for Sustainable Development" as well as the "Decade of Ecosystem Restoration". Dynamical models provide a framework to study the meadow receding process and to understand the mechanisms that govern the ecosystem dynamics. This can be used to estimate resilience and to warn about the proximity of tipping points, after which vegetation systems collapse. Furthermore, they can also be used to make predictions about the evolution of the meadows under different scenarios. This provides useful information for making decisions in ecosystem management. Two different approaches have been used to study the dynamics of seagrass meadows. The first is based on microscopic agent-based models, where information on each plant shoot (and apex) is explicitly computed. The dynamics are defined in these systems as a Markovian process at the shoot level, where apices grow and/or branch, generating new shoots and apices, and both die at a given rate [12, 13]. The second approach is based on macroscopic models where only spatial densities are considered [14, 15]. In these models, the evolution of plant density is described by a system of partial differential equations (PDEs). Even if the macroscopic models lack information on individual shoots and the rhizome network, they are computationally more efficient for studying large systems.
Furthermore, bifurcation analysis can be applied to PDEs, providing analytical information about instabilities and tipping points under changing conditions. Interaction between species is a relevant mechanism in seagrass dynamics. While some species of seagrasses coexist in space creating mixed meadows, others arrange in separate monospecific beds with interfaces. Some species have been seen in both arrangements under different conditions, suggesting some kind of transition between these behaviors. Interspecies interaction is then key in determining the evolution of ecosystems with invasive species. In a global change scenario like the one we are currently experiencing, the interaction between native species with different responses to the new conditions, e.g. due to global warming, can also determine the evolution of the ecosystems [16, 17]. Introducing interspecific interactions into current seagrass models is necessary to study this process. So far, in the context of seagrass dynamics, interactions between species have only been studied in microscopic models [18, 19]. The addition of interspecies interaction in macroscopic models of seagrasses is, so far, unexplored. In this work, we present a generalization of a single-species seagrass macroscopic model [15] considering local interspecies interaction. Furthermore, we study in detail the bifurcation diagram of the symmetric case, where the two species are similar and the interaction between them is reciprocal. This simple scenario captures the backbone of the general model and, despite its simplicity, it gives a remarkable variety of scenarios with complex behaviors. These scenarios can be related to biotic interactions between species, while the transitions between them are mediated by abiotic (environmental) changes in the mortality rate.

## 2. The Model

In [15], a simple model to describe meadows of clonal-growth plants was proposed. In that work the evolution of the plant density \(n(\vec{r},t)\) is described by the following partial differential equation: \[\partial_{t}n=-n\omega(n)+d_{0}\nabla^{2}n+d_{1}((\nabla^{2}n)n+||\nabla n||^{2}) \tag{1}\] where \(\omega(n)=-\omega_{b}(n)+\omega_{d}(n)\) is the net death rate, \(\omega_{b}>0\) being the branching rate and \(\omega_{d}>0\) the death rate, in principle both density dependent. The elongation of the rhizome of clonal plants combined with the branching leads to an effective diffusion with coefficient \(d_{0}\) and to a nonlinear diffusion with coefficient \(d_{1}\). Additionally, a gradient-squared term with coefficient \(d_{1}\), characteristic of clonal growth, also appears in the model [15]. To describe a two-species system, using Eq. (1), we couple two different vegetation density fields through the mortality term to describe both intraspecific and interspecific interactions: \[\partial_{t}n_{i}=n_{i}Q_{i}[\vec{n}]+d_{i0}\nabla^{2}n_{i}+d_{i1}((\nabla^{2}n_{i})n_{i}+||\nabla n_{i}||^{2}) \tag{2}\] where we consider local interactions only in the net mortality term, given by a quadratic polynomial: \[Q_{i}(\vec{n})=-\omega_{i}+\vec{a_{i}}\cdot\vec{n}-b_{i1}^{2}n_{1}^{2}-b_{i2}^{2}n_{2}^{2}-b_{i3}n_{1}n_{2} \tag{3}\] where \(\omega_{i}\) is the net mortality of species \(i\) in the absence of other plants, \(\vec{a_{i}}=(a_{i1},a_{i2})\), and \(\vec{n}=(n_{1},n_{2})\). \(a_{ii}\) and \(a_{ij}\) (\(i\neq j\)) are the slopes of the linear change in the net mortality rate due to intraspecific and interspecific interactions, respectively.
A term \(a_{ij}>0\) describes a facilitative interaction for moderate densities, while \(a_{ij}<0\) describes a competitive interaction. The quadratic terms \(b_{ij}>0\) are saturation parameters that always describe competitive interactions at high plant densities, acting as a carrying capacity and giving an upper bound to the plant density. We consider the cross-saturation term \(b_{i3}=2b_{ii}b_{ij}\) for \(i\neq j\), simplifying the mortality term to a parabolic form: \[Q_{i}(\vec{n})=-\omega_{i}+\vec{a_{i}}\cdot\vec{n}-(\vec{b_{i}}\cdot\vec{n})^{2}. \tag{4}\] This way, considering equal interspecific and intraspecific interactions, the mortality term is a function of the total density only, i.e. the sum of the densities of both species \(n_{1}+n_{2}\), as expected if \(n_{1}\) and \(n_{2}\) were the same species. The local part of Eq. (2) corresponds to a generalized Lotka-Volterra equation [20, 21] with up to quadratic terms in the mortality rate (4), in both inter and intraspecies interactions. The use of these nonlinear interactions is supported by both theoretical and field observations. Specifically, recent studies have shown that interspecific plant-plant facilitation is density dependent, with a single maximum at intermediate densities [22]. Also, monospecific seagrass meadows show an abrupt collapse of the plant population for small increases of a stressor above a given critical value [23, 24], which indicates the presence of tipping points in the system. Both behaviors need at least up to quadratic nonlinear terms in (4) to be properly described. The obtained model is versatile and can represent species with different growth dynamics. It also allows a flexible representation of the different interactions between plants, such as competition, mutualism, amensalism, or parasitism. Additionally, the model is easily scalable to more than two species, making it a useful tool for studying the multispecies seagrass meadows dominant in tropical climates. The plasticity of the model thus allows for a comprehensive understanding of the complex interactions within ecosystems.

## 3. The symmetric case

In this section, we consider in detail the simplified case in which both plants are similar and have symmetric interactions, in such a way that the mortality and the intraspecies and interspecies terms are the same for both species, greatly reducing the number of parameters. This implies reciprocal interactions, i.e. mutualism and competition are the only possible relationships. In this situation, \(\omega_{1}=\omega_{2}:=\omega\), \(a_{11}=a_{22}:=a_{1}\), \(a_{12}=a_{21}:=a_{2}\), \(b_{11}=b_{22}:=b_{1}\), \(b_{12}=b_{21}:=b_{2}\), \(d_{10}=d_{20}:=d_{0}\), and \(d_{11}=d_{21}:=d_{1}\). Notice that, in this symmetric case, \(Q_{1}(n_{1},n_{2})=Q_{2}(n_{2},n_{1})\equiv Q(n_{1},n_{2})\). Considering low-density intraspecies facilitation (i.e. \(a_{1}>0\)), the equations can be reduced to a dimensionless form through the change of variables \[n^{\prime}_{1}=\frac{b_{1}^{2}}{a_{1}}n_{1}\qquad n^{\prime}_{2}=\frac{b_{1}^{2}}{a_{1}}n_{2}\qquad t^{\prime}=\frac{a_{1}^{2}}{b_{1}^{2}}t\qquad\vec{r^{\prime}}=\frac{a_{1}}{b_{1}\sqrt{d_{0}}}\vec{r}, \tag{5}\] and using the following rescaled parameters \[\omega^{\prime}=\frac{b_{1}^{2}}{a_{1}^{2}}\omega\qquad\alpha=\frac{a_{2}}{a_{1}}\qquad\beta=\frac{b_{2}}{b_{1}}\qquad\delta=\frac{a_{1}}{b_{1}^{2}}\frac{d_{1}}{d_{0}}. \tag{6}\] Dropping the primes, Eqs.
(2) become: \[\dot{n_{1}}=n_{1}Q(n_{1},n_{2})+\nabla^{2}n_{1}+\delta((\nabla^{2}n_{1})n_{1}+||\nabla n_{1}||^{2})\] \[\dot{n_{2}}=n_{2}Q(n_{2},n_{1})+\nabla^{2}n_{2}+\delta((\nabla^{2}n_{2})n_{2}+||\nabla n_{2}||^{2}) \tag{7}\] where \[Q(n_{1},n_{2})=-\omega+n_{1}+\alpha n_{2}-(n_{1}+\beta n_{2})^{2}. \tag{8}\] The new parameter \(\omega\) is proportional to the net mortality of plants in the absence of interactions. We consider that it depends on abiotic factors, i.e. it changes with the environmental conditions. Parameters \(\alpha\) and \(\beta\) give the ratio between interspecific and intraspecific interactions. Finally, \(\delta\) is a parameter proportional to the ratio between nonlinear and linear diffusion. In this work, we assume that the parameters \(\alpha\), \(\beta\), and \(\delta\) do not depend on abiotic factors, being determined by the characteristics of the interacting species. Throughout this work, we fix \(\delta=0.5\) and use \(\alpha\) and \(\beta\) as the parameters characterizing the species, and \(\omega\) as the control parameter whose variations reflect changes in the environment. For fixed biotic parameters (\(\alpha\), \(\beta\)), a change in the value of \(\omega\) can qualitatively modify the behavior of the system by crossing different bifurcation points. Advancing results to be discussed in detail later, we find that the parameter space (\(\alpha\), \(\beta\)) can be partitioned into ten different regions (see Fig. 1), in each of which the bifurcation diagram as a function of \(\omega\) is qualitatively different from the others. These ten regions in the parameter space can be further grouped into five different cases (color-shaded regions in Fig. 1), each with a different ecological interpretation.

### Homogeneous steady solutions and their bifurcations

The local dynamical system can present up to nine different homogeneous steady states (HSS). These fixed points have been classified into four different groups according to the relative concentration of the different species: one unpopulated \(P_{0}\); four mono-species \(P_{1}^{l}\), \(P_{1}^{h}\), \(P_{2}^{l}\), \(P_{2}^{h}\); two symmetric mixed \(P_{S}^{l}\), \(P_{S}^{h}\); and two asymmetric mixed \(P_{A1}\), \(P_{A2}\) (see Fig. 2). Solutions with a high plant density (labeled with the super-index \(h\)) and solutions with a lower plant density (labeled with the super-index \(l\)) can be distinguished in the case of symmetric mixed and monospecies HSSs. These solutions can be related by pairs since they are created via Saddle-Node bifurcations. Due to the symmetry between species, \(P_{A1}\), \(P_{1}^{l}\), and \(P_{1}^{h}\) have symmetric solutions (\(P_{A2}\), \(P_{2}^{l}\), and \(P_{2}^{h}\)) with interchanged plant densities. In the symmetric case considered here, two symmetric solutions are completely equivalent, so from now on we will drop the sub-indices 1 and 2 to refer indistinctly to these solutions, and we will present the results just for the former.

Figure 1: Projection of the full phase diagram on the \((\alpha,\beta)\) plane. Lines represent codimension-2 bifurcations and singular points. These lines divide the interaction parameter plane \((\alpha,\beta)\) into ten different regions with qualitatively unique bifurcation diagrams as a function of \(\omega\), labeled with roman numerals. These regions are grouped into 5 ecological cases: competition exclusion shaded in pink (regions I, II, and III); dynamic coexistence shaded in purple (region IV); low-density coexistence, shaded in yellow (region V); high-density coexistence, shaded in blue (regions VI and VII); and mutualism, shaded in green (regions VIII, IX and X). The red line represents the projection of the codimension-2 bifurcation where \(SN_{S}\) and \(T_{0}\) cross (\(\alpha=-1\)); the brown line where \(Pitch\) and \(T_{0}\) cross; the purple line where \(Pitch\), \(SN_{S}\) and \(Hopf\) converge; and the green curve where \(T\), \(SN\) and \(Hopf\) converge. Finally, the blue line represents a singular case, \(\beta=1\), where the value of \(\omega\) at which \(Pitch\) and \(T\) take place diverges to \(-\infty\).
The HSSs are created and change their stability through different bifurcations. Plant density values of each HSS and the corresponding bifurcations are listed in Tables 1 and 2, respectively.

\begin{table} \begin{tabular}{|c|c|c|} \hline Label & Name & Value (\(n_{1}\),\(n_{2}\)) \\ \hline \(P_{0}\) & Bared state/unpopulated & \((0,0)\) \\ \hline \(P^{h}\) & High populated monospecific & \((0,\frac{1+\sqrt{1-4\omega}}{2});(\frac{1+\sqrt{1-4\omega}}{2},0)\) \\ \hline \(P^{l}\) & Low populated monospecific & \((0,\frac{1-\sqrt{1-4\omega}}{2});(\frac{1-\sqrt{1-4\omega}}{2},0)\) \\ \hline \(P^{h}_{S}\) & High populated symmetric mixed & \(\frac{1+\alpha+\sqrt{(1+\alpha)^{2}-4\omega(1+\beta)^{2}}}{2(1+\beta)^{2}}(1,1)\) \\ \hline \(P^{l}_{S}\) & Low populated symmetric mixed & \(\frac{1+\alpha-\sqrt{(1+\alpha)^{2}-4\omega(1+\beta)^{2}}}{2(1+\beta)^{2}}(1,1)\) \\ \hline \(P_{A}\) & Asymmetric mixed & \(\left(\frac{1-\alpha}{2(1-\beta^{2})}\pm\sqrt{\frac{\omega_{p}-\omega}{(1-\beta)^{2}}},\frac{1-\alpha}{2(1-\beta^{2})}\mp\sqrt{\frac{\omega_{p}-\omega}{(1-\beta)^{2}}}\right)\) \\ \hline \end{tabular} \end{table} Table 1. Homogeneous steady states of Eq. (7); \(\omega_{p}\) denotes the critical mortality of the pitchfork bifurcation (see Table 2).

\begin{table} \begin{tabular}{|c|c|c|} \hline Label & Name & Critical point \(\omega_{c}\) \\ \hline \(T_{0}\) & Degenerate bared state transcritical & 0 \\ \hline \(SN\) & Monospecific Saddle Node & 0.25 \\ \hline \(SN_{S}\) & Symmetric mixed Saddle Node & \(\frac{(1+\alpha)^{2}}{4(1+\beta)^{2}}\) \\ \hline \(T\) & Monospecific transcritical & \(\frac{(1-\alpha)(\alpha-\beta^{2})}{(1-\beta^{2})^{2}}\) \\ \hline \(Pitch\) & Pitchfork of the symmetric state & \(\frac{(1-\alpha)(1+3\alpha-3\beta-\alpha\beta)}{4(1-\beta)^{2}(1+\beta)}\) \\ \hline \(Hopf\) & Andronov-Hopf of asymmetric mixed state & \(\frac{(1-\alpha)(1+2\alpha-2\beta-\beta^{2})}{4(1-\beta)^{2}(1+\beta)}\) \\ \hline \end{tabular} \end{table} Table 2. Local bifurcations of the HSS.

Figure 2: Schematic representation of the system's homogeneous steady state (HSS) solutions. The figure shows the nullclines, i.e. zero-growth isoclines for the different species, of the local (homogeneous) system. Nullclines for the \(n_{1}\) (\(n_{2}\)) density are shown in blue (red) dashed lines. Points where the two nullclines cross correspond to fixed points of the local system, i.e. HSSs. There are up to nine of these fixed points, which have been classified into four different groups: unpopulated (black dot), monospecies (blue dots), symmetric mixed states (green dots), and asymmetric mixed states (red dots). Notice the gray dashed symmetry line.

In Fig. 3 we show the ten qualitatively different bifurcation diagrams of the system as a function of \(\omega\). These bifurcation diagrams correspond to values of \(\alpha\) and \(\beta\) in each corresponding region in Fig. 1.
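Before analyzing the fixed points, it may help to see how Eqs. (7)-(8) can be integrated directly. The following is a minimal explicit finite-difference sketch on a periodic grid; the parameter values, grid, and time step are illustrative choices of ours, not taken from the paper.

```python
import numpy as np

# Illustrative parameters in region IV (beta < alpha < (1+beta^2)/2), with
# omega chosen so that P_S^h is pitchfork-unstable and P_A domains form.
alpha, beta, delta, omega = 4.0, 3.0, 0.5, 0.3
L_box, N, dt, steps = 100.0, 128, 1e-3, 100_000
dx = L_box / N

rng = np.random.default_rng(0)
n1 = 0.23 + 0.01 * rng.standard_normal((N, N))  # near P_S^h for these values
n2 = 0.23 + 0.01 * rng.standard_normal((N, N))

def lap(f):   # 5-point Laplacian with periodic boundaries
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0)
            + np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f) / dx**2

def grad_sq(f):   # ||grad f||^2 by central differences
    gx = (np.roll(f, -1, 0) - np.roll(f, 1, 0)) / (2.0 * dx)
    gy = (np.roll(f, -1, 1) - np.roll(f, 1, 1)) / (2.0 * dx)
    return gx**2 + gy**2

def Q(na, nb):   # net growth rate, Eq. (8)
    return -omega + na + alpha * nb - (na + beta * nb)**2

for _ in range(steps):   # explicit Euler step of Eqs. (7)
    f1 = n1 * Q(n1, n2) + lap(n1) + delta * (n1 * lap(n1) + grad_sq(n1))
    f2 = n2 * Q(n2, n1) + lap(n2) + delta * (n2 * lap(n2) + grad_sq(n2))
    n1, n2 = n1 + dt * f1, n2 + dt * f2
```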
The unpopulated solution, \(P_{0}\), is a trivial solution of the system which exists for any parameter values. It is stable for \(\omega>0\) and unstable for \(\omega<0\), losing its stability via a degenerate (due to the imposed symmetry) transcritical bifurcation, \(T_{0}\), at \(\omega=\omega_{c}=0\), involving \(P^{l}\) and either \(P_{S}^{l}\) or \(P_{S}^{h}\). The symmetric mixed state involved in this bifurcation is \(P_{S}^{h}\) for low values of \(\alpha\) (Fig. 3 I and VI) and \(P_{S}^{l}\) otherwise (Fig. 3 II-V and VII-X). When crossing \(T_{0}\) changing \(\omega\), the involved populated solutions change their sign, and only those solutions with positive plant density are biologically relevant. Note that the positive HSSs involved in the bifurcation are always unstable close to \(T_{0}\) due to the dominating low-density intraspecies facilitative interaction. Monospecies solutions \(P^{l}\) and \(P^{h}\) are characterized by the absence of one of the two species. The system can present four of these solutions, two with the absence of \(n_{1}\) and, equivalently, two symmetric solutions with the absence of \(n_{2}\). These fixed points are generated in two simultaneous monospecific Saddle-Node (\(SN\)) bifurcations. The higher branch of the \(SN\) corresponds to \(P^{h}\), stable under density perturbations of the same species, while the lower branch corresponds to \(P^{l}\), which is always unstable. For a single species the system shows the so-called Allee effect, a positive correlation between the growth rate and the population size for small densities [25]. For \(\omega>0\), the Allee effect is strong, and there is a threshold (given by \(P^{l}\)) below which the plant density decays. The system can show bistability between the unpopulated, \(P_{0}\), and the higher populated monospecific solutions, \(P^{h}\), in this regime. For \(\omega<0\), the system displays a weak Allee effect, i.e. there is no threshold for the growth of plant density. Thus, in this regime, \(P^{h}\) will be stable and \(P_{0}\) unstable, while \(P^{l}\) is negative and does not have a biological meaning in this context.

Figure 3: Bifurcation diagrams as a function of the mortality \(\omega\) in the different regions shown in Fig. 1. The x-axis represents the difference between the populations of the two species, \(n_{1}-n_{2}\), the z-axis represents the density of species 2, \(n_{2}\), and the y-axis represents the control parameter \(\omega\). The branch on the y-axis (\(n_{2}=0\) and \(n_{1}-n_{2}=0\)) corresponds to \(P_{0}\) (black dot in Fig. 2); branches on the x-y plane (\(n_{2}=0\)) correspond to \(P^{l}\) and \(P^{h}\) (blue points in Fig. 2), branches on the y-z plane (\(n_{1}-n_{2}=0\)) correspond to \(P_{S}^{l}\) and \(P_{S}^{h}\) (green points in Fig. 2), and branches out of these planes correspond to \(P_{A}\) (red points in Fig. 2). Asymmetric mixed steady states with a concentration of \(n_{2}\) higher than \(n_{1}\) and monospecies states with species 1 are not shown but, due to the symmetry of the system, these solutions have the same bifurcation diagram as the equivalent solutions shown here. Solid black lines represent stable fixed points, red dashed lines saddle points, and red dotted lines unstable nodes or spirals. Colored squares around the diagrams group them into the five different ecological frameworks. The numbers and colors match those used in Fig. 1.

The transition between these two regimes, i.e.
between the monospecific strong and weak Allee effects, occurs through the already discussed transcritical bifurcation \(T_{0}\), involving \(P_{0}\) and \(P^{l}\). When considering the presence of the other species, the stability of the higher populated monospecific state, \(P^{h}\), is not guaranteed. In regions I-IV and VI-VII, \(P^{h}\) is stable right from the \(SN\), which corresponds to a \(SN_{-}\) of the local system. Otherwise, in regions V, IX, and X, the \(SN\) corresponds to a \(SN_{+}\), and \(P^{h}\) is unstable to perturbations consisting of a small population of the other species. Away from the \(SN\), \(P^{h}\) can still change its stability through a transcritical bifurcation (\(T\)) with \(P_{A}\); see for instance Fig. 3 V and VI-VIII. Crossing this bifurcation point by decreasing \(\omega\), \(P_{A}\) enters a quadrant of negative values, losing its biological meaning. On the other hand, \(P^{h}\) changes its stability, either losing it in a catastrophic transition (see Fig. 3 VI-VIII) or gaining it (Fig. 3 V). Symmetric mixed solutions (\(P^{h}_{S}\) and \(P^{l}_{S}\)) are characterized by having the same population of both species, \(n_{1}=n_{2}\). These solutions are generated at a saddle-node bifurcation with symmetric plant concentrations (\(SN_{S}\)). By decreasing \(\omega\) to 0, either \(P^{h}_{S}\) or \(P^{l}_{S}\) will interact with \(P_{0}\) in \(T_{0}\), changing its sign. In contrast with the monospecific saddle node (\(SN\)), which always occurs for positive densities since low-density intraspecific facilitation is assumed in this work, the \(SN_{S}\) might occur for negative population values (see Fig. 3 I and VI), in which case the solutions have no biological meaning at the bifurcation. When this happens (regions I and VI), \(P^{h}_{S}\) interacts with \(P_{0}\) at \(T_{0}\), becoming positive for \(\omega<0\), while \(P^{l}_{S}\) always takes negative values in this case. When \(SN_{S}\) occurs for positive density values, i.e. in regions II-V and VII-X (see Fig. 3), two different scenarios are found when considering the stability of \(P^{h}_{S}\) at the bifurcation point. On one hand, \(P^{h}_{S}\) is stable at the bifurcation point in regions IV, V, and VIII-X, where we label the \(SN_{S}\) bifurcation as \(SN_{S-}\). On the other hand, \(P^{h}_{S}\) is unstable at the bifurcation in regions I-III, VI, and VII, where we denote the \(SN_{S}\) bifurcation as \(SN_{S+}\). A symmetric mixed solution, either \(P^{h}_{S}\) or \(P^{l}_{S}\), is also involved in a pitchfork bifurcation (\(Pitch\)), i.e. a spontaneous symmetry breaking of the system, from where a pair of asymmetric mixed solutions (\(P_{A}\)) emerges. Depending on the region, this bifurcation affects one branch or the other of \(P_{S}\) (see Fig. 3). In regions I, II, and X, \(Pitch\) involves \(P^{l}_{S}\) but for negative values, and \(P_{A}\) does not have biological meaning for any value of \(\omega\). In regions III, VIII, and IX, \(Pitch\) involves \(P^{l}_{S}\) with positive values. In regions IV, V, VI, and VII, \(Pitch\) affects \(P^{h}_{S}\), changing the stability of this point. In this last case, we can make a relevant distinction. In regions IV and V, \(Pitch\) is supercritical and \(P_{A}\) is stable after the bifurcation, while in regions VI and VII, \(Pitch\) is subcritical and \(P_{A}\) is unstable. Moreover, in region IV, \(P_{A}\) undergoes an Andronov-Hopf bifurcation (\(Hopf\)), where the stability of \(P_{A}\) changes by decreasing \(\omega\) before reaching the \(T\) bifurcation.
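The stability assignments above can be checked numerically from the local part of Eqs. (7). A small sketch (the finite-difference Jacobian and the example point are our choices):

```python
import numpy as np

def Q(x, y, omega, alpha, beta):   # Eq. (8)
    return -omega + x + alpha * y - (x + beta * y)**2

def jacobian(n1, n2, omega, alpha, beta, eps=1e-7):
    """2x2 Jacobian of the local system dn_i/dt = n_i Q(n_i, n_j)."""
    f = lambda v: np.array([v[0] * Q(v[0], v[1], omega, alpha, beta),
                            v[1] * Q(v[1], v[0], omega, alpha, beta)])
    v0, J = np.array([n1, n2], float), np.zeros((2, 2))
    for k in range(2):
        dv = np.zeros(2); dv[k] = eps
        J[:, k] = (f(v0 + dv) - f(v0 - dv)) / (2 * eps)
    return J

# Example: P_S^h (Table 1) for alpha = 4, beta = 3, omega = 0.3 (region IV,
# just below the pitchfork): one eigenvalue is positive, so P_S^h is unstable.
omega, alpha, beta = 0.3, 4.0, 3.0
n = (1 + alpha + np.sqrt((1 + alpha)**2 - 4 * omega * (1 + beta)**2)) / (2 * (1 + beta)**2)
print(np.linalg.eigvals(jacobian(n, n, omega, alpha, beta)))
```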
After the Hopf bifurcation a stable homogeneous limit cycle with densities oscillating around \(P_{A}\) is observed. The dynamics of the limit cycle will be discussed in Section 4.1.2. The regions in the (\(\alpha\), \(\beta\)) parameter space where each archetypal bifurcation diagram is found are shown in Fig. 1. The curves separating the different regions are given by the projection of codimension-2 bifurcations and singular parameter values of the complete four-dimensional parameter space on the (\(\alpha\), \(\beta\)) plane. Regions I and II, and VI and VII, are separated by the codimension-2 bifurcation point at which \(SN_{S}\) and \(T_{0}\) occur for the same parameter values, shown as a red line at \(\alpha=-1\) in Fig. 1. Regions II and III, and IX and X, are separated by the codimension-2 bifurcation at which \(T_{0}\), \(T\), and \(Pitch\) occur for the same parameter values, shown as a brown line at \(\alpha=1\) in Fig. 1. Regions III and IV, and VII and VIII, are separated by the codimension-2 bifurcation at which \(Pitch\) and \(SN_{S}\) occur for the same parameter values, marked as a purple line in Fig. 1 (\(\alpha=\beta\)). This codimension-2 point, in the case separating regions III and IV, also involves the \(Hopf\) and \(DH\) bifurcations, in a Bogdanov-Takens bifurcation. The separation between regions IV and V, and VIII and IX, is given by the codimension-2 point at which \(T\), \(Pitch\), and \(Hopf\) occur for the same parameter values, marked in green in Fig. 1 (\(2\alpha=(1+\beta)^{2}\)). Finally, the blue line in Fig. 1, separating regions I, II, and V from VI, VII, and X respectively, represents a singular point in the (\(\alpha\), \(\beta\)) subspace, given by \(\beta=1\). Approaching this value of \(\beta\), the critical value of \(\omega\) at which the bifurcations affecting \(P_{A}\) occur, i.e. \(Pitch\) and \(T\), diverges to \(-\infty\).

## 4. Interaction scenarios

The structure of the HSS bifurcation diagram as a function of the net mortality rate \(\omega\) changes depending on the values of the inter/intra-species interaction ratios (\(\alpha\), \(\beta\)), as shown in Figs. 1 and 3. Nevertheless, some of these regimes differ only in bifurcations affecting unstable HSSs or involving solutions with negative density values. Therefore, we can group the ten cases into just five scenarios with significantly different behavior and ecological interpretation. The regions encompassed in each scenario are shaded with the same color in Fig. 1 and grouped by dashed-line boxes in Fig. 3. We further classify the 5 scenarios in two cases: scenarios for large saturation ratios (\(\beta>1\)) and scenarios for small saturation ratios (\(\beta<1\)).

### Scenarios for large saturation ratios (\(\beta>1\))

In this section we study the large-saturation-ratio case, i.e. \(\beta>1\), meaning that the interspecific saturation term is larger than the intraspecific one. Therefore, in this region of the parameter space monospecies meadows are favored, especially for the large densities appearing at small mortality rates. Nevertheless, for intense interspecific facilitation (large values of \(\alpha\)) stable mixed meadows (either \(P_{S}^{h}\) or \(P_{A}\)) can appear for intermediate mortality rates, as well as more exotic behaviors such as oscillations or excitability, due to strongly nonlinear dynamics. A representative phase diagram of this region in the \((\alpha,\omega)\) parameter space is shown in Fig. 4 for \(\beta=3\).
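The critical mortalities of Table 2 make it straightforward to reconstruct the ordering of the bifurcations along the \(\omega\) axis for given \((\alpha,\beta)\). A sketch using the expressions of Table 2 (with the \(Pitch\) and \(Hopf\) entries as reconstructed there); valid away from \(\beta=1\):

```python
# Critical mortality values of Table 2 as functions of (alpha, beta).
def omega_SN():        return 0.25
def omega_SNS(a, b):   return (1 + a)**2 / (4 * (1 + b)**2)
def omega_T(a, b):     return (1 - a) * (a - b**2) / (1 - b**2)**2
def omega_pitch(a, b): return (1 - a) * (1 + 3*a - 3*b - a*b) / (4 * (1 - b)**2 * (1 + b))
def omega_hopf(a, b):  return (1 - a) * (1 + 2*a - 2*b - b**2) / (4 * (1 - b)**2 * (1 + b))

# Example: ordering of the bifurcations for alpha = 4, beta = 3 (region IV).
a, b = 4.0, 3.0
for name, w in [("SN", omega_SN()), ("SN_S", omega_SNS(a, b)), ("T", omega_T(a, b)),
                ("Pitch", omega_pitch(a, b)), ("Hopf", omega_hopf(a, b))]:
    print(f"{name:6s} omega_c = {w:+.4f}")
```

For this example one finds \(SN_{S}>Pitch>Hopf>SN>T\) along decreasing \(\omega\), consistent with the region IV diagram of Fig. 3.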
We next discuss the three different dynamical regimes in this scenario.

#### 4.1.1. Competitive exclusion scenario

For \(\alpha<\beta\) (regions I, II, and III; pink shaded in Fig. 1), there are no stable mixed states. In this scenario, plants compete with each other at all plant densities. Representative bifurcation diagrams are shown in Fig. 3 I-III.

Figure 4: \((\alpha,\omega)\) phase diagram for \(\beta=3\), crossing through regions I-V in Fig. 1. This phase diagram is representative of any configuration with \(\beta>1\). Dotted lines represent bifurcations involving negative steady points, i.e. solutions without physical meaning. The blue line represents \(SN\), solid (dashed) when it corresponds with \(SN_{-}\) (\(SN_{+}\)). The green line represents \(SN_{S}\), solid (dashed) when it corresponds with \(SN_{S-}\) (\(SN_{S+}\)). The red solid (dashed) line represents the supercritical (subcritical) pitchfork bifurcation from where \(P_{A}\) emerges. The orange solid (dashed) line represents \(T\) involving \(P^{h}\) (\(P^{l}\)). The black line represents \(T_{0}\), solid when involving \(P_{S}^{h}\), and dashed (dot-dashed) when involving \(P_{S}^{l}\) as an unstable node (saddle). Finally, the purple line represents the supercritical \(Hopf\) of \(P_{A}\). Dots mark the codimension-2 points.

For mortality values above the saddle-node bifurcation of the monospecies solutions, \(SN\) (i.e. \(\omega>0.25\)), the only possible state of the system is bare soil (\(P_{0}\)), to which any initial condition will converge. For \(\omega\in(0,0.25)\) the system shows bistability. On one hand, \(P_{0}\) is still stable, and insufficiently dense initial conditions die out (strong Allee effect). On the other hand, \(P^{h}\), with either one or the other species, is stable, and dense enough initial conditions will form monospecific meadows. Here \(P^{l}\) acts as a critical density below which the system goes to bare soil and above which the system develops a meadow. For lower mortalities (\(\omega<0\)), the system always tends to monospecific solutions (weak Allee effect). In this scenario the system displays a hysteresis cycle; the system has a tipping point at \(\omega=0.25\) where the populated solution collapses to the bare state. On the other hand, at \(\omega=0\), \(P_{0}\) destabilizes, and for \(\omega<0\) each species may grow in different places, forming domains separated by fronts. Typically the system shows curvature-driven coarsening, in such a way that closed domains tend to a circular shape and shrink, their size following a \(t^{1/2}\) scaling law, until disappearing completely [26]. In this case, the final state at long times is always either a single-species meadow or regions of different species separated by flat fronts (Fig. 5 a-h). This phase separation scenario can be related to dominating competitive ecological interactions between species. This situation is structurally unstable, and any breaking of the symmetry between species will make the dominant one overrun the other and colonize all the space.

#### 4.1.2 Strongly nonlinear regime

The region with \(\beta<\alpha<\frac{1}{2}(1+\beta^{2})\) (region IV, shaded in purple in Fig. 1) presents a highly nonlinear behavior for intermediate values of mortality. This behavior is generated by the interplay between strong quadratic interspecies facilitation terms and strong cubic interspecies saturation.
#### 4.1.2 Strongly nonlinear regime

The region with \(\beta<\alpha<\frac{1}{2}(1+\beta^{2})\) (region IV, shaded in purple in Fig. 1) presents a highly nonlinear behavior for intermediate values of mortality. This behavior is generated by the interplay between strong quadratic interspecies facilitation terms and strong cubic interspecies saturation. As usual, for large enough mortality rates, the only possible final state of the system is \(P_{0}\), and any initial non-zero population decays. For lower mortality values, the system shows bistability between \(P_{0}\) and \(P_{S}^{h}\). However, for smaller mortality values, \(P_{S}^{h}\) destabilizes through a supercritical pitchfork bifurcation, leading to a phase-separation dynamics of the two asymmetric solutions \(P_{A}\), as shown in Fig. 6 i-p. For even lower mortalities, \(P_{A}\) undergoes a Hopf bifurcation and the densities \(n_{1}\) and \(n_{2}\) oscillate around these states.

The dynamics of the limit cycle for decreasing values of \(\omega\) is shown in Fig. 7. Decreasing \(\omega\), the limit cycle grows in amplitude and approaches \(P_{S}^{l}\) and \(P_{S}^{h}\) simultaneously (see Fig. 7a, b and c). Close to these fixed points, the limit cycle slows down (see Fig. 7e and f). Eventually, decreasing \(\omega\) even more, the limit cycle touches \(P_{S}^{l}\) and \(P_{S}^{h}\) in a double-heteroclinic connection (\(DH\)), as shown in Fig. 7c. After this bifurcation point, the limit cycle is destroyed and the local system presents Type-I excitable behavior (see Fig. 7d and g). In this excitable regime, homogeneous initial conditions below a threshold, given by the stable manifold of \(P_{S}^{l}\), decay to the bare state. Homogeneous initial conditions above this threshold will, however, make a large excursion in phase space to finally come back to the bare state again, an excitable trajectory (see Fig. 7d and g). This leads to the apparently paradoxical absence of persistent populated solutions in the so-called "excitable regime". This paradoxical behavior can be related to the "enrichment paradox" [27] observed in many population dynamics models. However, in this case, localized initial conditions grow in this regime, creating a turbulent state that expands onto bare soil. An example of this regime is shown in Fig. 7 h-o. In contrast with other models [28, 29], for the parameters used in this study we have not observed stable travelling pulses in the excitable regime, only turbulent states.
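These regimes can be reproduced with the homogeneous (diffusion-free) kinetics alone. The growth law below is our reconstruction from Appendix A: taking \(\dot{n}_{i}=n_{i}\left(-\omega+n_{i}+\alpha n_{j}-(n_{i}+\beta n_{j})^{2}\right)\) reproduces the Jacobian (11) and the monospecific eigenvalues (17), but it is an inferred form, not one quoted in this section. With \(\alpha=4\), \(\beta=3\) as in Fig. 7, \(\omega=0.275\) should then sustain oscillations while \(\omega=0.27\), past the \(DH\), should produce a decaying excitable excursion:

```python
# Homogeneous (no-space) dynamics with the per-capita growth reconstructed
# from Appendix A: g(n_i, n_j) = -omega + n_i + alpha*n_j - (n_i + beta*n_j)^2.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, n, omega, alpha, beta):
    n1, n2 = n
    g1 = -omega + n1 + alpha * n2 - (n1 + beta * n2) ** 2
    g2 = -omega + n2 + alpha * n1 - (n2 + beta * n1) ** 2
    return [n1 * g1, n2 * g2]

alpha, beta = 4.0, 3.0                     # values used in Fig. 7
n0 = [0.34, 0.04]                          # near the asymmetric state P_A
for omega, label in [(0.275, "oscillatory"), (0.27, "excitable")]:
    sol = solve_ivp(rhs, (0, 2000), n0, args=(omega, alpha, beta),
                    rtol=1e-9, atol=1e-12)
    tail = sol.y[:, sol.t > 1500]
    swing = tail.max(axis=1) - tail.min(axis=1)   # late-time amplitude
    print(f"omega={omega} ({label}): final n = {sol.y[:, -1].round(4)}, "
          f"oscillation amplitude = {swing.round(4)}")
```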
#### 4.1.3 Obligate mutualism to monospecific transition

For \(\alpha>\frac{1}{2}(1+\beta^{2})\) (region V, shaded in yellow in Fig. 1) we observe a smooth transition from mixed symmetric, \(P_{S}^{h}\), to monospecific, \(P^{h}\), meadows through asymmetric states, \(P_{A}\) (see panel V in Fig. 3). This transition can be understood as an obligate mutualism interaction for low plant densities, but a competitive interaction for high densities, giving a competitive exclusion scenario for small mortalities.

In this scenario, the system presents a hysteresis cycle between populated solutions and \(P_{0}\). For high mortality, \(P_{0}\) is the only possible state. Decreasing the mortality, the system crosses \(SN_{S}\), after which it shows bistability between \(P_{0}\) and \(P_{S}^{h}\). If we follow the populated branch \(P_{S}^{h}\) while decreasing \(\omega\), the system eventually crosses a supercritical pitchfork bifurcation (\(Pitch\)) and \(P_{S}^{h}\) loses stability. Initial conditions around \(P_{S}^{h}\) slightly below this point tend to phase separate driven by curvature, forming domains of either one of the two asymmetric solutions, \(P_{A1}\) or \(P_{A2}\), as shown in Fig. 6 i-p. Decreasing \(\omega\) even more, \(P_{A}\) becomes more and more asymmetric, with one of the two species increasing its density while the other decreases it, until \(P_{A}\) eventually reaches \(P^{h}\) in \(T\). This gives a continuous transition of the populated stable solutions from \(P_{S}^{h}\) to \(P^{h}\) while decreasing \(\omega\). Here the stable bare state coexists with stable populated solutions (either \(P_{S}^{h}\), \(P_{A}\) or \(P^{h}\)) until \(\omega=0\), where it loses its stability through \(T_{0}\). Crossing this threshold, the system undergoes a phase separation involving either the monospecific solutions (see Fig. 5 a-h) or the asymmetric mixed solutions (see Fig. 5 q-x), depending on the relative position of \(Pitch\) and \(T\) for the given parameter values.

### Scenarios for small saturation ratio (\(\beta<1\))

In this section we study the scenarios with a small saturation ratio, i.e. when \(\beta<1\) and therefore the intraspecies saturation is greater than the interspecies one. In this region of parameter space, \(P_{S}^{h}\) is favored, especially for small mortality values where it shows large densities. Nevertheless, for small values of the parameter \(\alpha\), describing interspecies competition or just very weak interspecies facilitation for low plant densities, the system can also show monospecific meadows for intermediate mortalities. A representative phase diagram of this region in the \((\alpha,\omega)\) parameter space is shown in Fig. 8 for \(\beta=0.1\). We next discuss the two different cases in this scenario.

Figure 5. Numerical simulation after the \(T_{0}\), for \(\omega=-0.1\). The simulations are initialized around \(P_{0}\) adding small noise. The panels show frames of the density fields for \(n_{1}\) (a-d, i-l, and q-t) and \(n_{2}\) (e-h, m-p, and u-x). The simulation has been performed for three different parameter configurations, showing the three different transitions when crossing the bifurcation point. Panels a-h show phase separation to \(P^{h}\), for \(\alpha=5\) and \(\beta=2.5\). Panels i-p show the transition to \(P_{S}\), for \(\alpha=6\) and \(\beta=1.5\). Panels q-x show phase separation to \(P_{A}\), for \(\alpha=7.7\) and \(\beta=2.5\).

#### 4.2.1 Competitive exclusion to facultative mutualism transition

For small values of \(\alpha\), i.e. \(\alpha<\beta\) (regions VI and VII, blue shaded in Fig. 1), the system tends to \(P^{h}\) for intermediate mortality values, and to \(P^{h}_{S}\) for lower mortalities. The transition between these two configurations is abrupt, and the system shows a hysteresis cycle. This hysteresis cycle coexists with another one between populated and unpopulated solutions (see Fig. 3 VI-VII). The transition from monospecific, \(P^{h}\), to mixed symmetric, \(P^{h}_{S}\), meadows occurs through \(T\), which involves the (unstable) \(P_{A}\). The transition from \(P^{h}_{S}\) to \(P^{h}\) occurs after a subcritical pitchfork, \(Pitch\). This transition leads to a spontaneous symmetry breaking and a phase separation of the two monospecific solutions (see Fig. 6 a-h). The populated-unpopulated hysteresis cycle involves \(P^{h}\), which is destroyed at \(SN\) for \(\omega=0.25\). For larger mortalities, any initial condition decays to the bare state \(P_{0}\). \(P_{0}\) is stable above \(T_{0}\). After this bifurcation, and depending on the parameters, the system can show a phase separation to \(P^{h}\) (see Fig. 5 a-h) or converge to \(P^{h}_{S}\) (see Fig. 5 i-p).
#### 4.2.2 Obligate and facultative mutualism

For large values of \(\alpha\), i.e. \(\alpha>\beta\) (regions VIII, IX, and X, shaded in green in Fig. 1), \(P^{h}_{S}\) is always stable below \(SN_{S}\). Representative bifurcation diagrams of this region are shown in Fig. 3 VIII, IX and X.

Figure 6. Numerical simulations close to the \(Pitch\) bifurcation. The simulations are initialized around \(P^{h}_{S}\) with small Gaussian noise. The panels show frames of the density fields for \(n_{1}\) (a-d and i-l) and \(n_{2}\) (e-h and m-p). The simulation has been performed for two different parameter configurations, one showing the behavior after the subcritical \(Pitch\) (a-h), with \(\omega=-6.04\), \(\alpha=-2\), and \(\beta=0.3\); and the other after the supercritical \(Pitch\) (i-p), with \(\omega=-0.616\), \(\alpha=6\), and \(\beta=2\).

Figure 7. Oscillatory and turbulent regimes around asymmetric solutions. Panels a)-d) show sketches of the phase diagram for different values of \(\omega\) in region V: a) between \(Pitch\) and \(Hopf\); b) after the \(Hopf\), \(P_{A}\) destabilizes and a stable limit cycle emerges. Decreasing \(\omega\) further, the limit cycle grows in amplitude and approaches \(P_{S}^{h}\) and \(P_{S}^{l}\), until it touches them, c), at a double heteroclinic (\(DH\)). Approaching this bifurcation point, the period of the oscillations diverges, and the limit cycle is destroyed after crossing it. After this point, the system shows local excitability, see the red trajectory in panel d). Panels e)-g) show the time evolution of each species' density in the oscillatory regime for \(\alpha=4\) and \(\beta=3\), and (e) \(\omega=0.275\) far from the \(DH\) bifurcation, (f) \(\omega=0.27082\) closer to \(DH\), and (g) \(\omega=0.27\) in the excitable regime past the \(DH\). Panels h)-o) show the turbulent regime observed in the excitable region for \(\omega=0.251\).

For large mortality values, the symmetric mixed solution has a tipping point (\(SN_{S}\)) and the system collapses to the bare state. This bare state coexists with the symmetric mixed solution until \(\omega=0\), after which the system converges to the symmetric mixed solution (see Fig. 5 i-p). In region VIII there is a small set of mortality values for which the monospecific solution is also stable, with the system showing bistability between monospecific and symmetric mixed populated states.

## 5. Conclusions

We have presented a general spatiotemporal population dynamics model for two interacting seagrass species. The interaction between species has been introduced as a coupling through the mortality rate, with up to quadratic density-dependent terms. This allows modeling different types of interactions. Regarding intraspecific interactions, these nonlinear terms allow low-density facilitation and high-density saturation leading to bounded solutions, i.e. an Allee effect. For the interspecific interactions, the nonlinear density dependence allows, for some parameters, the prevalence of monospecific solutions, and species segregation for the large-density solutions associated with low mortality rates. The system also includes nonlinear diffusion and a gradient-squared term to model clonal reproduction.

In this work we have analyzed in detail the symmetric scenario of the general model, where intraspecific interactions are equal for both species and the interspecific interaction is reciprocal. This scenario reduces the model parameters to just four in its dimensionless form.
We have characterized the bifurcation diagram of the symmetric scenario, which can be considered as a backbone of the complete system. The parameter space of the symmetric scenario can be divided into ten different regions according to the values of the biotic parameters \(\alpha\) and \(\beta\) determining the ratio between the intraspecific and interspecific interaction strengths. The bifurcation diagrams of the fixed points in each of these regions as a function of the net mortality rate \(\omega\), the parameter depending on abiotic factors, are qualitatively different. Furthermore, we can group these regions into five different scenarios with different ecological interpretations, including obligate and facultative mutualism, competitive exclusion, and strongly nonlinear regimes, as well as transitions between them. Some of these scenarios (regions VI-X in Fig. 1) are compatible with a linear interaction between species, corresponding in the model to \(\beta=0\). Nevertheless, some of the dynamics found in regions I-V are incompatible with a merely linear interspecific interaction in a symmetric system. These dynamics include stable asymmetric states, oscillations, turbulence, and competitive exclusion.

Figure 8. Phase diagram for \(\beta=0.1\) crossing through regions VI-X. This phase diagram is representative of any configuration with \(\beta<1\). Dotted lines represent bifurcations involving negative steady states, i.e. solutions without physical meaning. The blue solid (dashed) line represents the \(SN_{-}\) (\(SN_{+}\)) bifurcation. The green line represents \(SN_{S}\), solid (dashed) when corresponding to \(SN_{S-}\) (\(SN_{S+}\)). The red solid (dashed) line represents the subcritical (supercritical) \(Pitch\) involving \(P_{S}^{h}\) (\(P_{S}^{l}\)). The orange solid (dashed) line represents \(T\) involving \(P^{h}\) (\(P^{l}\)). The black line represents \(T_{0}\), solid when it involves \(P_{S}^{h}\), dashed when it involves \(P_{S}^{l}\) while it is a node, and dot-dashed when it involves \(P_{S}^{l}\) while it is a saddle. The purple dotted line represents the \(Hopf\) bifurcation of \(P_{A}\), with negative density values. Dots mark the codimension-2 points.

We have only studied in detail the symmetric case of the proposed model. Nevertheless, in many real cases, the interacting species are very different and the interaction can be asymmetric. Therefore, a natural extension of our work is to apply the model to particular cases, as has already been done with some seagrass microscopic models and macroscopic single-species systems [14, 18, 19].

## Appendix A Linear stability analysis

In this appendix, we describe the stability analysis used to study the bifurcations affecting HSSs. In particular, we show that there are no finite-wavelength instabilities, a.k.a. Turing instabilities, for any of these solutions.
To study the linear stability of HSSs we consider small perturbations of the form:

\[\vec{n}_{q}=\vec{n}_{q}^{0}e^{\sigma_{q}t+iqx} \tag{9}\]

where \(\sigma_{q}\) is the eigenvalue associated with the eigenvector \(\vec{n}_{q}^{0}\) of the Jacobian matrix around the HSS:

\[J_{q}(n_{1}^{*},n_{2}^{*})=J_{0}(n_{1}^{*},n_{2}^{*})+\begin{pmatrix}-(1+\delta n_{1}^{*})q^{2}&0\\ 0&-(1+\delta n_{2}^{*})q^{2}\end{pmatrix} \tag{10}\]

where \(J_{0}\) is the homogeneous Jacobian matrix, given by:

\[J_{0}(n_{1}^{*},n_{2}^{*})=\begin{pmatrix}J_{11}&J_{12}\\ J_{21}&J_{22}\end{pmatrix}=\begin{pmatrix}Q(n_{1}^{*},n_{2}^{*})+n_{1}^{*}-2n_{1}^{*}(n_{1}^{*}+\beta n_{2}^{*})&n_{1}^{*}[\alpha-2\beta(n_{1}^{*}+\beta n_{2}^{*})]\\ n_{2}^{*}[\alpha-2\beta(n_{2}^{*}+\beta n_{1}^{*})]&Q(n_{2}^{*},n_{1}^{*})+n_{2}^{*}-2n_{2}^{*}(n_{2}^{*}+\beta n_{1}^{*})\end{pmatrix} \tag{11}\]

The bifurcations presented in this paper can be straightforwardly obtained through the study of the eigenvalues of the \(J_{0}\) matrix. To detect pattern-forming instabilities one must consider the full Jacobian \(J_{q}(n_{1}^{*},n_{2}^{*})\). Although in the symmetric case the diffusion coefficients are equal, \(d_{10}=d_{20}\), the presence of nonlinear diffusion does not allow one to discard, a priori, the presence of a Turing instability in the system. In what follows, however, we prove that, despite nonlinear diffusion, no Turing instability can take place in the symmetric case for any of the HSSs.

Six different conditions must be fulfilled in order for a Turing instability to take place. First, both fields of the homogeneous solution must be non-negative to have physical meaning:

\[n_{1,2}^{*}\geq 0. \tag{12}\]

Second, the solution must be linearly stable under homogeneous perturbations, and therefore the following two conditions must be fulfilled:

\[\tau=J_{11}+J_{22}<0 \tag{13}\]
\[\Delta=J_{11}J_{22}-J_{12}J_{21}>0. \tag{14}\]

Finally, the transition must happen for a real critical wavenumber and a positive value of the control parameter \(\delta\):

\[q_{c}^{2}=\frac{J_{11}(1+\delta n_{2}^{*})+J_{22}(1+\delta n_{1}^{*})}{2(1+\delta n_{1}^{*})(1+\delta n_{2}^{*})}>0 \tag{15}\]
\[\delta_{c}>0. \tag{16}\]

### Turing of the unpopulated solution

In this subsection, we prove that the unpopulated solution has no physically meaningful Turing instability. First, the growth of a non-zero-wavenumber perturbation on top of the bare state implies regions of space with a negative value of the population density of at least one species. These solutions therefore don't have physical meaning and are forbidden, by construction, in the system. Nevertheless, we can compute the critical squared wavenumber from Eq. (15) to obtain \(q_{c}^{2}=-\omega\). Therefore this critical wavenumber only exists for negative values of \(\omega\). Computing the determinant and the trace of the linearized system for perturbations with \(q_{c}\), we obtain \(\tau_{c}=0\) and \(\Delta_{c}=0\), showing that the value at this point does not depend on \(\delta\), meaning that the eigenvalue associated with \(q_{c}\) will be a geometrically degenerate double zero and will never be positive. This point, therefore, is not associated with a Turing instability but is a consequence of the symmetries of the problem.
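Before treating the remaining HSS families analytically, note that conditions (13)-(16) can be probed numerically for any candidate solution. The following sketch is our own cross-check; the numerical \(J_{0}\), densities and \(\delta\) are arbitrary stand-ins, not model values:

```python
# Numerical probe of the dispersion relation sigma(q) from Eq. (10):
# J_q = J_0 - diag((1 + delta*n1)*q^2, (1 + delta*n2)*q^2).
import numpy as np

def max_growth_rate(J0, n1, n2, delta, qs):
    """Largest Re(sigma_q) for each wavenumber in qs."""
    rates = []
    for q in qs:
        Jq = J0 - np.diag([(1 + delta * n1) * q**2, (1 + delta * n2) * q**2])
        rates.append(np.linalg.eigvals(Jq).real.max())
    return np.array(rates)

# Stand-in values (illustrative only): a homogeneously stable J_0
# (negative trace, positive determinant) and unequal densities.
J0 = np.array([[-0.2, 0.1], [0.3, -0.4]])
n1, n2, delta = 0.3, 0.05, 2.0
qs = np.linspace(0.0, 5.0, 501)
rates = max_growth_rate(J0, n1, n2, delta, qs)
print(f"max_q Re(sigma) = {rates.max():.4f} at q = {qs[rates.argmax()]:.2f}")
# A Turing instability would require rates.max() > 0 at some q > 0 while
# rates[0] < 0; the appendix shows this cannot happen for the model's HSSs.
```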
### Turing of the monospecies solutions

In this subsection, we prove that there is no physically meaningful Turing instability for monospecific homogeneous solutions. As the matrix given by Eq. (10) is triangular for these solutions, the eigenvalues for \(P^{l,h}\) are easily obtained:

\[\begin{split}\lambda_{1}(q)&=n^{*}-2n^{*2}-(1+\delta n^{*})q^{2}\\ \lambda_{2}(q)&=-\omega+\alpha n^{*}-\beta^{2}n^{*2}-q^{2}\end{split} \tag{17}\]

where \(n^{*}\) is the plant density of the populated species. Both eigenvalues attain their only maximum at \(q=0\) and, therefore, no Turing instability can take place for monospecific HSSs.

### Turing of the symmetric mixed solution

Combining Eq. (13) and Eq. (15) for \(P^{l,h}_{S}\) (for which \(n_{1}^{*}=n_{2}^{*}=n^{*}\)) we obtain:

\[q_{c}^{2}=\frac{\tau}{2(1+\delta n^{*})}. \tag{18}\]

Assuming conditions (12) and (16) are fulfilled, we obtain that \(q_{c}^{2}>0\iff\tau>0\), which contradicts either condition (13) or (15). Therefore it is not possible to fulfill all the conditions at the same time and there is no Turing instability for symmetric mixed solutions.

### Turing of asymmetric mixed solutions

To work with asymmetric mixed solutions we will make use of the following change of variables: \(\mu=n_{1}+n_{2}\), \(\nu=n_{1}-n_{2}\). The asymmetric mixed solution is given by \(\mu^{*}=\frac{1-\alpha}{1-\beta^{2}}\) and \(\nu^{*}=\pm 2\sqrt{\frac{\omega_{\nu}-\omega}{(1-\beta)^{2}}}\), where \(\omega_{\nu}\) denotes the mortality value at which the asymmetric branch closes (\(\nu^{*}=0\)). Notice that condition (12) is only fulfilled when \(|\nu^{*}|<\mu^{*}\) and \(\mu^{*}>0\). With this change of variables, condition (13) reads:

\[\tau=\mu^{*}(1-\mu^{*}-\beta\mu^{*})-\nu^{*2}(1-\beta)<0. \tag{19}\]

Meanwhile, assuming conditions (12) and (16) are fulfilled, we can focus just on the numerator of (15) and rewrite it as:

\[\tau+\delta(J_{11}n_{2}^{*}+J_{22}n_{1}^{*})>0. \tag{20}\]

As \(\tau<0\), a necessary but not sufficient condition for this last inequality is:

\[J_{11}n_{2}^{*}+J_{22}n_{1}^{*}=\frac{1}{2}(\mu^{*2}-\nu^{*2})(1-\mu^{*}-\beta\mu^{*})>0, \tag{21}\]

and, as \(|\nu^{*}|<\mu^{*}\), this condition reduces to \(1-\mu^{*}-\beta\mu^{*}>0\), or, as \(\beta>0\), to \(\mu^{*}<\frac{1}{1+\beta}\). Substituting this last expression into (19), assuming \(\mu^{*}>0\), we arrive at the necessary condition \(\beta<1\). Now, from the same expression, and considering again \(|\nu^{*}|<\mu^{*}\), we arrive at:

\[\mu^{*}(1-\mu^{*}-\beta\mu^{*})<\nu^{*2}(1-\beta)<\mu^{*2}(1-\beta) \tag{22}\]

and therefore:

\[\mu^{*}>\frac{1}{2}. \tag{23}\]

Altogether we get \(\frac{1}{2}<\mu^{*}<\frac{1}{1+\beta}<\frac{1}{2}\), which has no solution. Therefore we conclude that there is no Turing instability of the asymmetric solutions.

_Acknowledgements._ We acknowledge financial support from project CYCLE (PID2021-123723OB-C22) funded by MCIN/AEI/10.13039/501100011033 and ERDF "A way of making Europe", the María de Maeztu project CEX2021-001164-M funded by MCIN/AEI/10.13039/501100011033, and the European Union's Horizon 2020 research and innovation programme (Grant agreement ID: 101093910, Ocean Citizen). PMS acknowledges support from the FPI grant RTI2018-095441-B-C22.
2310.02659
A note on the geometry of the two-body problem on $S^2$
Leveraging the results of arXiv:2210.13644, we carry out an investigation of the algebraic three-fold $\Sigma_{C,h}$, the common level set of the Hamiltonian and the Casimir, for the two-body problem for equal masses on $S^2$ subject to a gravitational potential of cotangent type. We determine the topology of its compactification $\overline{\Sigma}_{C,h}$ and how it bifurcates with respect to the admissible values of $(C,h)$ ($C$ being the fixed value of the Casimir and $h$ the fixed value of the Hamiltonian). This bifurcation diagram is actually equal to the bifurcation diagram that describes relative equilibria. We also prove that for $h$ sufficiently negative $\Sigma_{C,h}$ is equipped with a global contact form obtained from the environment symplectic form via a suitable Liouville vector field.
Alessandro Arsie, Nataliya A. Balabanova
2023-10-04T08:34:37Z
http://arxiv.org/abs/2310.02659v2
# A note on the geometry of the two-body problem on \(S^{2}\)

###### Abstract

Leveraging the results of [2], we carry out an investigation of the algebraic three-fold \(\Sigma_{C,h}\), the common level set of the Hamiltonian and the Casimir, for the two-body problem for equal masses on \(S^{2}\) subject to a gravitational potential of cotangent type. We determine the topology of its compactification \(\overline{\Sigma}_{C,h}\) and how it bifurcates with respect to the admissible values of \((C,h)\) (\(C\) being the fixed value of the Casimir and \(h\) the fixed value of the Hamiltonian). This bifurcation diagram is actually equal to the bifurcation diagram that describes relative equilibria. We also prove that for \(h\) sufficiently negative \(\Sigma_{C,h}\) is equipped with a global contact form obtained from the environment symplectic form via a suitable Liouville vector field.

_Keywords_: Hamiltonian systems, Dynamical systems, Contact Structure, Isoenergy surface, Topology, Compactification.

###### Contents

* 1 Introduction
* 2 Preliminaries
* 3 Topology of the compactification of \(\Sigma_{C,h}\)
* 4 Contact structure on \(\Sigma_{C,h}\)

## 1 Introduction

The history of the interactions between Geometry and Dynamics is long and distinguished ([5, 10]). On one hand Geometry provides a global framework to analyse dynamical behaviour, while on the other Dynamics has led to the discovery of new geometrical and topological structures (Symplectic/Poisson/Contact geometry and their topological aspects, just to name a few). In recent years, there has been an increased interest in using tools from Symplectic and Contact Topology to study the global structure of complicated dynamical systems. Two recent beautifully written expositions about this area are [6] and [9].

One of the main tools in this study is the realisation that often a Hamiltonian vector field on a fixed level set of the Hamiltonian is simply a positive time reparameterization of the Reeb vector field of a contact manifold (for more details see Section 4). For instance, this allows one to use results coming from Contact Topology pertaining to the existence of closed trajectories for the Reeb vector field to conclude the existence of periodic solutions of a Hamiltonian system. These techniques are definitely not perturbative, so they allow one to explore regimes that are beyond the reach of tools like bifurcation theory. For example, given a Riemannian manifold \((M,g)\) and a mechanical Hamiltonian \(H\colon T^{*}M\to\mathbb{R}\), where locally in cotangent coordinates \(H(q,p):=\frac{1}{2}|p|_{g}^{2}+V(q)\), \(|\cdot|_{g}\) is the norm induced by the inverse of \(g\) and \(V\) is a smooth function on \(M\), it is well known that if \(c>\sup(V)\), then the energy hypersurface \(\Sigma:=H^{-1}(c)\) is fibre-wise star-shaped and thus of contact type (see [6], Remark 2.6.5).

Our goal in this note follows this line of development, although it is much more limited. We study the topology of the compactification of the common level set of a Hamiltonian and a Casimir for a Hamiltonian system that describes the interaction of two equal masses on \(S^{2}\) interacting through a gravitational potential of cotangent type (the generalisation of the gravitational potential to a surface of positive constant curvature). Using an ingenious symplectic reduction developed in [4], the problem is reduced from eight dimensions to a five-dimensional Poisson manifold.
Inside this five-dimensional Poisson manifold, we consider the compactified three-fold \(\overline{\Sigma}_{C,h}\) obtained by taking the common level set of the (reduced) Hamiltonian \(\mathcal{H}=h\) and the non-trivial Casimir \(\mathcal{C}=C\). We analyse the homeomorphism type of this three-fold and how it depends on the values of \(h\) and \(C\). After having recalled some basic facts about Contact Geometry, we show that \(\Sigma_{C,h}\) is a hypersurface of contact type if \(h\) is sufficiently negative. In this case, the Hamiltonian vector field is a suitable positive time reparameterisation of the Reeb vector field. Unfortunately, since \(\Sigma_{C,h}\) is not compact, we cannot use the corresponding theorems about the existence of a closed characteristic for the Reeb vector field.

In general, it is a difficult problem to determine if a (regularised) energy hypersurface admits a compatible contact form. For instance, it has been proved in [1] that the regularised energy hypersurface of the planar restricted three-body problem is of restricted contact type for all energies below the one corresponding to the first Lagrange point and for those slightly above it. See also [9] for other examples.

This note was prompted by the desire to investigate some geometric aspects left unexplored in the work [2], where the authors analysed collision trajectories and the regularisation for the two-body problem with equal masses on \(S^{2}\) subject precisely to a gravitational potential of cotangent type.

## 2 Preliminaries

We just sketch the setup of the problem, directing the reader towards [2, 4, 7] for more details. We assume that the two bodies of equal mass (which can, without loss of generality, be assumed equal to \(1\)) are placed on the surface of a two-dimensional unit sphere \(S^{2}\) and are interacting with an attracting potential. The in-built \(SO(3)\)-symmetry of the setup allows us to perform reduction with respect to the symplectic \(SO(3)\) action on \(T^{*}S^{2}\times T^{*}S^{2}\), leaving us with a five-dimensional system in the variables \(q\), \(p\), \(m_{1}\), \(m_{2}\), \(m_{3}\). The potential \(V(q)\) is assumed to be \(-\cot(q)\). The reduced variables have the following physical meaning: \(q\) is the angle that separates the two bodies, \(p\) is its Lagrangian dual, and \(m_{1},\ m_{2},\ m_{3}\) are the coordinates of the angular momentum in the coordinate system that moves with the two bodies. For convenience, we sometimes make the substitution \(\xi=\cot(q)\).

The system has two invariant quantities: the Hamiltonian

\[\mathcal{H}=\frac{1}{2}\left(m_{1}^{2}+m_{2}^{2}-2m_{1}p+2p^{2}+\xi(-2-2m_{2}m_{3}+m_{3}^{2}\xi)+m_{3}^{2}(1+\xi^{2})\right) \tag{2.1}\]

and the Casimir

\[\mathcal{C}=m_{1}^{2}+m_{2}^{2}+m_{3}^{2}.\]

The symplectic structure on \(T^{*}S^{2}\times T^{*}S^{2}\) reduces to a Poisson structure on \(\mathbb{R}^{2}\times\mathbb{R}^{3}\), with the non-zero Poisson brackets in the reduced variables given by

\[\{m_{1},m_{2}\}=-m_{3},\qquad\{m_{2},m_{3}\}=-m_{1},\qquad\{m_{1},m_{3}\}=m_{2},\qquad\{\xi,p\}=-(\xi^{2}+1). \tag{2.2}\]
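As a quick symbolic sanity check (ours, not part of the original exposition), the relations (2.2) can be assembled into a Poisson bivector to confirm that \(\mathcal{C}\) Poisson-commutes with all coordinates, and hence with \(\mathcal{H}\):

```python
# Verify that C = m1^2 + m2^2 + m3^2 is a Casimir of the bracket (2.2).
import sympy as sp

m1, m2, m3, xi, p = sp.symbols("m1 m2 m3 xi p")
coords = [m1, m2, m3, xi, p]

# Poisson tensor P[i, j] = {coords[i], coords[j]} from (2.2).
P = sp.zeros(5, 5)
P[0, 1], P[1, 2], P[0, 2], P[3, 4] = -m3, -m1, m2, -(xi**2 + 1)
P = P - P.T                                  # enforce antisymmetry

def bracket(f, g):
    return sp.expand(sum(P[i, j] * sp.diff(f, coords[i]) * sp.diff(g, coords[j])
                         for i in range(5) for j in range(5)))

C = m1**2 + m2**2 + m3**2
H = sp.Rational(1, 2) * (m1**2 + m2**2 - 2*m1*p + 2*p**2
                         + xi*(-2 - 2*m2*m3 + m3**2*xi) + m3**2*(1 + xi**2))

print([sp.simplify(bracket(C, z)) for z in coords])   # -> [0, 0, 0, 0, 0]
print(sp.simplify(bracket(C, H)))                     # -> 0
```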
## 3 Topology of the compactification of \(\Sigma_{C,h}\)

In this Section we study the compactification of the invariant variety given by the intersection of the energy hypersurface \(\Sigma_{h}\) and the Casimir hypersurface \(\Sigma_{C}\) in \(\mathbb{R}^{5}\). We show that topologically this is either \(S^{1}\times S^{2}\) or the connected sum of three copies of \(S^{1}\times S^{2}\). This analysis is partly inspired by the ideas detailed in [3]. One important difference, however, is that the system we are considering is not integrable in general (see [11]), therefore it does not give rise to a two-dimensional Lagrangian foliation.

The energy hypersurface \(\Sigma_{h}\) is given by:

\[\Sigma_{h}:=\left\{(\xi,p,m_{1},m_{2},m_{3})\mid\frac{1}{2}\left(m_{1}^{2}+m_{2}^{2}-2m_{1}p+2p^{2}+\xi(-2-2m_{2}m_{3}+m_{3}^{2}\xi)+m_{3}^{2}(1+\xi^{2})\right)=h\right\},\]

while the Casimir hypersurface \(\Sigma_{C}\) is given by:

\[\Sigma_{C}:=\{(m_{1},m_{2},m_{3},\xi,p)\in\mathbb{R}^{5},\text{ such that }m_{1}^{2}+m_{2}^{2}+m_{3}^{2}=C\}.\]

Notice that \(\Sigma_{C}\) is non-empty only for \(C\geq 0\), and for \(C>0\) it is a four-dimensional cylinder in \(\mathbb{R}^{5}\) of the form \(S_{C}^{2}\times\mathbb{R}^{2}\), where \((\xi,p)\in\mathbb{R}^{2}\) and \(S_{C}^{2}\) is the two-dimensional sphere of radius \(\sqrt{C}\) in \(\mathbb{R}^{3}\) (the space of \((m_{1},m_{2},m_{3})\)). Of course \(S_{C}^{2}\) reduces to a point for \(C=0\). For the moment we consider the case \(C>0\).

The invariant three-dimensional variety on which the dynamics unfolds is given by \(\Sigma_{C,h}:=\Sigma_{C}\cap\Sigma_{h}\subset\mathbb{R}^{5}\). In this case, there is a natural projection \(\pi:\Sigma_{C}\to S_{C}^{2}\). Restricting \(\pi\) to \(\Sigma_{C,h}\subset\Sigma_{C}\), we obtain a projection which we still denote by \(\pi:\Sigma_{C,h}\to S_{C}^{2}\). As we shall see below, the image of this projection is either \(S_{C}^{2}\), or \(S_{C}^{2}\setminus(\Delta_{1}\cup\Delta_{2})\), or \(S_{C}^{2}\setminus\cup_{i=1}^{4}\Delta_{i}\), where the \(\Delta_{i}\) are open disks. In order to determine the topology of the compactification of \(\Sigma_{C,h}\) we study the inverse image \(\pi^{-1}(P)\) as \(P\) varies in \(S_{C}^{2}\). Consider the equation for the Hamiltonian (2.1) rewritten as a sum of two squares using the Casimir:

\[2\left(p-\frac{m_{1}}{2}\right)^{2}+2\left(m_{3}\xi-\frac{m_{2}m_{3}+1}{2m_{3}}\right)^{2}=2h-C+\frac{m_{1}^{2}}{2}+2\left(\frac{m_{2}m_{3}+1}{2m_{3}}\right)^{2} \tag{3.1}\]

Using (3.1), we prove:

**Lemma 3.1**.: _If a point \(P\in S^{2}_{C}\) belongs to the image of the projection \(\pi:\Sigma_{C,h}\to S^{2}_{C}\), its pre-image in \(\Sigma_{C,h}\) is one of the following:_

1. _an ellipse, if_ \(P\) _lies strictly inside_ \(\mathrm{Im}\,\pi\) _and not on the equator_ \(m_{3}=0\)_;_
2. _a parabola, if_ \(P\) _belongs to the equator_ \(m_{3}=0\)_;_
3. _a point, if_ \(P\) _belongs to the boundary of_ \(\mathrm{Im}\,\pi\)_._

Proof.: First we prove that the circle \(\{m_{3}=0\}\subset S^{2}_{C}\) is always in the image of \(\pi\). Indeed, set \(m_{3}=0\) and consider a point \(P=(m_{1},m_{2},0)\in S^{2}_{C}\). Then using (2.1) and the Casimir, \(\pi^{-1}(P)\) is described by the parabola

\[\mathcal{C}_{C,h,m_{1}}:=\{(p,\xi)\in\mathbb{R}^{2}\text{ such that }C-2m_{1}p+2p^{2}-2\xi=2h\} \tag{3.2}\]

inside \(\mathbb{R}^{2}\) with coordinates \((\xi,p)\), while \((C,h,m_{1})\) are all fixed. Incidentally, these are precisely the fibres in the projection \(\pi\) that cause \(\Sigma_{C,h}\) to be non-compact. If \(P\) is strictly inside the image of the projection \(\pi\) and not on the equator \(\{m_{3}=0\}\), then the right hand side of (3.1) is strictly positive and therefore \(\pi^{-1}(P)\) is indeed an ellipse in the plane with coordinates \((\xi,p)\). If a point \(P\) belongs to the boundary of the image of \(\pi\), then the right hand side of (3.1) vanishes and this corresponds to a unique point in the plane with coordinates \((\xi,p)\). Finally, if the right hand side of (3.1) is strictly negative, then \(P\) does not belong to the image of \(\pi\).
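The completion of squares leading to (3.1) is mechanical and can be confirmed symbolically: on the Casimir level set, the difference of the two sides of (3.1) equals \(2(\mathcal{H}-h)\). A short sympy check (ours):

```python
# Symbolic check of the sum-of-squares identity (3.1): with the Casimir value
# C = m1^2 + m2^2 + m3^2, the identity is equivalent to H = h.
import sympy as sp

m1, m2, m3, xi, p, h = sp.symbols("m1 m2 m3 xi p h")
C = m1**2 + m2**2 + m3**2

H = sp.Rational(1, 2) * (m1**2 + m2**2 - 2*m1*p + 2*p**2
                         + xi*(-2 - 2*m2*m3 + m3**2*xi) + m3**2*(1 + xi**2))

lhs = 2*(p - m1/2)**2 + 2*(m3*xi - (m2*m3 + 1)/(2*m3))**2
rhs = 2*h - C + m1**2/2 + 2*((m2*m3 + 1)/(2*m3))**2

# (lhs - rhs) should equal 2*(H - h) identically (for m3 != 0):
print(sp.simplify((lhs - rhs) - 2*(H - h)))   # -> 0
```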
As stated in the proof of Lemma 3.1, the preimage via \(\pi\) of the circle \(\{m_{3}=0\}\subset S^{2}_{C}\) is responsible for the non-compactness of \(\Sigma_{C,h}\). However, that can be easily remedied by adding one point to each parabola and thus making them diffeomorphic to circles. In this way, we get a new compactified variety \(\overline{\Sigma}_{C,h}\), which we proceed to project onto \(S^{2}_{C}\) in the same manner; now the preimage of each point on the sphere is either empty, or a point, or a curve diffeomorphic to a circle.

**Proposition 3.2**.: _The compactification \(\overline{\Sigma}_{C,h}\) is a hypersurface in \(S^{2}\times S^{2}_{C}\)._

Figure 3.2: When \(\pi(\overline{\Sigma}_{C,h})\) is a sphere with two holes (i.e. a cylinder), the preimage of every 'vertical' interval on this cylinder is a sphere, entailing \(\overline{\Sigma}_{C,h}\simeq S^{1}\times S^{2}\).

Proof.: To formalise what we have said above, we apply the inverse stereographic projection to the \((\xi,p)\) plane: the North pole will serve as the 'additional' point we glue to the parabolae in order to close them. We substitute \(\xi=\frac{x}{1-z},\ p=\frac{y}{1-z}\) and multiply \(\mathcal{H}\) by \((1-z)^{2}\). In doing so, we describe \(\overline{\Sigma}_{C,h}\), the compactification of \(\Sigma_{C,h}\), as the common level set of three polynomials in \(\mathbb{R}^{6}\). After a straightforward computation, these are given by:

\[\overline{\Sigma}_{C,h}=\begin{cases}m_{1}^{2}+m_{2}^{2}+m_{3}^{2}=C,\\ x^{2}+y^{2}+z^{2}=1,\\ (m_{1}^{2}+m_{2}^{2}+m_{3}^{2}-2h)(1-z)^{2}+2y^{2}-2m_{1}y(1-z)+2m_{3}^{2}x^{2}-2x(1-z)(1+m_{2}m_{3})=0,\end{cases} \tag{3.3}\]

which is indeed a hypersurface in \(S^{2}\times S^{2}_{C}\).

In [4], the authors present a diagram in the \((C^{2},h)\)-plane (see Figure 9 (b) in the aforementioned work) that describes the bifurcation of relative equilibria (in our case, equilibria in the reduced coordinates \((m_{1},m_{2},m_{3},\xi,p)\)) according to different values of \(h\) and \(C^{2}\). In particular, the curves in that diagram describe the pairs \((C^{2},h)\) for which relative equilibria exist. Outside those curves, relative equilibria do not exist. For convenience we call the union of all those curves \(\gamma\). In Subsection 3.1 we are going to prove that this bifurcation diagram is also the bifurcation diagram that controls the change in the topology of \(\overline{\Sigma}_{C,h}\).

**Lemma 3.3**.: _The variety \(\Sigma_{C,h}\) is smooth whenever \((C^{2},h)\notin\gamma\)._

Proof.: We need to show that the two differentials \(\mathrm{d}\mathcal{H}\) and \(\mathrm{d}\mathcal{C}\) are linearly independent on \(\Sigma_{h}\cap\Sigma_{C}\). First, if \((C^{2},h)\notin\gamma\), then the Hamiltonian vector field of \(\mathcal{H}\) is never zero on \(\Sigma_{C,h}\), since if it were zero somewhere, it would give rise to relative equilibria. Note that the Poisson structure \(\sigma\) (see equation (2.2)) has rank \(4\) everywhere on the points of \(\Sigma_{C,h}\), since in particular it has rank \(4\) on \(\Sigma_{C}\). Therefore its kernel is one-dimensional and it is spanned by \(d\mathcal{C}\) on the points of \(\Sigma_{C,h}\), since \(d\mathcal{C}\) is nowhere vanishing there.
Therefore, both \(d\mathcal{H}\) and \(d\mathcal{C}\) are nowhere vanishing on \(\Sigma_{C,h}\). Thus they are linearly dependent iff they are proportional via a nowhere-zero factor. On the other hand, if \(d\mathcal{H}\) were proportional to \(d\mathcal{C}\) at a point of \(\Sigma_{C,h}\), then the corresponding Hamiltonian vector field would have an equilibrium there, while we are considering the pairs \((C,h)\) for which relative equilibria do not exist (i.e. equilibria for the Hamiltonian system in \((m_{1},m_{2},m_{3},\xi,p)\)). This proves that \(\Sigma_{C,h}\) is indeed smooth if the pair \((C^{2},h)\notin\gamma\).

**Lemma 3.4**.: _Suppose the pair \((C^{2},h)\notin\gamma\). Then \(\mathrm{Sing}(\overline{\Sigma}_{C,h})=\overline{\Sigma}_{C,h}\setminus\Sigma_{C,h}\), where \(\mathrm{Sing}\) is the singularity locus._

Proof.: By Lemma 3.3, it is clear that \(\mathrm{Sing}(\overline{\Sigma}_{C,h})\subset\overline{\Sigma}_{C,h}\setminus\Sigma_{C,h}\). Furthermore, as a variety, \(\overline{\Sigma}_{C,h}\) fails to be smooth precisely at those points that belong to the common level set (3.3) for which the Jacobian matrix (or its transpose) of the system (3.3) fails to have maximum rank. This matrix is given by

\[\begin{bmatrix}2x&0&4\,{m_{3}}^{2}x-(2-2z)\,(1+m_{2}\,m_{3})\\ 2y&0&4\,y-2\,m_{1}\,(1-z)\\ 2z&0&-\,(2\,C-4\,h)\,(1-z)+2\,m_{1}\,y+2\,x\,(1+m_{2}\,m_{3})\\ 0&2m_{1}&-2\,y\,(1-z)\\ 0&2m_{2}&-2\,x\,(1-z)\,m_{3}\\ 0&2m_{3}&4\,m_{3}\,x^{2}-2\,x\,(1-z)\,m_{2}\end{bmatrix} \tag{3.4}\]

It is clear that the rank of this matrix drops when \(z=1\). These are precisely the points that are added to \(\Sigma_{C,h}\) in order to make it compact. Outside those points, by Lemma 3.3 the matrix above necessarily has full rank, provided the pair \((C^{2},h)\notin\gamma\).

**Proposition 3.5**.: _Let \(\pi:\overline{\Sigma}_{C,h}\to S_{C}^{2}\) be the usual projection from the compactified variety. Then \(\operatorname{Im}\pi\) can be:_

1. _a sphere with no holes;_
2. _a sphere with two holes;_
3. _a sphere with four holes._

Proof.: The proof is purely computational. Observe that in order for the preimage of a point \(P\in S_{C}^{2}\) not to be empty, the right hand side of (3.1) must be non-negative at \(P\), i.e.

\[2h-C+\frac{m_{1}^{2}}{2}+2\left(\frac{m_{2}m_{3}+1}{2m_{3}}\right)^{2}\geq 0. \tag{3.5}\]

Since we are considering a subset of \(S_{C}^{2}\), \(m_{1}^{2}\) may be replaced by \(C-m_{2}^{2}-m_{3}^{2}\) in the expression above, and since (3.5) is even in \(m_{1}\) we do not need to take the square root or choose the sign. Furthermore, once we have substituted \(m_{1}^{2}=C-m_{2}^{2}-m_{3}^{2}\) in the expression above, only \(m_{2}\) and \(m_{3}\) appear, and effectively we have projected \(\pi(\overline{\Sigma}_{C,h})\) to the \((m_{2},m_{3})\)-plane. The image of this projection will then be contained inside the disk \(m_{2}^{2}+m_{3}^{2}\leq C\). The expression (3.5) turns into

\[2h-\frac{C}{2}-\frac{m_{3}^{2}}{2}+\frac{1}{2m_{3}^{2}}+\frac{m_{2}}{m_{3}}\geq 0.\]

The image of \(\pi(\overline{\Sigma}_{C,h})\) projected to the \((m_{2},m_{3})\)-plane is clearly the intersection of this region with the disk \(m_{2}^{2}+m_{3}^{2}\leq C\).
The right hand side of (3.1) is even as a function of \(m_{1}\); this means that with the projection to the \((m_{2},m_{3})\)-plane there are no overlaps and no holes are lost; thus, we can deduce the number of holes in \(\pi(\overline{\Sigma}_{C,h})\) from the system of equations

\[\begin{cases}m_{2}^{2}+m_{3}^{2}=C;\\ 2h-\frac{C}{2}-\frac{m_{3}^{2}}{2}+\frac{1}{2m_{3}^{2}}+\frac{m_{2}}{m_{3}}=0.\end{cases} \tag{3.6}\]

Expressing \(\frac{m_{2}^{2}}{m_{3}^{2}}=\frac{C}{m_{3}^{2}}-1\), substituting it into the second equation, squaring, and subsequently multiplying by \(4m_{3}^{4}\) yields an 8th-degree polynomial equation in \(m_{3}\):

\[\frac{m_{3}^{8}}{4}+\left(\frac{C}{2}-2h\right)m_{3}^{6}+\left(\frac{C^{2}}{4}-2Ch+4h^{2}+\frac{1}{2}\right)m_{3}^{4}+\left(2h-\frac{3C}{2}\right)m_{3}^{2}+\frac{1}{4}=0 \tag{3.7}\]

This can be viewed as a polynomial in \(m_{3}^{2}\), and thus the corresponding equation can only have an even number of solutions (zero can't be among them). To the same end, one can observe that the system (3.6) is symmetric with respect to the transformation \((m_{2},m_{3})\mapsto(-m_{2},-m_{3})\) and therefore has an even number of solutions, each of which corresponds to an intersection of two curves. Therefore, the number of holes can be 0, 2 or 4, depending on the values of \(h\) and \(C\). All of these cases take place for varying values of \(h\) and \(C\) and are depicted in Figure 3.1. For a discussion of the bifurcation diagram pertaining to the appearance of these holes in the image of \(\pi\) see Subsection 3.1. There we will also show that this bifurcation diagram coincides with the bifurcation diagram describing the existence of relative equilibria in [4].

**Theorem 3.6**.: \(\overline{\Sigma}_{C,h}\) _is homeomorphic to one of the following manifolds:_

1. \(S^{1}\times S^{2}\)_, if_ \(\pi:\overline{\Sigma}_{C,h}\to S^{2}_{C}\) _is surjective or its image is the sphere minus two open disks;_
2. \(\left(S^{1}\times S^{2}\right)\#\left(S^{1}\times S^{2}\right)\#\left(S^{1}\times S^{2}\right)\)_, if the image of_ \(\pi:\overline{\Sigma}_{C,h}\to S^{2}_{C}\) _is the sphere minus four open disks, where_ \(M\#N\) _denotes the connected sum of the manifolds_ \(M\) _and_ \(N\)_._

Proof.: **Case 1** The simplest case is when \(\pi\) is surjective. To be clear, we denote by \(S^{2}_{C}\) the sphere corresponding to the Casimir and by \(S^{2}\) the sphere corresponding to the compactification of \(\mathbb{R}^{2}\). By Proposition 3.2 we already know that \(\overline{\Sigma}_{C,h}\) is a hypersurface in \(S^{2}\times S^{2}_{C}\). For each \(p\in S^{2}_{C}\), we have a corresponding simple closed curve in \(S^{2}\) given explicitly by an equation. Let \(\phi_{p}:S^{1}\to S^{2}\) be a continuous parameterisation of this curve. It depends continuously on \(p\in S^{2}_{C}\), and therefore \(\Psi:S^{1}\times S^{2}_{C}\to S^{2}\times S^{2}_{C}\) given by \(\Psi(t,p)=(\phi_{p}(t),p)\) is continuous. Furthermore, notice that \(\phi_{p}\) is a homeomorphism onto its image for each \(p\in S^{2}_{C}\) and that \(pr_{2}\circ\Psi=pr_{2}\). Therefore, \(\Psi\) is a homeomorphism onto its image \(W:=\Psi(S^{1}\times S^{2}_{C})=\overline{\Sigma}_{C,h}\subset S^{2}\times S^{2}_{C}\). Now \(\pi:\overline{\Sigma}_{C,h}\to S^{2}_{C}\) is the same as \(pr_{2}|_{W}\). Thus \(\Psi^{-1}:W\to S^{1}\times S^{2}_{C}\) is a global continuous trivialisation with \(pr_{2}\circ\Psi=pr_{2}|_{W}\).
Figure 3.3: Transforming \(K\) into a disk with cut-out lunettes in the case when \(\pi(\overline{\Sigma}_{C,h})\) is a disk with three holes.

**Case 2** In the second case, the image of \(\pi\) is \(S^{2}\setminus(\Delta_{1}\cup\Delta_{2})\), where \(\Delta_{1}\) and \(\Delta_{2}\) are open disks. Therefore the image of \(\pi\) is diffeomorphic to the compact cylinder \(S^{1}\times[0,1]\). If we also remove the boundaries of these disks and apply Case 1, we see that what we obtain is homeomorphic to the trivial fibration with base the open cylinder \(S^{1}\times(0,1)\) and fibre \(S^{1}\). Over each boundary point of this cylinder, the inverse image of \(\pi\) is given by a point, while over each point in \(S^{1}\times(0,1)\) the inverse image of \(\pi\) is a circle. Now if we look at the inverse image under \(\pi\) of any ruling \(Q\times[0,1]\), where \(Q\) is a fixed point in \(S^{1}\), we get that \(\pi^{-1}(Q\times[0,1])\) is homeomorphic to a sphere, as it is immediate to see (cf. Figure 3.2). As \(Q\) varies in \(S^{1}\), it follows that in this case \(\overline{\Sigma}_{C,h}\) is homeomorphic to \(S^{1}\times S^{2}\).

**Case 3** The third case is the most involved. Before dealing with it, we introduce some preliminary constructions. First observe that we can represent a \(3\)-dimensional sphere \(S^{3}\) as a fibration \(\Pi\colon S^{3}\to\overline{\Delta}\) over a closed disk: the fibres over the interior points are homeomorphic to \(S^{1}\) and those over the boundary points are just points. This is given by

\[\Pi\colon\{x\in\mathbb{R}^{4}:x_{1}^{2}+x_{2}^{2}+x_{3}^{2}+x_{4}^{2}=1\}\to\{(x_{1},x_{2})\in\mathbb{R}^{2}:x_{1}^{2}+x_{2}^{2}\leq 1\}.\]

Now consider a closed lunette \(L\subset\overline{\Delta}\), like the ones we cut from a two-dimensional disk in the bottom left part of Figure 3.3, and the inverse image \(\Pi^{-1}(L)\). It is clear that this is just homeomorphic to a \(3\)-dimensional ball \(B^{3}\), since the inverse image under \(\Pi\) of every point in \(L\setminus\partial\overline{\Delta}\) is a circle, while above every point in \(L\cap\partial\overline{\Delta}\) the inverse image is a point. So \(\Pi^{-1}(L)\) is like having the lunette rotate by \(360\) degrees around the segment \(L\cap\partial\overline{\Delta}\).

For analysing the third case, we also need the following preliminary construction. With reference to the previous paragraph, consider again \(\Pi:S^{3}\to\overline{\Delta}\). This time we consider two disjoint closed lunettes \(L_{1},L_{2}\) in \(\overline{\Delta}\). Consider \(\Pi^{-1}(\overline{\Delta}\setminus(L_{1}\cup L_{2}))\); this is of course just \(S^{3}\) with two disjoint closed balls \(B^{3}_{1},B^{3}_{2}\) removed. We claim that if we glue \(S^{3}\) identifying the two boundaries \(\partial B^{3}_{1},\partial B^{3}_{2}\), the resulting manifold is homeomorphic to \(S^{1}\times S^{2}\). To see this, represent \(S^{3}\) in \(\mathbb{R}^{4}\) as \(S^{3}=\{x\in\mathbb{R}^{4}:x_{1}^{2}+x_{2}^{2}+x_{3}^{2}+x_{4}^{2}=1\}\) and the two closed balls as \(B^{3}_{1}:=\{x\in S^{3}:x_{1}^{2}+x_{2}^{2}+x_{3}^{2}\leq\epsilon^{2},\ x_{4}>0\}\) and \(B^{3}_{2}:=\{x\in S^{3}:x_{1}^{2}+x_{2}^{2}+x_{3}^{2}\leq\epsilon^{2},\ x_{4}<0\}\).
Using the meridians corresponding to the coordinate \(x_{4}\), it is clear that any point \(Q\) in \(S^{3}\setminus(B^{3}_{1}\cup B^{3}_{2})\), after the gluing of the \(2\)-dimensional spheres \(\partial B^{3}_{1}\) and \(\partial B^{3}_{2}\), can be represented by a point \(P(Q)\) on \(S^{2}\) (obtained by intersecting the meridian going through \(Q\) with \(S^{2}\)) and a coordinate on the meridian. But under the identification of \(\partial B^{3}_{1}\) and \(\partial B^{3}_{2}\), each meridian becomes a circle (each meridian intersects the two boundaries at two different points, which are then identified under gluing). Therefore any point \(Q\) on \(S^{3}\setminus(B^{3}_{1}\cup B^{3}_{2})/\) (where \(/\) represents the operation of gluing) is uniquely identified with \((P(Q),\theta(Q))\in S^{2}\times S^{1}\). Thus \(S^{3}\setminus(B^{3}_{1}\cup B^{3}_{2})/\) is homeomorphic to \(S^{2}\times S^{1}\). For illustration, see Figure 3.4.

If instead of two lunettes in \(\overline{\Delta}\) we have four that are removed, we have correspondingly four balls \(B^{3}\) removed from \(S^{3}\). We argue that the manifold obtained by removing these four balls from \(S^{3}\) and identifying the four boundary spheres \(\partial B^{3}\) pairwise is homeomorphic to the connected sum of two copies of \(S^{1}\times S^{2}\). Moving these balls on \(S^{3}\), it is always possible to arrange them in such a way that one pair of the boundary spheres \(\partial B^{3}\) that are identified lies on one of the hemispheres cut out on \(S^{3}\) by a hyperplane containing the \(x_{4}\)-axis, while the other pair of boundary spheres lies on the opposite hemisphere. We can cut \(S^{3}\) along this hyperplane and represent it as a connected sum \(S^{3}\#S^{3}\). If we separate each of these spheres, then on each of them we can remove two balls, identify the corresponding pair of boundaries, and obtain by the construction detailed above a manifold homeomorphic to \(S^{1}\times S^{2}\). Gluing them again, we get the claim.

We are now in a position to easily deal with the third case. In this case the image of \(\pi\) is \(S^{2}\) with four open disks removed; call it \(K\). Using stereographic projection with centre inside one of these disks, \(K\) can be presented in the plane as in the upper left part of Figure 3.3 (\(K\) is the inside of the figure-eight-like configuration). In order to understand the topology of \(\pi^{-1}(K)\), we cut \(K\) along the segment shown in Figure 3.3, so that \(K\) turns out to be homeomorphic to a closed disk \(\overline{\Delta}\) in which six small open half-circles \(\{C_{1},\ldots,C_{6}\}\) on the boundary are singled out and identified pairwise (see the lower right part of Figure 3.3, where the half-circles are dashed). Observe that the inverse image \(\pi^{-1}(\overline{C_{i}})\) for each \(i\) is indeed homeomorphic to a sphere, since over any point in \(C_{i}\) there is a circle, while over any point in \(\partial C_{i}\) (which is also a boundary point of the disk and a boundary point of the original \(K\)) there is a point. These spheres are to be identified pairwise to produce \(\overline{\Sigma}_{C,h}\). Now, instead of singling out the six small half-circles, we add six small lunettes (see the lower left part of Figure 3.3). Each of these lunettes is bounded on one side by one of the \(C_{i}\) and on the other side by a piece of the boundary of the disk, in this way obtaining a full closed disk \(\overline{\Delta}\).
Over each point inside the disk there is a circle, while over each point on the boundary there is a point. So the inverse image of the disk is indeed \(S^{3}\). The inverse image of each of these lunettes in \(S^{3}\) is again a ball \(B^{3}\). When we glue \(S^{3}\) pairwise along the boundaries \(\partial B^{3}\) of these balls (these are just the inverse images of the \(\overline{C_{i}}\)'s we singled out before), we get that the resulting manifold is homeomorphic to the connected sum \((S^{1}\times S^{2})\#(S^{1}\times S^{2})\#(S^{1}\times S^{2})\) (one copy of \(S^{1}\times S^{2}\) for each pair of spheres that is identified).

The degenerate case \(C=0\) is immediate. In this case indeed, \(m_{1}=m_{2}=m_{3}=0\) and the Hamiltonian gives an invariant curve in the \((\xi,p)\) plane given by \(\{p^{2}-\xi=h\}\), that is, a family of parabolae depending on \(h\). So the invariant manifold \(\Sigma_{0,h}\) is one-dimensional and is not compact in this case either. It can be compactified by adding a point at infinity, so \(\overline{\Sigma}_{0,h}\) is homeomorphic to a circle.

### 3.1 Bifurcation diagram

Changes in the topology of \(\overline{\Sigma}_{C,h}\) depend on the values of \(h\) and \(C\). In this section, we discuss the corresponding bifurcation diagram. As we explain below, this diagram turns out to be the same as the one appearing in Figure 9 (b) of [4] describing bifurcation and existence of relative equilibria for different values of \((C,h)\). The computations are laborious, so we present just the essential points.

We consider the polynomial (3.7) in \(m_{3}\). Since the number of holes in the image of \(\pi\) is determined by the number of its roots, the bifurcations happen when this polynomial has zeros of multiplicity greater than one. Since (3.7) is an even function of \(m_{3}\), we rewrite it as a polynomial of fourth degree in \(y=m_{3}^{2}\). We proceed to divide it by the factors \((y-y_{0})^{2}\) and \((y-y_{0})^{3}\) and demand that in each case all the coefficients in the remainders (which are polynomial functions of \(y_{0}\)) have a common positive root \(y_{0}^{*}\). For the case of division by \((y-y_{0})^{3}\), computations (which we omit for the sake of brevity) yield only one possible value of the parameters: \((C,h)=(2,1)\). This turns out to be the point where the two curves separate. When dividing by \((y-y_{0})^{2}\), we get a remainder of the form

\[\begin{split}&y\left(2C^{2}y_{0}-16Chy_{0}+6Cy_{0}^{2}-6C+32h^{2}y_{0}-24hy_{0}^{2}+8h+4y_{0}^{3}+4y_{0}\right)\\ &-C^{2}y_{0}^{2}+8Chy_{0}^{2}-4Cy_{0}^{3}-16h^{2}y_{0}^{2}+16hy_{0}^{3}-3y_{0}^{4}-2y_{0}^{2}+1\end{split} \tag{3.8}\]

The constant (in \(y\)) coefficient simplifies to

\[-(y_{0}(C-4h+y_{0})+1)(y_{0}(C-4h+3y_{0})-1),\]

with roots

\[\begin{cases}y_{0}=\frac{1}{6}\left(-\sqrt{(C-4h)^{2}+12}-C+4h\right)\\ y_{0}=\frac{1}{6}\left(\sqrt{(C-4h)^{2}+12}-C+4h\right)\\ y_{0}=\frac{1}{2}\left(-\sqrt{(C-4h)^{2}-4}-C+4h\right)\\ y_{0}=\frac{1}{2}\left(\sqrt{(C-4h)^{2}-4}-C+4h\right)\end{cases} \tag{3.9}\]

The first expression above is always negative; the second always positive; the last two are positive when \(4h-C>0\) and negative otherwise. Additionally, they only exist when \((C-4h)^{2}\geq 4\).
Substituting the second and the last two (assuming they are positive) expressions from (3.9) into the coefficient of \(y\) from (3.8), taking into account the existence conditions described above, and demanding that the coefficient in question be \(0\), we get the bifurcation diagram in Figure 3.5 (the tangent line stops at the point \(C=2,h=1\) precisely because \((C-4h)^{2}\geq 4\)). The two equations in \(h\) and \(C\) that describe the curves in the bifurcation diagram are as follows:

\[\begin{cases}\frac{1}{54}\left(\sqrt{(C-4h)^{2}+12}-C+4h\right)^{3}+\frac{1}{6}(C-4h)\left(\sqrt{(C-4h)^{2}+12}-C+4h\right)^{2}\\ +\frac{1}{3}\left((C-4h)^{2}+2\right)\left(\sqrt{(C-4h)^{2}+12}-C+4h\right)-6C+8h=0,\\ h=\frac{C}{2}.\end{cases} \tag{3.10}\]

Figure 3.5: Bifurcation diagram.

As we have mentioned, this bifurcation diagram seems to closely resemble that for relative equilibria in [4] (Figure 9 (b)). More precisely,

**Proposition 3.7**.: _The two bifurcation diagrams coincide._

Proof.: The bifurcation diagram in [4] is constructed in the following manner: two types of relative equilibria can be parametrised by \(m_{3}\) and \(q\); their explicit forms are

\[\begin{cases}m_{1}=p=0,\\ m_{3}=\pm\sqrt{\tan\left(\frac{q}{2}\right)},\\ m_{2}=\mp\tan\left(\frac{q}{2}\right)^{\frac{3}{2}},\end{cases}\qquad\begin{cases}q=\frac{\pi}{2},\\ m_{1}=p=0,\\ m_{2}=-\frac{1}{m_{3}}.\end{cases} \tag{3.11}\]

These expressions, when substituted into the formulae for \(\mathcal{C}\) and \(\mathcal{H}\), give two curves in the \((C,h)\) plane, parametrised respectively by \(q\) and \(m_{3}\):

\[\begin{cases}\left\{C(q),h(q)\right\}=\left\{\tan^{3}\left(\frac{q}{2}\right)+\tan\left(\frac{q}{2}\right),\frac{1}{2}\left(\tan^{3}\left(\frac{q}{2}\right)+\tan\left(\frac{q}{2}\right)\right)+\cot(q)\left(\tan^{2}\left(\frac{q}{2}\right)+\tan\left(\frac{q}{2}\right)\cot(q)-1\right)\right\},\\ \left\{C(m_{3}),h(m_{3})\right\}=\left\{m_{3}^{2}+\frac{1}{m_{3}^{2}},\frac{1}{2}\left(m_{3}^{2}+\frac{1}{m_{3}^{2}}\right)\right\}.\end{cases} \tag{3.12}\]

On the other hand, the equations (3.10) for the bifurcations of the topology of \(\Sigma_{C,h}\) are formulated explicitly through \(C\) and \(h\); we want to demonstrate that the loci of (3.10) and (3.12) coincide. Let us ascertain that the first equations from the two pairs describe the same curve. To do so, we substitute \(C=\tan^{3}\left(\frac{q}{2}\right)+\tan\left(\frac{q}{2}\right)\) and \(h=\frac{1}{2}\left(\tan^{3}\left(\frac{q}{2}\right)+\tan\left(\frac{q}{2}\right)\right)+\cot(q)\left(\tan^{2}\left(\frac{q}{2}\right)+\tan\left(\frac{q}{2}\right)\cot(q)-1\right)\) into the first equation in (3.10). Simplifying this expression, we get identically \(0\), which proves that the locus described by (3.12) is contained in the one described by (3.10). To prove the reverse inclusion, it is enough to see that when \(q\in[0,\pi]\), \(C\) ranges from \(0\) to \(+\infty\) and \(h\) from \(-\infty\) to \(+\infty\).

The pair of second equations is simpler: it is clear that they both describe parts of the line \(C=2h\). However, the restrictions imposed by the square roots in (3.9) stipulate that the part in question is a ray starting at the point \((C,h)=(2,1)\). In order to complete our proof, we need to show that the part of the line described by the second equation in (3.12) is the same ray. This can be easily seen by finding the minimum of the function \(C(m_{3})=m_{3}^{2}+\frac{1}{m_{3}^{2}}\); it is achieved when \(m_{3}=\pm 1\), and \(C(\pm 1)=2\). Thus, the two bifurcation diagrams are the same.
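As a numerical companion to this subsection (our own check, not the authors' computation), one can count the positive roots \(y=m_{3}^{2}\) of the quartic coming from (3.7) for sample values of \((C,h)\); the count changes exactly when \((C,h)\) crosses the curves of Figure 3.5. Since (3.7) was obtained by squaring, borderline roots should be re-checked against the sign condition in (3.6):

```python
# Count the positive roots y = m3^2 of the quartic from Eq. (3.7):
# y^4/4 + (C/2 - 2h) y^3 + (C^2/4 - 2Ch + 4h^2 + 1/2) y^2
#       + (2h - 3C/2) y + 1/4 = 0.
import numpy as np

def positive_roots(C, h, tol=1e-9):
    coeffs = [0.25, C / 2 - 2 * h, C**2 / 4 - 2 * C * h + 4 * h**2 + 0.5,
              2 * h - 1.5 * C, 0.25]
    roots = np.roots(coeffs)
    # Squaring may introduce spurious roots; re-check (3.6) in edge cases.
    return sorted(r.real for r in roots
                  if abs(r.imag) < tol and r.real > tol)

for C, h in [(1.0, -1.0), (3.0, 1.4), (6.0, 1.0)]:   # arbitrary sample values
    ys = positive_roots(C, h)
    print(f"C={C}, h={h}: positive roots y = {np.round(ys, 4)}")
```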
## 4 Contact structure on \(\Sigma_{C,h}\)

In this Section, we are going to prove that \(\Sigma_{C,h}\) is of contact type when its projection to \(S_{C}^{2}\) is diffeomorphic to a cylinder and \(h\) is sufficiently negative (see Theorem 4.5). For the sake of completeness, we first summarize a few results from contact geometry (see [1, 8] and many others).

**Definition 4.1**.: _Let \(X\) be a manifold of dimension \(2n-1\). A contact form \(\alpha\) on \(X\) is a 1-form such that \(\alpha\wedge d\alpha^{n-1}\) is a nowhere vanishing volume form on \(X\). \((X,\alpha)\) is called a (co-oriented) contact manifold._

In contact geometry, one is often interested in a weaker notion, that of a contact structure on \(X\): this is just a hyperplane distribution \(\eta\subset TX\) which is maximally non-integrable (locally \(\eta=\ker(\alpha)\), where \(\alpha\) is a (local) 1-form, and the condition \(\alpha\wedge d\alpha^{n-1}\neq 0\) means that \(\eta\) is maximally non-integrable). Notice that if \(\alpha\) is a contact form, then \(d\alpha_{|\eta}\) is symplectic on \(\eta\). For our purposes, we will always use the stronger notion of a co-oriented contact manifold \((X,\alpha)\), since this is the notion of contact manifold related to dynamics. Indeed, given \((X,\alpha)\) there is associated to it a canonical vector field, called the Reeb vector field \(R_{\alpha}\), which is defined via the two conditions \(i_{R_{\alpha}}d\alpha=0\) and \(i_{R_{\alpha}}\alpha=1\). Let us remark that the Reeb vector field is always transverse to the contact distribution \(\eta:=\ker(\alpha)\) induced by \(\alpha\) since, as we remarked above, \(d\alpha_{|\eta}\) is symplectic.

It turns out that in some situations the Reeb vector field is just a (positive) time reparametrisation of a Hamiltonian vector field. For this to occur, let \((M,\omega)\) be a symplectic manifold, let \(H:M\to\mathbb{R}\) be a smooth Hamiltonian and let \(X:=H^{-1}(e)\), where \(e\) is a regular value of \(H\). Assume that there exists a tubular neighborhood \(U\) of \(X\) in \(M\) such that \(\omega_{|U}\) is exact, say equal to \(d\lambda\). Thus on \(U\) it makes sense to look for a vector field \(V\) such that \(i_{V}\omega=\lambda\). Any such vector field is called a _Liouville_ vector field. In particular, if \(V\) is a Liouville vector field, it follows automatically that \(L_{V}\omega=\omega\), where \(L_{V}\) is the Lie derivative along \(V\). Thus \(V\) acts as a symplectic dilation on \(U\). The following important result holds:

**Theorem 4.2**.: _In the situation above, if \(V\) is transverse to \(X\), then \(i_{V}\omega_{|X}=\lambda_{|X}\) is a contact form \(\alpha\) on \(X\). Furthermore, the Reeb vector field \(R_{\alpha}\) associated to this contact form is a (positive) time reparametrisation of the Hamiltonian vector field \(Y_{H}\)._

For a proof and further details see for instance [6]. Part of the importance of this result stems from the fact that the Hamiltonian dynamics is described in a completely geometric manner. All the results about the dynamics of the Reeb vector field (like the existence of periodic orbits) can be translated into results about Hamiltonian dynamics.
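Before applying this machinery to \(\Sigma_{C,h}\), here is a toy verification of Theorem 4.2 on \((\mathbb{R}^{2},\omega=\mathrm{d}p\wedge\mathrm{d}q)\) with \(H=\frac{1}{2}(p^{2}+q^{2})\) and the radial Liouville field \(V=\frac{1}{2}(q\partial_{q}+p\partial_{p})\). This is our own example, not taken from the paper, and it uses the sign convention \(i_{Y_{H}}\omega=-\mathrm{d}H\):

```python
# Toy check of Theorem 4.2: (i) d(i_V omega) = omega, so V is Liouville;
# (ii) alpha(Y_H) = H, so on the level set H = c the Reeb field is Y_H / c.
import sympy as sp

q, p = sp.symbols("q p")

# lambda = i_V omega in components: i_V(dp ^ dq) = (p dq - q dp) / 2.
lam_q, lam_p = p / 2, -q / 2

# d(lambda) = (d lam_p/dq - d lam_q/dp) dq ^ dp; omega = dp ^ dq = -dq ^ dp.
d_lambda_qp = sp.diff(lam_p, q) - sp.diff(lam_q, p)
print(d_lambda_qp == -1)                      # True: d(lambda) = dp ^ dq

# Hamiltonian vector field with the convention i_{Y_H} omega = -dH:
H = (p**2 + q**2) / 2
Y_q, Y_p = sp.diff(H, p), -sp.diff(H, q)      # Y_H = p d/dq - q d/dp
alpha_on_Y = sp.simplify(lam_q * Y_q + lam_p * Y_p)
print(alpha_on_Y)                             # -> p**2/2 + q**2/2, i.e. H
```

Since \(\alpha(Y_{H})=H=c>0\) on a positive energy level, \(R_{\alpha}=Y_{H}/c\) there: the Reeb field is a positive rescaling of the Hamiltonian one, which is precisely the mechanism exploited below for \(\Sigma_{C,h}\).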
Let \(X\subset M\) be a smooth hypersurface and let \(\mathcal{L}_{X}\subset TX\) be the characteristic line bundle spanned by the kernel of \(\omega_{|X}\). \(X\) is called of contact type if there exists a 1-form \(\alpha\) on \(X\) such that_ 1. \(d\alpha=\omega_{|X}\) _and_ 2. \(\alpha(v)\neq 0\) _for any non-zero_ \(v\in\mathcal{L}_{X}\)_._ _Then \(\alpha\) is called a contact form on \(X\)._ In Definition 4.3, \(\alpha\) is indeed a contact form on \(X\) since \(d\alpha_{|\ker(\alpha)}\) is symplectic, so that \(\alpha\wedge d\alpha^{n-1}\) defines a nowhere vanishing volume form on \(X\). Observe that when \(X:=H^{-1}(e)\) for a smooth function \(H:M\to\mathbb{R}\) with regular value \(e\), then the associated Hamiltonian vector field \(Y_{H}\) is tangent to \(X\) and \(Y_{H}\) is a nowhere vanishing section of \(\mathcal{L}_{X}\). Thus, as in Theorem 4.2, both \(Y_{H}\) and \(R_{\alpha}\) are nowhere vanishing sections of \(\mathcal{L}_{X}\). Now we proceed to show that if \(h\) is sufficiently negative, \(\Sigma_{C,h}\) is a hypersurface of contact type inside the symplectic manifold \((0,\pi)\times\mathbb{R}\times S^{2}_{C}\) (in \((q,\ p,\ m_{i})\)-coordinates) equipped with the symplectic form \[\omega=\mathrm{d}q\wedge\mathrm{d}p+i^{*}(m_{1}\mathrm{d}m_{2}\wedge\mathrm{d} m_{3}+m_{2}\mathrm{d}m_{3}\wedge\mathrm{d}m_{1}+m_{3}\mathrm{d}m_{1}\wedge \mathrm{d}m_{2}),\] where \(i:S^{2}_{C}\hookrightarrow\mathbb{R}^{3}\). In order to account for the fact that \((m_{1},m_{2},m_{3})\in S^{2}_{C}\), we make a spherical change of coordinates, substituting \(m_{1}\to\sqrt{C}\cos(\theta)\cos(\phi)\), \(m_{2}\to\sqrt{C}\cos(\theta)\sin(\phi)\), \(m_{3}\to\sqrt{C}\sin(\theta)\), with \(\phi\in[0,2\pi)\) and \(\theta\in[-\frac{\pi}{2},\frac{\pi}{2}]\). Thus the symplectic form on \((0,\pi)\times\mathbb{R}\times S^{2}_{C}\) written in local coordinates becomes: \[\omega=\mathrm{d}p\wedge\mathrm{d}q+C^{\frac{3}{2}}\cos(\theta)\,\mathrm{d} \phi\wedge\mathrm{d}\theta. \tag{4.1}\] In Section 3 we stated the inequality condition at a point of \(S^{2}_{C}\) for its preimage in \(\Sigma_{C,h}\) to be non-empty; here, we restate it in terms of \(\theta\) and \(\phi\): \[C\cos^{2}(\theta)+\frac{\csc^{2}(\theta)}{C}-2C+2\cot(\theta)\sin(\phi)+4h\geq 0. \tag{4.2}\] The Hamiltonian will have the form \[C\sin^{2}(\theta)\cot^{2}(q)+p^{2}-\cot(q)-\cos(\theta)\left(\sqrt{C}p\cos( \phi)+C\sin(\theta)\cot(q)\sin(\phi)\right)=h-\frac{C}{2} \tag{4.3}\] The cylindrical subset of \(S^{2}_{C}\) described by (4.2), unlike the entire \(S^{2}_{C}\), admits a Liouville vector field. Since \(\omega\) has two 'independent' parts (see equation (4.1)), so will the Liouville vector field. **Remark 4.4**.: As \(h\to-\infty\), the width of the image of the projection \(\Sigma_{C,h}\to S^{2}_{C}\) tends to \(0\). Namely, \(\theta\) is contained within a very narrow neighbourhood of \(0\). We refer to these values of \(\theta\) as _permitted values_. **Theorem 4.5**.: _The manifold \(\Sigma_{C,h}\) is of contact type when \(h=h(C)\) is sufficiently negative._ Proof.: Firstly, observe that, per the bifurcation diagram in Figure 3.5, for every value of \(C\) we can find \(h(C)\) negative enough such that \(\pi(\Sigma_{C,h})\) is the complement of two open disks in \(S^{2}_{C}\). Secondly, by Remark 4.4 this complement can be made as narrow in \(\theta\) as desired by decreasing the value of \(h\). 
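The computations below rest on the coordinate expression (4.1), so it is worth a quick symbolic check before proceeding. The following sympy sketch (our own addition, not part of the proof) pulls the two-form \(m_{1}\mathrm{d}m_{2}\wedge\mathrm{d}m_{3}+m_{2}\mathrm{d}m_{3}\wedge\mathrm{d}m_{1}+m_{3}\mathrm{d}m_{1}\wedge\mathrm{d}m_{2}\) back along the spherical parametrisation and recovers the coefficient \(C^{\frac{3}{2}}\cos(\theta)\).

```
import sympy as sp

theta, phi, C = sp.symbols('theta phi C', positive=True)
r = sp.sqrt(C)
m1 = r*sp.cos(theta)*sp.cos(phi)
m2 = r*sp.cos(theta)*sp.sin(phi)
m3 = r*sp.sin(theta)

def coeff(a, b):
    # coefficient of dphi ^ dtheta in the pullback of da ^ db
    return sp.diff(a, phi)*sp.diff(b, theta) - sp.diff(a, theta)*sp.diff(b, phi)

area = m1*coeff(m2, m3) + m2*coeff(m3, m1) + m3*coeff(m1, m2)
print(sp.simplify(area))   # expected: C**(3/2)*cos(theta)
```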
The proof is based on constructing a Liouville vector field \(X\) in a tubular neighborhood of \(\Sigma_{C,h}\) inside the symplectic manifold \((0,\pi)\times\mathbb{R}\times S^{2}_{C}\). We divide it into four parts. In the first part, we construct the Liouville vector field \(X\), but do not completely determine it. In the second part, we consider points \(R\) in the interior of \(\pi(\Sigma_{C,h})\subset S^{2}_{C}\) for which \(\theta\neq 0\) (points that are not on the equator of \(S^{2}_{C}\)). Then \(\pi^{-1}(R)\) consists of a simple closed curve \(\Gamma\) in the \((q,p)\)-plane like the one in Figure 4.1. In this step we prove that such curves are always transverse in the \((q,p)\)-plane to a central vector field with centre at a certain point whose coordinates depend on \(\theta\) and \(\phi\). This in turn allows us to determine uniquely the Liouville vector field \(X\) so that \(X\) is transverse to \(\Sigma_{C,h}\) at points whose projection on \(S^{2}_{C}\) lies in the interior of \(\pi(\Sigma_{C,h})\) and does not lie on the equator of \(S^{2}_{C}\). The third step deals with showing that the Liouville vector field thus obtained is transverse to \(\Sigma_{C,h}\) also at points whose projection on \(S^{2}_{C}\) lies on the equator, i.e. for \(\theta=0\). The last step is focused on showing that the Liouville vector field is transverse to \(\Sigma_{C,h}\) at those points that project to the boundary of \(\pi(\Sigma_{C,h})\). **Step 1** For the Liouville vector field \(X\) we consider the following ansatz: \[X=\frac{1}{2}\left(\left(p-f_{1}(\phi,\theta)\right)\frac{\partial}{\partial p}+\left(q-f_{2}(\phi,\theta)\right)\frac{\partial}{\partial q}+f_{3}(p,q,\phi,\theta)\frac{\partial}{\partial\phi}+f_{4}(p,q,\phi,\theta)\frac{\partial}{\partial\theta}\right). \tag{4.4}\] Bringing together the like terms and imposing that \(X\) in (4.4) is a Liouville vector field for the symplectic form \(\omega\) in (4.1), i.e., \(L_{X}\omega=\omega\), we obtain: \[\begin{split}&\frac{1}{2}\left[\left(\frac{\partial f_{2}}{ \partial\phi}+C^{\frac{3}{2}}\cos(\theta)\frac{\partial f_{4}}{\partial p} \right)\mathrm{d}\phi\wedge\mathrm{d}p+\left(\frac{\partial f_{2}}{\partial \theta}-C^{\frac{3}{2}}\cos(\theta)\frac{\partial f_{3}}{\partial p}\right) \mathrm{d}\theta\wedge\mathrm{d}p+2\mathrm{d}p\wedge\mathrm{d}q\\ &+\left(-\frac{\partial f_{1}}{\partial\phi}+C^{\frac{3}{2}}\cos (\theta)\frac{\partial f_{4}}{\partial q}\right)\mathrm{d}\phi\wedge\mathrm{ d}q+\left(-\frac{\partial f_{1}}{\partial\theta}-C^{\frac{3}{2}}\cos(\theta)\frac{ \partial f_{3}}{\partial q}\right)\mathrm{d}\theta\wedge\mathrm{d}q\\ &+C^{\frac{3}{2}}\left(\cos(\theta)\frac{\partial f_{3}}{\partial \phi}+\cos(\theta)\frac{\partial f_{4}}{\partial\theta}-\sin(\theta)f_{4} \right)\mathrm{d}\phi\wedge\mathrm{d}\theta\right]=\omega.\end{split} \tag{4.5}\]
This entails the following system of partial differential equations: \[\begin{cases}\cos(\theta)\frac{\partial f_{3}}{\partial\phi}+\cos(\theta) \frac{\partial f_{4}}{\partial\theta}-\sin(\theta)f_{4}=2\cos(\theta),\\ \frac{\partial f_{2}}{\partial\phi}+C^{\frac{3}{2}}\cos(\theta)\frac{\partial f _{4}}{\partial p}=0,\\ \frac{\partial f_{2}}{\partial\theta}-C^{\frac{3}{2}}\cos(\theta)\frac{\partial f _{3}}{\partial p}=0,\\ \frac{\partial f_{1}}{\partial\phi}-C^{\frac{3}{2}}\cos(\theta)\frac{\partial f _{4}}{\partial q}=0,\\ \frac{\partial f_{1}}{\partial\theta}+C^{\frac{3}{2}}\cos(\theta)\frac{ \partial f_{3}}{\partial q}=0.\end{cases} \tag{4.6}\] The general solution to this system has the form \[\begin{cases}f_{3}=\frac{1}{C^{\frac{3}{2}}\cos(\theta)}\frac{\partial f_{2}} {\partial\theta}\ p-\frac{1}{C^{\frac{3}{2}}\cos(\theta)}\frac{\partial f_{1} }{\partial\theta}\ q+m(\phi,\theta),\\ f_{4}=-\frac{1}{C^{\frac{3}{2}}\cos(\theta)}\frac{\partial f_{2}}{\partial\phi} \ p+\frac{1}{C^{\frac{3}{2}}\cos(\theta)}\frac{\partial f_{1}}{\partial\phi} \ q+l(\phi,\theta),\\ \cos(\theta)\frac{\partial l(\phi,\theta)}{\partial\theta}+\cos(\theta)\frac{ \partial m(\phi,\theta)}{\partial\phi}-\sin(\theta)l(\phi,\theta)=2\cos( \theta).\end{cases} \tag{4.7}\] We will determine explicitly the coefficients of \(X\) in Step 2. Notice that it depends on two arbitrary functions \(f_{1},f_{2}\).

Figure 4.1: Intersection of \(\Sigma_{C,h}\) with the \((q,p)\)-plane for various values of \(\theta\)

**Step 2** For illustrative purposes, we will refer to Figure 4.3; the actual curve will be more stretched vertically but will in principle be of the same shape. The expression (4.3) is quadratic in \(p\); therefore, we can solve it as a regular quadratic equation, obtaining \[\begin{split} p&=\frac{1}{2}\left[\sqrt{C}\cos( \theta)\cos(\phi)-\sqrt{C\cos^{2}(\theta)\cos^{2}(\phi)+4C\sin(\theta)\cos( \theta)\cot(q)\sin(\phi)-4C\sin^{2}(\theta)\cot^{2}(q)-2C+4(h+\cot(q))}\right] \\ p&=\frac{1}{2}\left[\sqrt{C}\cos(\theta)\cos(\phi)+ \sqrt{C\cos^{2}(\theta)\cos^{2}(\phi)+4C\sin(\theta)\cos(\theta)\cot(q)\sin( \phi)-4C\sin^{2}(\theta)\cot^{2}(q)-2C+4(h+\cot(q))}\right]\end{split} \tag{4.8}\] As we remarked above, our shape is symmetric; therefore, we can consider only the expression for the upper part of it, i.e., the second line of (4.8). The expression under the square root, \[C\cos^{2}(\theta)\cos^{2}(\phi)+4C\sin(\theta)\cos(\theta)\cot(q)\sin(\phi)-4 C\sin^{2}(\theta)\cot^{2}(q)-2C+4(h+\cot(q)),\] can easily be seen to be a quadratic polynomial in \(\cot(q)\). Since its leading coefficient is strictly negative (at least when \(\theta\neq 0,\pi\), which holds under our assumptions), it has one maximum, denoted by B in Figure 4.3. We draw a vertical line \(\mathrm{l}\) through this point, dividing \(\Gamma\) into two parts. Additionally, we will refer to the horizontal line of symmetry as \(\mathrm{m}\) and to their intersection as O. The point where \(q\) achieves its minimal value is D, and the point where it achieves its maximal value is C. We want to demonstrate the following: **any central vector field centered at O is transverse to \(\Gamma\)**. As was remarked above, the expression under the square root in (4.8) has one maximum, at \[q=\arctan\left(\frac{2C}{C\cot(\theta)\sin(\phi)+\csc^{2}(\theta)}\right). \tag{4.9}\] Therefore, for every fixed \(\theta\) and \(\phi\), (4.8) is a strictly increasing function when \(q\) lies between D and O and strictly decreasing otherwise. 
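The location (4.9) of this maximum, and hence the monotonicity claim, can be spot-checked numerically. The sketch below (our own illustration; the parameter values are arbitrary samples) maximizes the radicand of (4.8) over \(q\) and compares the maximizer with (4.9).

```
import numpy as np
from scipy.optimize import minimize_scalar

C, h, theta, phi = 2.0, -2.0, 0.5, 1.0   # arbitrary sample values

def radicand(q):
    """Expression under the square root in (4.8), viewed as a function of q."""
    u = 1.0 / np.tan(q)                   # cot(q)
    return (C*np.cos(theta)**2*np.cos(phi)**2
            + 4*C*np.sin(theta)*np.cos(theta)*u*np.sin(phi)
            - 4*C*np.sin(theta)**2*u**2 - 2*C + 4*(h + u))

res = minimize_scalar(lambda q: -radicand(q), bounds=(0.01, np.pi - 0.01),
                      method='bounded')
q_star = np.arctan(2*C / (C*np.sin(phi)/np.tan(theta) + 1/np.sin(theta)**2))  # (4.9)
print(res.x, q_star)                      # the two maximizers should agree
```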
Figure 4.2: The projection of \(\Sigma_{C,h}\) to \(S^{2}\) (in purple) and the projection of \(X\), with \(p\) and \(q\) as in (4.12)

The monotonicity just established entails that the tangent vector field to \(\Gamma\) points "north-east" to the left of the line \(\mathrm{l}\) and "south-west" to the right of it. Any central vector field with centre at O, on the contrary, points "north-west" to the left of \(\mathrm{l}\) and "north-east" to its right, since \(\mathrm{O}\in\mathrm{l}\). Additionally, the tangent vector field to \(\Gamma\) is vertical at D and C. This allows us to conclude that such a vector field will be nowhere tangent to \(\Gamma\). This dictates our choice of \(f_{1}\) and \(f_{2}\): let \[f_{1} :=\frac{\sqrt{C}\cos(\theta)\cos(\phi)}{2},\] \[f_{2} :=\arctan\left(\frac{2C}{C\cot(\theta)\sin(\phi)+\csc^{2}(\theta) }\right),\] as these are the coordinates of the point O in Figure 4.3. Additionally, we observe that \[\lim_{\theta\to 0}\arctan\left(\frac{2C}{C\cot(\theta)\sin(\phi)+\csc^{2}( \theta)}\right)=0,\] so \(\lim_{\theta\to 0}f_{2}=0\). To further simplify \(X\) and uniquely fix it, we choose \(m(\phi,\theta)=0,\ l(\phi,\theta)=2\tan(\theta)\). Thus, our Liouville vector field is \[X =\frac{1}{2}\left(p-\frac{\sqrt{C}\cos(\theta)\cos(\phi)}{2} \right)\frac{\partial}{\partial p}\ +\frac{1}{2}\left(q-\arctan\left(\frac{2C}{C\cot( \theta)\sin(\phi)+\csc^{2}(\theta)}\right)\right)\frac{\partial}{\partial q} \tag{4.10}\] \[+\left(\frac{2\,p\csc^{2}(\theta)\sec(\theta)(C\sin(\phi)+2\cot( \theta))}{\sqrt{C}\left(4C^{2}+\left(C\cot(\theta)\sin(\phi)+\csc^{2}(\theta) \right)^{2}\right)}+\frac{q\tan(\theta)\cos(\phi)}{2C}\right)\frac{\partial}{ \partial\phi}\] \[+\left(\frac{2\sqrt{C}p\csc(\theta)\cos(\phi)}{4C^{2}+\left(C\cot (\theta)\sin(\phi)+\csc^{2}(\theta)\right)^{2}}+2\tan(\theta)-\frac{q\sin( \phi)}{2C}\right)\frac{\partial}{\partial\theta}\.\]

Figure 4.3: The intersection of \(\Sigma_{C,h}\) with the \((p,q)\)-plane and its division

**Step 3** Recall that the equator in \(S^{2}_{C}\) is always contained in the interior of \(\pi(\Sigma_{C,h})\) and that the curves over it in the \((p,q)\)-plane are unbounded. Now we check directly that \(X\) **is transverse to \(\Sigma_{C,h}\) at such points**. We first observe that \(\lim_{\theta\to 0}f_{3}(\theta,\phi,p,q)=0\). Now, restricted to the set \(\theta=0\), the Hamiltonian turns into \[p^{2}-\sqrt{C}p\cos(\phi)-\cot(q)=h-\frac{C}{2},\] with \(X(H)\) being equal to \[2p^{2}-2p\sqrt{C}\cos(\phi)+\frac{C}{2}\cos(\phi)^{2}+q\csc^{2}(q)=\left(\sqrt{2}p -\frac{\sqrt{C}\cos(\phi)}{\sqrt{2}}\right)^{2}+q\csc^{2}(q)>0, \tag{4.11}\] seeing as \(q\in(0,\pi)\). **Step 4** Now we show that \(X\) **is transverse to \(\Sigma_{C,h}\) at the points that project to the boundary of \(\pi(\Sigma_{C,h})\)**. We only need to check transversality of \(X\) at the boundary points of the shape in Figure 4.2; this is what we set out to do. As remarked above, for all these points on \(S_{C}^{2}\) the preimage is a single point in \(p\) and \(q\), namely, \[\left\{\begin{aligned} p&=\frac{\sqrt{C}\cos(\theta) \cos(\phi)}{2},\\ q&=\arctan\left(\frac{2C}{C\cot(\theta)\sin(\phi)+ \csc^{2}(\theta)}\right)\end{aligned}\right.
\tag{4.12}\] Setting the left-hand side of (4.2) equal to \(0\), we obtain \[\left\{\begin{aligned} \sin(\phi)&=(C-2h)\tan( \theta)-\frac{\csc(2\theta)}{C}-\frac{1}{2}C\sin(\theta)\cos(\theta),\\ \cos(\phi)&=\pm\sqrt{1-\left(\frac{1}{4}\tan(\theta )(C\cos(2\theta)-3C+8h)+\frac{\csc(2\theta)}{C}\right)^{2}},\end{aligned}\right. \tag{4.13}\] nominally giving us two cases for \(\cos(\phi)\). Transversality of \(X\) with respect to the boundary is checked by showing that the scalar product with the normal to the boundary never vanishes. The (outward pointing) normal in question is given by \[\left\{-2\cot(\theta)\cos(\phi),\frac{2\csc^{2}(\theta)\left(C\left(C\sin^{3} (\theta)\cos(\theta)+\sin(\phi)\right)+\cot(\theta)\right)}{C}\right\}. \tag{4.14}\] The next step is to take the scalar product of (4.14) and (4.10), substitute (4.12) for \(p\) and \(q\), and then substitute the two values of \(\sin(\phi)\) and \(\cos(\phi)\) from (4.13); this yields two functions of \(\theta\), which we need to show have constant signs. However, the form of the two functions is identical (this can be verified by observing that the scalar product is quadratic in \(\cos(\phi)\)).

Figure 4.4: Graph of \(F(\theta)\), with \(C=0.64,\ h=-1000\)

The resulting function has the form \[F(\theta) =\frac{1}{C}\left(C^{2}\sin(2\theta)-\cot(\theta)\left(C^{2}-2 \csc^{2}(\theta)\right)+2C(C-2h)\csc(\theta)\sec(\theta)+\csc^{3}(\theta)(-\sec( \theta))\right)\] \[\left[\frac{C\cot(\theta)\left(1-\left(\frac{1}{4}\tan(\theta)(C \cos(2\theta)-3C+8h)+\frac{\csc(2\theta)}{C}\right)^{2}\right)}{4C^{2}+\frac{1 }{4}\left(\csc^{2}(\theta)-C\left(C\cos^{2}(\theta)-2C+4h\right)\right)^{2}}+\right.\] \[\frac{\tan(\theta)\left(C^{2}\cos^{2}(\theta)-2C^{2}+4Ch+\csc^{2} (\theta)\right)\arctan\left(\frac{4C\csc(\theta)}{C^{2}\sin(\theta)+C^{2}-4Ch \csc(\theta)+\csc^{3}(\theta)}\right)}{4C^{2}}+2\tan(\theta)\left.\right]-\] \[\cot(\theta)\left(1-\left(\frac{1}{4}\tan(\theta)(C\cos(2\theta) -3C+8h)+\frac{\csc(2\theta)}{C}\right)^{2}\right)\] \[\left[\frac{\tan(\theta)\arctan\left(\frac{4C\csc(\theta)}{C^{2} \sin(\theta)+C^{2}-4Ch\csc(\theta)+\csc^{3}(\theta)}\right)}{C}+\frac{2\csc^{2} (\theta)\left(-\frac{1}{4}\tan(\theta)(C\cos(2\theta)-3C+8h)+2\cot(\theta)- \csc(2\theta)\right)}{4C^{2}+\frac{1}{4}\left(\csc^{2}(\theta)-C\left(C\cos^{ 2}(\theta)-2C+4h\right)\right)^{2}}\right] \tag{4.15}\] In order to complete our proof, we need the following **Lemma 4.6**.: \(F(\theta)>0\) _for all permitted values of \(\theta\)._ Proof.: The plot of \(F(\theta)\) is depicted in Figure 4.4 for a specific pair \((C,h)\). It is clearly positive and has two minima; however, determining the coordinates of these points analytically is in practice impossible. We circumvent this as follows: it can be shown that when \(\theta\to 0\), \[F(\theta) =\frac{2C+3}{C^{2}\theta^{2}}+\left(\frac{1}{C^{2}}+\frac{36h- \frac{4}{3}}{C}+2C-8h-17\right)\] \[+\frac{\theta^{2}\left(C\left(5C\left(24\left(4C^{2}-24Ch+C+32h^ {2}\right)-24h-127\right)-28\right)+3\right)}{15C^{2}}+O(\theta^{4}).\] This entails that \(F(\theta)\rightarrow+\infty\) when \(\theta\to 0\), and also that the rate at which it does so does not depend on \(h\). However, the width of the strip of permitted values of \(\theta\)_does_ depend on \(h\) (Remark 4.4), and therefore for every value of \(C\) we can make \(h\) sufficiently negative so that \(F(\theta)\) is positive for all permitted values of \(\theta\). 
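As an aside, the boundary relation (4.13) used in this step can be double-checked symbolically. The sketch below (our own addition) solves (4.2) with equality for \(\sin(\phi)\) and compares the result with the claimed expression.

```
import sympy as sp

theta, C = sp.symbols('theta C', positive=True)
h, s = sp.symbols('h s')

# (4.2) with equality, with s standing for sin(phi)
eq = C*sp.cos(theta)**2 + sp.csc(theta)**2/C - 2*C + 2*sp.cot(theta)*s + 4*h
sol = sp.solve(sp.Eq(eq, 0), s)[0]

# First line of (4.13)
claimed = (C - 2*h)*sp.tan(theta) - sp.csc(2*theta)/C - C*sp.sin(theta)*sp.cos(theta)/2

print(sp.simplify(sol - claimed))                               # expected: 0
print((sol - claimed).subs({theta: 0.3, C: 2, h: -1}).evalf())  # numeric spot check
```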
\(F(\theta)\) being positive yields that \(X\) is always transverse to \(\Sigma_{C,h}\) at those points whose projection lies on the boundary of the image of \(\pi\), provided the energy is sufficiently negative. Thus \(X\) is everywhere transverse to \(\Sigma_{C,h}\), and this proves that \(\Sigma_{C,h}\) is a hypersurface of contact type.
2307.01947
Causal Video Summarizer for Video Exploration
Recently, video summarization has been proposed as a method to help video exploration. However, traditional video summarization models only generate a fixed video summary which is usually independent of user-specific needs and hence limits the effectiveness of video exploration. Multi-modal video summarization is one of the approaches utilized to address this issue. Multi-modal video summarization has a video input and a text-based query input. Hence, effective modeling of the interaction between a video input and text-based query is essential to multi-modal video summarization. In this work, a new causality-based method named Causal Video Summarizer (CVS) is proposed to effectively capture the interactive information between the video and query to tackle the task of multi-modal video summarization. The proposed method consists of a probabilistic encoder and a probabilistic decoder. Based on the evaluation of the existing multi-modal video summarization dataset, experimental results show that the proposed approach is effective, with an increase of +5.4% in accuracy and +4.92% in F1-score, compared with the state-of-the-art method.
Jia-Hong Huang, Chao-Han Huck Yang, Pin-Yu Chen, Andrew Brown, Marcel Worring
2023-07-04T22:52:16Z
http://arxiv.org/abs/2307.01947v1
# Causal Video Summarizer for Video Exploration ###### Abstract Recently, video summarization has been proposed as a method to help video exploration. However, traditional video summarization models only generate a fixed video summary which is usually independent of user-specific needs and hence limits the effectiveness of video exploration. Multi-modal video summarization is one of the approaches utilized to address this issue. Multi-modal video summarization has a video input and a text-based query input. Hence, effective modeling of the interaction between a video input and text-based query is essential to multi-modal video summarization. In this work, a new causality-based method named Causal Video Summarizer (CVS) is proposed to effectively capture the interactive information between the video and query to tackle the task of multi-modal video summarization. The proposed method consists of a probabilistic encoder and a probabilistic decoder. Based on the evaluation of the existing multi-modal video summarization dataset, experimental results show that the proposed approach is effective, with an increase of \(+5.4\)% in accuracy and \(+4.92\)% in \(F1\)-score, compared with the state-of-the-art method. Jia-Hong Huang\({}^{1}\), Chao-Han Huck Yang\({}^{2}\), Pin-Yu Chen\({}^{3}\), Andrew Brown\({}^{1}\), Marcel Worring\({}^{1}\)\({}^{1}\)University of Amsterdam, Netherlands \({}^{2}\)Georgia Institute of Technology, USA \({}^{3}\)IBM Research, USA [email protected], [email protected], [email protected], [email protected], [email protected] ## 1 Introduction Video content is growing at an ever-increasing speed, beyond the capacity of an individual for full comprehension. According to [1], more than 18,000 hours of video are uploaded to YouTube per minute. Exploring this quantity of video data is a daunting task, and video summarization is gaining traction as the ideal solution [2, 3]. The main idea of video summarization is to automatically generate a short video clip that summarizes the content of an original, longer video by capturing its important parts [4, 5, 6]. However, traditional video summarization approaches, e.g., [7, 8], only create a fixed video summary for a given video. Given that viewers may have different preferences and videos have different ideal summaries, this reduces the effectiveness of video exploration. Multi-modal video summarization has been proposed as an approach to improve the effectiveness of video exploration [9, 10]. It generates video summaries for a given video based on the text-based query provided by the user, as illustrated in Figure 1. Conventional video summarization has only a video input modality, while an effective additional input choice for multi-modal video summarization is a text-based query [9, 10]. When multi-modal video summarization is used to help video exploration, effectively modeling the implicit relation/interaction between the text-based query and the video is important [9]. In [10], the proposed model exploits a joint representation of vision and language to perform multi-modal video summarization. However, the implicit interaction between the query and the video is not properly modeled, because the simple average-based method used in [10] is likely not effective enough. As stated in [11, 12, 13], causal effect modeling, visualized in Figure 2, is helpful for machine learning tasks, e.g., image/frame classification, and affects model performance in a positive way. In this work, a new causal video summarizer (CVS) is proposed that tackles the aforementioned issue to improve the performance of a multi-modal video summarization model. The proposed causality-based model consists of a multi-modal feature processing module (MFPM), a probabilistic encoding module (PEM), and a probabilistic decoding module (PDM), referring to Figure 3. Studying multi-modal video summarization from the causality perspective [11] eliminates the need for an a priori definition of objectives [9] based on high-level concepts. For the implicit interaction between the video and query, an attention mechanism is applied to better capture the interactive information. According to [14], the video summary generated by a video summarization algorithm is typically composed of a set of representative video frames or video fragments. Frame-based video summaries are not restricted by timing or synchronization issues and, therefore, they provide more flexibility in terms of data organization for video exploration purposes [14, 15].

Figure 1: Multi-modal video summarization. The input video is summarised taking into account text-based queries. “Input query-1: Sport of running” and “Input query-2: Sport of cycling” independently drive the model to generate video summaries that contain running-related content and cycling-related content, respectively.

In this work, the proposed CVS is validated on the frame-based multi-modal video summarization dataset
In this work, a new causal video summarizer (CVS) is proposed that tackles the aforementioned issue to improve the performance of a multi-modal video summarization model. The proposed causality-based model consists of a multi-modal feature processing module (MFPM), probabilistic encoding module (PEM), and a probabilistic decoding module (PDM), referring to Figure 3. Studying multi-modal video summarization from the causality perspective [11] eliminates the need for a priori definition of objectives [9] based on high-level concepts. For the implicit interaction between the video and query, an attention mechanism is applied to better capture the interactive information. According to [14], typically the generated video summary by a video summarization algorithm is composed of a set of representative video frames or video fragments. Frame-based video summaries are not restricted by timing or synchronization issues and, therefore, they provide more flexibility in terms of data organization for video exploration purpose [14, 15]. In this work, the proposed CVS is validated on the frame-based multi-modal video summarization dataset Figure 1: Multi-modal video summarization. The input video is summarised taking into account text-based queries. “Input query-1: Sport of running” and “Input query-2: Sport of cycling’ independently drive the model to generate video summaries that contain running-related content and cycling-related content, respectively. [10]. Experimental results show that the proposed method is effective and significantly increases both the accuracy and \(F1\)-score, compared with the state-of-the-art method. ## 2 Related Work ### Video Summarization with A Single Modality Recently, a number of methods with different techniques have been proposed for video summarization, such as [1, 3, 4, 16, 17]. In [4, 16], human-explainable concepts, e.g., interestingness and representativeness, are used to characterize a good video summary. Then, objectives are built based on those concepts. Visual and textual information related to a video can then be captured by the targeted objectives. However, a notable limitation is that it is hard to define/measure all possible confounders [11] a priori, such as compactness [17], uniformity, and representativeness [4]. In [1, 17], an attention mechanism, a generative adversarial network, and reinforcement learning are used to perform video summarization. Since video summarization with a single modality cannot effectively help video exploration [9, 10], in this work a new multi-modal video summarization model is proposed to address this problem. **2.2 Multi-modal Video Summarization** Instead of only considering the visual input, several works have investigated the potential of using additional modalities, e.g., viewers' comments, video captions, or any other available contextual data, to help models' performance. [9, 18, 19, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36]. In [25], a multi-modal video summarization method is introduced for key-frame extraction from first-person videos. In [18], a multi-modal deep-learning-based model is proposed to summarize videos of soccer games. In [19], a semantic-based video fragment selection and a visual-to-text mapping are applied based on the relevance between the original and the automatically-generated video descriptions, with the help of semantic attended networks. 
Existing multi-modal video summarization models do not focus on causal effect modeling [11]. Hence, the interactive information between the video and query is difficult to capture effectively. In this work, a new causality-based multi-modal video summarization method is introduced, based on a probabilistic encoder-decoder framework and an attention mechanism, to better capture the interaction between the video and query and make video exploration more effective. ### Causality and Variational Autoencoders Proxy variables, e.g., the input data \(\mathbf{X}\) in Figure 2, and the challenges of how to use them correctly have been considered in the causal inference literature [37]. In many observational studies [4, 16, 38], understanding the best way to derive and measure possible proxy variables is crucial. Building on previous work [39, 40], the authors of [11] and recent works [38, 41] for vision applications have tried to exploit proxy variables to study conditions for causal identifiability. In many cases, the general idea is that one should first attempt to infer the joint distribution \(p(\mathbf{X},\mathbf{Z})\) between the hidden confounders and the proxy, and then use that knowledge to adjust for the hidden confounders [42, 43, 44]. Take the causal graphical model in Figure 2 as an example. The authors of [43] have shown that if \(\mathbf{X}\) and \(\mathbf{Z}\) are categorical, with \(\mathbf{X}\) having at least as many categories as \(\mathbf{Z}\), and with the matrix \(p(\mathbf{X},\mathbf{Z})\) being full-rank, one could use a matrix inversion formula, an approach called "effect restoration" [44], to identify the causal effect of \(\mathbf{t}\) on \(\mathbf{y}\); see the _Methodology_ section for details. Recently, the authors of [42] have given the conditions under which one can identify more complicated and general proxy models. The proposed causality-based method for multi-modal video summarization is mainly inspired by [11, 43, 45]. ## 3 Methodology ### Causal Video Summarizer (CVS) In this section, details of the proposed CVS are described. Note that causal effect modeling for real-world multi-modal video summarization is very complicated [11]. Hence, in this work, the assumptions mentioned in [11] are followed to model the problem of multi-modal video summarization based on the concept of causal effect inference, i.e., a causal graphical model. The proposed CVS is mainly composed of a multi-modal feature processing module (MFPM), a probabilistic encoding module (PEM), and a probabilistic decoding module (PDM), referring to Figure 3. The generation of a good video summary is affected by latent factors. In this work, the latent factors are considered as the causal effect. Specifically, the concept of causal effect inference [11], i.e., a causal graphical model, is used to model the multi-modal video summarization problem, referring to Figure 2. Figure 2 contains the four key components of the causal graphical model: "input data (\(\mathbf{X}\))", "latent confounder (\(\mathbf{Z}\))", "treatment (\(\mathbf{t}\))", and "output result (\(\mathbf{y}\))". From the modeling perspective of multi-modal video summarization, \(\mathbf{X}\) is "an input video with a text query". \(\mathbf{t}\) is "a visual or textual treatment". Note that a treatment in causal effect modeling is a way to make an input characteristic more salient and help a model learn better [11]. 
\(\mathbf{y}\) is "a relevance score between the input text-based query and video frame or an importance score of a video frame". \(\mathbf{Z}\) is "a variational latent representation" which the proposed model aims to learn from \(\mathbf{X}\) for reconstruction [11, 45], referring to Figure 3 for the practical implementation. The proposed causal model simultaneously generates outcome score labels \(\mathbf{y}\) when \(\mathbf{X}\) is reconstructed, as illustrated in Figure 3. Thereafter, video summaries can be created based on the generated outcome score labels.

Figure 2: Explanation of the causal effect modeling concept, i.e., causal graphical model [11], in multi-modal video summarization. \(\mathbf{t}\) is a treatment, e.g., visual or textual perturbation [12]. \(\mathbf{y}\) is an outcome, e.g., an importance score of a video frame or a relevance score between the input text-based query and video. \(\mathbf{Z}\) is an unobserved confounder, e.g., representativeness or interestingness [4, 9, 16]. \(\mathbf{X}\) is a noisy view [11] on the hidden confounder \(\mathbf{Z}\), say the input text query and video.

**Objective function for training.** According to [11, 45, 46], a variational autoencoder (VAE) aims to learn a variational latent representation \(\mathbf{Z}\) from data \(\mathbf{X}\) for reconstruction; it is capable of learning the latent variables when used in a causal graphical model. Hence, the proposed CVS is built on top of a VAE. Now, it is time to form a single objective for the encoder, in Figure 3-(a), and the decoder, in Figure 3-(b), to learn meaningful causal representations in the latent space and generate video summaries. Based on Figure 2, we know that the true posterior over \(\mathbf{Z}\) depends not just on \(\mathbf{X}\), but also on \(\mathbf{t}\) and \(\mathbf{y}\). We have to know the treatment assignment \(\mathbf{t}\) along with its outcome \(\mathbf{y}\) before inferring the distribution over \(\mathbf{Z}\). Hence, the following two auxiliary distributions are introduced for the treatment assignment \(\mathbf{t}\) and the outcome \(\mathbf{y}\); see Equations (1) and (2): \[q(t_{i}|\mathbf{x}_{i})=Bern(\sigma(g_{\phi_{5}}(\mathbf{x}_{i}))) \tag{1}\] \[q(y_{i}|\mathbf{x}_{i},t_{i})=\sigma(t_{i}g_{\phi_{6}}(\mathbf{x}_{i})+(1-t_{ i})g_{\phi_{7}}(\mathbf{x}_{i})), \tag{2}\] where \(g_{\phi_{k}}(\cdot)\) is a neural network with variational parameters \(\phi_{k}\) for \(k=5,6,7\). Note that, unlike in traditional VAEs, which simply pass the feature map directly to the latent space (i.e., the upper path of PEM in Figure 3-(a)), the feature map is also sent to the other two paths, i.e., the middle and the lower paths of PEM in Figure 3-(a), for the posterior estimations of the treatment \(t_{i}\) and the outcome \(y_{i}\). The introduced auxiliary distributions help us predict \(t_{i}\) and \(y_{i}\) for new samples. To estimate the parameters of these two distributions, \(q(t_{i}|\mathbf{x}_{i})\) and \(q(y_{i}|\mathbf{x}_{i},t_{i})\), we add an auxiliary objective, given in Equation (3), to the model training objective over \(N\) data samples: \[\mathcal{L}_{auxiliary}=\] \[\sum_{i=1}^{N}(\log q(t_{i}=t_{i}^{*}|\mathbf{x}_{i}^{*})+\log q(y_{i}=y_{i}^ {*}|\mathbf{x}_{i}^{*},t_{i}^{*})), \tag{3}\] where \(\mathbf{x}_{i}^{*}\), \(t_{i}^{*}\) and \(y_{i}^{*}\) are observed values in the training set. Finally, we have the following overall training objective for the encoder and decoder networks. See Equation (4). 
\[\mathcal{L}_{causal}=\mathcal{L}_{auxiliary}+\] \[\sum_{i=1}^{N}\mathbb{E}_{q(\mathbf{z}_{i}|\mathbf{x}_{i},t_{i},y_{i})}[\log p (\mathbf{x}_{i},t_{i}|\mathbf{z}_{i})+\log p(y_{i}|t_{i},\mathbf{z}_{i})+\] \[\log p(\mathbf{z}_{i})-\log q(\mathbf{z}_{i}|\mathbf{x}_{i},t_{i},y_{i})]. \tag{4}\] See [11] for the detailed derivation and explanation of Equations (1) to (4), \(\log p(\mathbf{x}_{i},t_{i}|\mathbf{z}_{i})\), \(\log p(y_{i}|t_{i},\mathbf{z}_{i})\), and \(\log p(\mathbf{z}_{i})\). ### Spatial and Channel-wise Attentions A dual attention network, based on self-attention, has been proposed to adaptively integrate local features with their global dependencies [47, 48]. The dual attention network consists of spatial and channel-wise attention modules. These two modules are capable of modeling the interdependencies in the spatial and channel dimensions. To better capture the implicit interactive information, a causal attention mechanism, based on the dual attention network, is introduced to reinforce the probabilistic encoder in Figure 3-(a). In the PEM, the channel-wise attention selectively emphasizes interdependent channel maps by integrating associated features among all channel maps. The spatial attention selectively aggregates the feature at each position by a weighted sum of the features at all positions. ### Video Summary Generation In Figure 3-(a), during the inference phase, the input video and input text-based query go through the MFPM for generating feature maps \(\mathbf{x}\), and the PEM for generating the probabilistic encoding of the feature maps \(q(\mathbf{z}|\mathbf{x},y,t)\). Then, the generated probabilistic encoding of the feature maps is sent to the probabilistic decoder, referring to Figure 3-(b), to reconstruct \(\mathbf{x}\), based on \(p(\mathbf{x}|\mathbf{z})\), and generate prediction score labels \(y\), based on \(p(y|\mathbf{z},t)\), for the input video and query simultaneously. Finally, based on these generated score labels, a set of video frames is selected from the original input video to create the final video summary. Note that the video summary budget is considered a user-defined hyperparameter in video exploration [10].

Figure 3: This figure shows the proposed end-to-end causal attention model for multi-modal video summarization, the practical implementation of the causal graphical model in Figure 2. In (a), the input video and input text query are processed by the MFPM to generate the feature map \(\mathbf{x}\), which is the input of the PEM. Then, the PEM generates the corresponding probabilistic encoding \(q(\mathbf{z}|\mathbf{x},y,t)\) of \(\mathbf{x}\) with the operations of the introduced causal attention. In (b), the PDM takes \(p(\mathbf{z})\), estimated by \(q(\mathbf{z}|\mathbf{x},y,t)\), to approximately reconstruct \(\mathbf{x}\) based on \(p(\mathbf{x}|\mathbf{z})\) and generate the prediction score labels \(y\) based on \(p(y|\mathbf{z},t)\) at the same time. Please refer to the _Methodology_ section for details.

## 4 Experiments ### Dataset and Evaluation Metrics In the experiments, [10]'s multi-modal video summarization dataset and the introduced causal video summarization dataset (CVSD) with treatment labels are used to verify the proposed method. The introduced CVSD is constructed based on [10]'s dataset. [10]'s dataset is composed of \(190\) videos with a duration of two to three minutes each. Each video in their dataset is retrieved based on a given text-based query. 
The entire dataset is divided into splits of 60%/20%/20% for training/validation/testing, respectively. Annotations from human experts are necessary for the automatic evaluation of multi-modal video summarization. Hence, the authors of [10] sample all of the \(190\) videos at one frame per second (fps), and Amazon Mechanical Turk (AMT) is used to annotate every frame with its relevance score with respect to the given text-based query. A single ground-truth relevance score is created for each query-video pair by merging the corresponding relevance annotations from AMT workers. In [10]'s dataset, the maximum number of words in a text query is \(8\) and the maximum number of frames in a video is \(199\). For evaluation, the authors of [10] propose to use accuracy as the metric. That is, a predicted relevance score is considered correct when it is the same as the score provided by the majority of human experts. In addition, motivated by [3, 49], in this work the \(F_{1}\)-score is also used to quantify the performance of the proposed model by measuring the agreement between the predicted scores and the gold standard provided by the human experts. ### Experimental Setup In this work, [10]'s dataset and the introduced CVSD are preprocessed in the same way, as follows. Since the video lengths in the CVSD and [10]'s dataset vary, the number of frames per video differs at fps \(=1\). The maximum number of frames of a video is \(199\) in the used dataset. Hence, the frame-repeating video preprocessing technique [10] is adopted to make all the videos have the same length of \(199\). A ResNet [50] pre-trained on ImageNet [51] is used to extract \(199\) frame-based features for each video; the feature used is located in the visual layer one layer below the classification layer. The input frame size of the ResNet is \(224\) by \(224\) with red, green, and blue channels. Each image channel is normalized with \(\text{mean}=(0.4280,0.4106,0.3589)\) and standard deviation \(=(0.2737,0.2631,0.2601)\). PyTorch is used for the implementation, and models are trained for \(50\) epochs with a \(0.01\) learning rate and the Adam optimizer [52]. For the hyperparameters of Adam, \(\beta_{1}=0.9\) and \(\beta_{2}=0.999\) are the coefficients used for computing moving averages of the gradient and its square. The term added to the denominator to improve numerical stability is \(\epsilon=10^{-8}\). ### Effectiveness Analysis **Effectiveness of Causal Attention Mechanism.** In the experiments, the CVSD is used to verify the effectiveness of the introduced attention mechanism under the following six different input situations: "visual input with salt-and-pepper treatment (\(V_{s\&p}\))", "visual input with blurring treatment (\(V_{blur}\))", "clean text-based query input and visual input with salt-and-pepper treatment (\(T+V_{s\&p}\))", "clean text-based query input and visual input with blurring treatment (\(T+V_{blur}\))", "disturbed text-based query input and visual input with salt-and-pepper treatment (\(T_{k}+V_{s\&p}\))", and "disturbed text-based query input and visual input with blurring treatment (\(T_{k}+V_{blur}\))". Note that \(T_{k}\) denotes randomly removing \(k\) words, e.g., \(k=2\), from a text-based query input. According to Table 1, the results show that the introduced attention mechanism results in a performance improvement in the causal model. The main reason is that the introduced attention mechanism helps to better capture the implicit interactive information between the visual and textual inputs. 
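To make the treatment conditions concrete, the following sketch (our own illustration; the perturbation strengths and function names are assumptions, not the paper's implementation) shows one way the visual and textual treatments can be realized in PyTorch.

```
import torch

def salt_and_pepper(frames: torch.Tensor, amount: float = 0.05) -> torch.Tensor:
    """V_{s&p}: set a random fraction of pixel values to 0 or 1.
    `frames` is a float tensor in [0, 1] of shape (T, C, H, W)."""
    noisy = frames.clone()
    mask = torch.rand_like(frames)
    noisy[mask < amount / 2] = 0.0
    noisy[mask > 1 - amount / 2] = 1.0
    return noisy

def blur(frames: torch.Tensor, kernel_size: int = 5) -> torch.Tensor:
    """V_{blur}: Gaussian blur applied to every frame via torchvision."""
    from torchvision.transforms import GaussianBlur
    return GaussianBlur(kernel_size)(frames)

def drop_words(query: str, k: int = 2) -> str:
    """T_k: randomly remove k words from the text-based query."""
    words = query.split()
    if len(words) <= k:
        return query
    keep = sorted(torch.randperm(len(words))[: len(words) - k].tolist())
    return " ".join(words[i] for i in keep)

print(drop_words("sport of running on a track", k=2))
```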
**Effectiveness of Causal Effect Modeling.** The main difference between the proposed causal model and the existing state-of-the-art multi-modal video summarization method, e.g., [10], is the causal effect modeling. According to [11], proper causal effect modeling improves the performance of a machine learning model. In this work, we claim that if the causal effect modeling is effective, the proposed model's performance can be improved. Based on Table 1 and Table 2, the results show that the proposed causal model for multi-modal video summarization beats the state-of-the-art [10], with an increase of \(+5.4\)% in accuracy and \(+4.92\)% in \(F1\)-score. The main reason is that the proposed model is a causality-based model which properly considers causal effect modeling. Hence, the claim is well supported by the experimental results. Qualitative results are presented in Figure 4.

Figure 4: This figure shows randomly selected qualitative results of the proposed causal model and the baseline model. According to this figure, we see that the frame pattern predicted by the proposed model (in green) is more similar to the ground-truth pattern (in red) than that of the baseline (in blue). Note that in each frame pattern, orange denotes "not selected frames" and gray denotes "selected frames". Each video here has 199 frames, and we also show the corresponding indices of the selected frames in this figure.

## 5 Conclusion In this work, a new perspective is introduced to build an end-to-end deep causal model for multi-modal video summarization. The proposed Causal Video Summarizer is based on a probabilistic encoder-decoder architecture. Experimental results show that the proposed causal model is effective and achieves state-of-the-art performance, with a \(+5.4\)% increase in accuracy and a \(+4.92\)% increase in \(F1\)-score. ## 6 Acknowledgments This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 765140.
2310.15325
LXMERT Model Compression for Visual Question Answering
Large-scale pretrained models such as LXMERT are becoming popular for learning cross-modal representations on text-image pairs for vision-language tasks. According to the lottery ticket hypothesis, NLP and computer vision models contain smaller subnetworks capable of being trained in isolation to full performance. In this paper, we combine these observations to evaluate whether such trainable subnetworks exist in LXMERT when fine-tuned on the VQA task. In addition, we perform a model size cost-benefit analysis by investigating how much pruning can be done without significant loss in accuracy. Our experiment results demonstrate that LXMERT can be effectively pruned by 40%-60% in size with 3% loss in accuracy.
Maryam Hashemi, Ghazaleh Mahmoudi, Sara Kodeiri, Hadi Sheikhi, Sauleh Eetemadi
2023-10-23T19:46:41Z
http://arxiv.org/abs/2310.15325v1
# LXMERT Model Compression for Visual Question Answering ###### Abstract Large-scale pretrained models such as LXMERT are becoming popular for learning cross-modal representations on text-image pairs for vision-language tasks. According to the lottery ticket hypothesis, NLP and computer vision models contain smaller subnetworks capable of being trained in isolation to full performance. In this paper, we combine these observations to evaluate whether such trainable subnetworks exist in LXMERT when fine-tuned on the VQA task. In addition, we perform a model size cost-benefit analysis by investigating how much pruning can be done without significant loss in accuracy. Our experimental results demonstrate that LXMERT can be effectively pruned by 40%-60% in size with 3% loss in accuracy. ## 1 Introduction and Related Work Over the past few years, many single-modal pretrained models have been proposed. Inspired by this, vision-and-language pretraining seeks to learn joint representations using visual and textual content to improve the efficiency of vision-language tasks. Both single-modality and cross-modality pretrained models often have hundreds of millions of parameters. Unfortunately, training these over-parametrized models can be prohibitively time-consuming and costly, making them impractical for resource-limited devices. However, cross-modality pretrained models suffer more from the increased model size due to the higher dimension of the input space. With the task of Visual Question Answering (VQA) (Antol et al., 2015) in mind, and its ultimate goal of being helpful to the visually impaired, decreasing V+L model sizes makes it feasible to use these models on limited-resource devices. To address this problem, model compression techniques such as pruning have been developed. Deep learning has recently gained a powerful new pruning method: the Lottery Ticket Hypothesis (LTH) (Frankle and Carbin, 2019). LTH has shown great success in various fields and could be a powerful tool to understand the parameter redundancy in current pretrained V+L models. Thus, we aim to apply LTH to LXMERT (Tan and Bansal, 2020), one of the best-performing two-stream V+L models, to fill this gap. We evaluate our work on VQA (Antol et al., 2015) and compare it with DistillVLM (Fang et al., 2021), which leverages the knowledge distillation technique to compress large visual-linguistic models. Similar to this work, Gan et al. (2021) study LTH for UNITER (Chen et al., 2020). However, UNITER is a single-stream V+L model, and LXMERT is a two-stream model; our results are consistent with theirs. ## 2 Methodology In this section, we briefly explain the LXMERT architecture and LTH. Then, we describe how we use LTH to compress the pretrained LXMERT model. LXMERT is a Transformer-based model which takes two inputs: image and text. Internally, LXMERT consists of two types of encoders: single-modality encoders for each modality and a cross-modality encoder using bidirectional cross attention to exchange information and align entities across the modalities. The Lottery Ticket Hypothesis (Frankle and Carbin, 2019) shows that by preserving the original weight initializations from the unpruned network, one can train a network with the topology of the pruned network and achieve the same or better test accuracy within the same number of training iterations. In order to apply LTH to the LXMERT model, we use iterative magnitude pruning. 
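A minimal sketch of one such pruning round is shown below (our own illustration using `torch.nn.utils.prune`; the module-name filter standing in for LXMERT's embedding and output layers is an assumption, not the paper's code). The iterative procedure described next repeats this step until the target sparsity is reached.

```
import torch
import torch.nn.utils.prune as prune

def imp_round(model: torch.nn.Module, rate: float = 0.10,
              excluded: tuple = ("embed", "classifier")) -> None:
    """One round of iterative magnitude pruning: globally remove `rate` of the
    remaining lowest-magnitude weights across all linear layers whose names do
    not match the excluded patterns (patterns are illustrative assumptions)."""
    to_prune = []
    for name, module in model.named_modules():
        if isinstance(module, torch.nn.Linear) and not any(e in name for e in excluded):
            to_prune.append((module, "weight"))
    prune.global_unstructured(to_prune,
                              pruning_method=prune.L1Unstructured,
                              amount=rate)
    # After this call each pruned module carries weight_orig and weight_mask;
    # the LTH experiment then rewinds weight_orig to the saved initialization
    # and fine-tunes with the mask held fixed.
```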
Therefore, we fine-tune LXMERT on the VQA task and iteratively prune 10% of the lowest-magnitude weights across the entire model, excluding the embedding and output layers. We keep pruning until the model has lost roughly half of its weights. We use the default settings and hyperparameters of LXMERT (Tan and Bansal, 2020) to fine-tune on the VQA v2.0 dataset. ## 3 Experimental Setups and Results The experiments are designed to investigate the effectiveness and stability of LTH on LXMERT, in addition to a cost-benefit analysis of the number of parameters in the model. We conduct experiments on the widely-used VQA v2.0 dataset (Goyal et al., 2017), built on the MS-COCO (Lin et al., 2014) image corpus. ### Effectiveness and Stability The following steps are performed to compress the LXMERT model. 1. The pretrained LXMERT model plus the VQA classifier's randomly initialized weights are saved. 2. The model is fine-tuned on the 3,129 most frequent answers in the VQA v2.0 dataset. 3. Iterative magnitude pruning is applied to find the low-magnitude subnetwork (pruning 50% of the low-magnitude weights). The high-magnitude subnetwork is computed as the complement of the low-magnitude subnetwork, with equal size. A random subnetwork of equal size is generated for comparison. 4. The saved weights are restored for all three subnetworks. 5. The high-magnitude, low-magnitude, and random subnetworks are fine-tuned and evaluated on the VQA task using three different seeds for initializing the VQA model to ensure the stability of the results. Results of the subnetworks at 50% weight pruning on VQA v2.0 are summarized in Table 1, where DistillVLM (Fang et al., 2021) is also listed for comparison. Rows 2 to 5 report, respectively, the fully fine-tuned LXMERT and the low-magnitude, high-magnitude, and random subnetworks. Low-magnitude pruning achieves 97% of the fully fine-tuned LXMERT's overall accuracy on both test-dev and test-std and shows a marginal improvement over the DistillVLM baseline. Comparing performance across the subnetworks, the random and high-magnitude subnetworks perform far worse than the low-magnitude subnetwork. Surprisingly, the results show the high-magnitude subnetwork performing better than the random subnetwork. This could be an LXMERT-specific phenomenon and requires further investigation. ### Cost-Benefit Analysis We experiment with the low-magnitude subnetwork by pruning from 10% up to 90% of the weights in 10% increments. The accuracy of these pruned models on VQA v2.0 is reported in Figure 1. Our results indicate a significant loss of accuracy after 50% to 60% pruning. ## 4 Conclusion We confirm that LTH pruning is an effective method for pruning V+L pretrained models. We mainly focused on LXMERT, a two-stream V+L pretrained model, but our findings are consistent with the results of Gan et al. (2021), who used UNITER, a single-stream V+L pretrained model. 
\begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline & \multicolumn{4}{c}{test-dev} & \multicolumn{4}{c}{test-std} \\ \cline{2-9} & Yes/No & Number & other & Overall & Yes/No & Number & other & Overall \\ \hline DistillVLM & - & - & - & 69.6 & - & - & - & 69.8 \\ \hline LXMERT & 88.24 & 54.45 & 63.05 & 72.45 & 88.29 & 54.37 & 63.18 & 72.63 \\ LXMERT (low-magnitude) & 86.95 \(\pm\) 0.95 & 52.60 \(\pm\) 1.87 & 60.96 \(\pm\) 1.76 & 70.72 \(\pm\) 1.44 & 87.07 \(\pm\) 1.12 & 52.28 \(\pm\) 1.66 & 61.02 \(\pm\) 1.83 & 70.87 \(\pm\) 1.51 \\ LXMERT (high-magnitude) & 74.11 \(\pm\) 0.91 & 42.81 \(\pm\) 1.36 & 50.5 \(\pm\) 0.19 & 59.35 \(\pm\) 0.61 & 74.23 \(\pm\) 0.81 & 42.99 \(\pm\) 0.87 & 50.71 \(\pm\) 0.26 & 59.62 \(\pm\) 0.55 \\ LXMERT (random) & 69.26 \(\pm\) 0.29 & 39.84 \(\pm\) 0.93 & 45.96 \(\pm\) 0.83 & 54.86 \(\pm\) 0.52 & 69.27 \(\pm\) 0.18 & 40.34 \(\pm\) 0.66 & 46.33 \(\pm\) 0.79 & 55.19 \(\pm\) 0.45 \\ \hline \hline \end{tabular} \end{table} Table 1: Performance of subnetworks at 50% weight pruning on VQA v2.0, reported for both test-dev and test-std. Test-dev is used for debugging and validation experiments. Test-standard is the default test data for the VQA competition. We run each experiment with three different seeds and report the mean and standard deviation of VQA accuracy across the three seeds.

Figure 1: Model size cost-benefit analysis.
2301.03206
Introducing Model Inversion Attacks on Automatic Speaker Recognition
Model inversion (MI) attacks allow to reconstruct average per-class representations of a machine learning (ML) model's training data. It has been shown that in scenarios where each class corresponds to a different individual, such as face classifiers, this represents a severe privacy risk. In this work, we explore a new application for MI: the extraction of speakers' voices from a speaker recognition system. We present an approach to (1) reconstruct audio samples from a trained ML model and (2) extract intermediate voice feature representations which provide valuable insights into the speakers' biometrics. Therefore, we propose an extension of MI attacks which we call sliding model inversion. Our sliding MI extends standard MI by iteratively inverting overlapping chunks of the audio samples and thereby leveraging the sequential properties of audio data for enhanced inversion performance. We show that one can use the inverted audio data to generate spoofed audio samples to impersonate a speaker, and execute voice-protected commands for highly secured systems on their behalf. To the best of our knowledge, our work is the first one extending MI attacks to audio data, and our results highlight the security risks resulting from the extraction of the biometric data in that setup.
Karla Pizzi, Franziska Boenisch, Ugur Sahin, Konstantin Böttinger
2023-01-09T08:51:15Z
http://arxiv.org/abs/2301.03206v1
# Introducing Model Inversion Attacks on Automatic Speaker Recognition ###### Abstract Model inversion (MI) attacks allow to reconstruct average per-class representations of a machine learning (ML) model's training data. It has been shown that in scenarios where each class corresponds to a different individual, such as face classifiers, this represents a severe privacy risk. In this work, we explore a new application for MI: the extraction of speakers' voices from a speaker recognition system. We present an approach to (1) reconstruct audio samples from a trained ML model and (2) extract intermediate voice feature representations which provide valuable insights into the speakers' biometrics. Therefore, we propose an extension of MI attacks which we call _sliding model inversion_. Our sliding MI extends standard MI by iteratively inverting overlapping chunks of the audio samples, thereby leveraging the sequential properties of audio data for enhanced inversion performance. We show that one can use the inverted audio data to generate spoofed audio samples to impersonate a speaker, and execute voice-protected commands for highly secured systems on their behalf. To the best of our knowledge, our work is the first one extending MI attacks to audio data, and our results highlight the security risks resulting from the extraction of biometric data in that setup. Karla Pizzi\({}^{*,1,2}\), Franziska Boenisch\({}^{*,1}\), Ugur Sahin\({}^{*,1,2}\), Konstantin Bottinger\({}^{1}\)\({}^{1}\)Fraunhofer AISEC, Germany; \({}^{2}\)Technical University Munich, Germany {firstname.lastname}@aisec.fraunhofer.de **Index Terms**: speaker recognition, model inversion, privacy ## 1 Introduction Privacy analysis of audio data has shown that speech parameters, such as accent, rhythm, or acoustic properties of speech, inherently carry biometric information about the speakers, such as their age, gender, physical health, and geographical origin [1]. Therefore, it is important for machine learning (ML) models in speaker recognition not to leak information about their training data. However, recent research [2, 3, 4] suggests that ML models are, in general, vulnerable to privacy attacks. One particular attack is _model inversion_ (MI) [2], which allows an attacker to retrieve abstract representations for individual classes of the target model's training data. Since speaker recognition systems treat each individual as their own class, MI attacks have the potential to cause severe privacy breaches [2]. So far, the feasibility of MI attacks on speaker recognition systems and audio data had never been tested; thus, the question of whether information on the speakers can be maliciously retrieved remained open. We are the first to show how to adapt and apply MI attacks to audio data. We do so by targeting SincNet, a state-of-the-art neural network (NN)-based speaker recognition model [5, 6]. We show that MI attacks are able to infer both entire audio samples and d-vectors, i.e., intermediate representations of the speakers' voice characteristics, from the trained target model. Further, we propose the _sliding model inversion_, a novel form of the standard MI attack that leverages the sequential processing properties of audio data to improve inversion success. While with standard MI, the target model successfully identifies up to \(54\)% of the inverted audio samples as their correct speaker class, with our novel attack, we achieve up to \(90\%\) accuracy. Also, our sliding MI manages to decrease the distance between the original and inverted samples in the d-vector representation, hence yielding higher-fidelity inversions. For directly inverting d-vectors, our experiments show that even standard MI achieves \(100\%\) identification success. These results highlight the vulnerability of speaker recognition models to privacy attacks. As a proof of concept showing that our MI can be exploited as a departure point for further attacks against speaker recognition, we furthermore explore using inverted audio samples as inputs for deepfake generation. Such deepfakes could be used to fool voice identification with arbitrary speech samples or to execute any speech command on behalf of the speakers under attack. While our generated deepfakes do not perfectly fool a human listener, as an informal evaluation conducted by the authors shows, they illustrate that privacy attacks can not only be used to disclose sensitive information about the individuals the model was trained on; additionally, they can severely threaten the security of systems relying on voice biometrics. Our contributions can be summarized as follows: * We successfully apply MI attacks on speaker recognition models to invert entire audio samples and d-vectors and experimentally evaluate what kind of random initialization works best as an input for MI attacks on audio data. * We introduce a novel _sliding MI_ which exploits properties of sequential and chunk-wise audio processing. * We show the feasibility of generating deepfakes based on the inferred audio samples. ## 2 Background and Related Work The following section provides background information on speaker recognition systems and attacks against their privacy. **Speaker recognition.** In this paper, we use a SincNet-based [5] text-independent speaker recognition system. This system uses NNs to extract voice features into so-called d-vectors and adds a classification layer on top of these. The input to the system consists of raw audio waves, and the output is a per-class probability score over all possible classes (i.e., speakers). Overall, the system is composed of three submodels (see Figure 1): 1) SincNet, a convolutional NN that resembles a band-pass filter; 2) a multi-layer perceptron (MLP) calculating the d-vectors [7]; and 3) a fully connected layer to calculate the probabilities per speaker. It achieves a reported classification error\({}^{1}\) of \(5.772\cdot 10^{-3}\) on the TIMIT [8] test data set (measured at sentence level). In its current version, SincNet does not provide any dedicated privacy-preserving mechanisms. Footnote 1: See https://pythonlang.dev/repo/mravanelli-sincnet/. **Privacy in speaker recognition.** The ISO/IEC norm 24745:2011 proposes three general requirements to ensure individual privacy: irreversibility, renewability, and unlinkability of the protected data [9]. To reach this goal in speaker recognition systems, several solutions have been proposed and discussed [9]. However, aiming for any _anonymization_ to protect the privacy of speakers in a speaker recognition model goes directly against the purpose of the system, which is to identify speakers based on their individual characteristics. **MI attacks.** In MI attacks [2], the attacker exploits an ML model's prediction confidence for inverting individual training classes (see Algorithm 1). More formally speaking, an MI attack can be expressed as follows: let \(f\) be the target model under attack. 
Also, our sliding MI manages to decrease the distance between original and inverted sample in the d-vector representation, hence, yielding higher fidelity inversions. For directly inverting d-vectors, our experiments show that even standard MI achieves \(100\%\) identification success. These results highlight the vulnerability of speaker recognition models to privacy attacks. As a proof-of-concept to showcase that our MI can be exploited as a departure point for further attacks against speaker recognition, we, furthermore, explore using inverted audio samples as inputs for deepfake generation. Such deepfakes could be used to fool voice identification with arbitrary speech samples or to execute any speech command on behalf of the speakers under attack. While our generated deepfakes do not perfectly fool a human listener, as an informal evaluation conducted by the authors shows, they illustrate that privacy attacks can not only be used to disclose sensitive information about the individuals the model was trained on; additionally, they can severely threaten the security of systems relying on voice biometrics. Our contributions can be summarized as follows: * We successfully apply MI attacks on speaker recognition models to invert entire audio samples and d-vectors and experimentally evaluate what kind of random initialization works best as an input for MI attacks on audio data. * We introduce a novel _sliding MI_ which exploits properties of sequential and chunk-wise audio processing. * We show the feasibility of generating deepfakes based on the inferred audio samples. ## 2 Background and Related Work The following section provides background information on speaker recognition systems and attacks against their privacy. Speaker recognition.In this paper, we use a SincNet-based [5] text-independent speaker recognition system. This system uses NNs to extract voice features into so-called d-vectors and adds a classification layer on top of these. The input to the system consists of raw audio waves and the outputs is a per-class probability score over all possible classes (i.e., speakers). Overall, the system is composed of three submodels (see Figure 1) 1) SincNet, a convolutional NN that resembles a band-pass filter; 2) a multi-layer perceptron (MLP) calculating the d-vectors [7]; and 3) a fully connected layer to calculate the probabilities per speaker. It achieves a reported classification error1 of \(5.772\cdot 10^{-3}\) on the TIMIT [8] test data set (measured at sentence level). In its current version, SincNet does not provide any dedicated privacy-preserving mechanisms. Footnote 1: See [https://pythonlang.dev/repo/mravanelli-sincnet/](https://pythonlang.dev/repo/mravanelli-sincnet/). **Privacy in speaker recognition.** The ISO/IEC norm 24745:2011 proposes three general requirements to ensure individual privacy: irreversibility, renewability, and unlinkability of the protected data [9]. To reach this goal in speaker recognition systems, several solutions have been proposed and discussed [9]. However, aiming for any _anonymization_ to protect the privacy of speakers in a speaker recognition model goes directly against the purpose of the system, which is to identify speakers based on their individual characteristics. **MI attacks.** In MI attacks [2], the attacker exploits an ML model's prediction confidence for inverting individual training classes (see Algorithm 1). More formally speaking, a MI attack can be expressed as follows: let \(f\) be the target model under attack. 
It is trained to map an \(n\)-dimensional input data point \(x\) to an \(m\)-dimensional vector \(p\) indicating the probability per class, such that \(f:x\mapsto p\), with \(f:\mathbb{R}^{n}\rightarrow[0,1]^{m}\). To invert the model, we define an objective function in order to use gradient descent. This function, called the cost function \(c(x)\), quantifies how close we are to the information we would like to reconstruct. We set \(c(x)=1-p_{t}\), where \(t\) denotes the target class we would like to gain information about. Starting from a randomly initialized input sample \(x_{0}\), we calculate its cost \(c(x_{0})\). With this at hand, we apply the gradient descent algorithm for \(\alpha\) iterations with a learning rate \(\lambda\) to alter the original input. The aim is to minimize the cost for a specific class [2], such that the resulting data sample is a representation of that class. In speaker recognition, every speaker denotes their own class. Hence, MI can reveal representations of the data from every single individual in the training data set. Particularly, this data can encode biometric as well as paralinguistic features. ``` functionMI(input vector \(x_{0}\), target class \(t\), iterations \(\alpha\), patience \(\beta\), minimum cost threshold \(\gamma\), learning rate for gradient descent \(\lambda\)) for\(i\gets 1,\ \dots,\ \alpha\)do \(x_{i}\gets x_{i-1}-\lambda\cdot\nabla c(x_{i-1})\) if\(c(x_{i})\geq\max(c(x_{i-1}),\ \dots,\ c(x_{i-\beta}))\)then break if\(c(x_{i})\leq\gamma\)then break return\(\operatorname{argmin}_{x_{i}}c(x_{i})\) ``` **Algorithm 1**_Standard_ Model Inversion Attack [2] ## 3 Sliding MI Attack In this section, we present our novel sliding MI attack. It extends standard MI (see Algorithm 1) to sequentially and chunk-wise processed data, e.g., audio data. Instead of using MI to invert every chunk of data separately, our sliding MI iteratively inverts overlapping chunks. This way, some of the input to the MI is already inverted and hence, in waveform, similar to actual speech data. Thereby, MI can more successfully invert it into representatives of the original speech data. Our sliding MI consists of the following steps: (1) We invert the first window of a randomly initialized input vector. Note that different random initializations yield inversions of different quality. We experiment with several different types of initialization settings as our first main experiment, described in more detail in Section 4.1. (2) Then, we replace the first window of the input vector by the resulting inverted data. (3) Next, we iteratively calculate the inverted data for the subsequent input window. This input's first part consists of the previously inverted data; its second part stems from the random input vector. Note that the amount of overlap for the sliding window determines the proportion of the previously inverted data and the randomly initialized input vector that are used for calculating the inversion during the MI attack. Its value depends on the _stride_ of our inversion. For our experiments, we use a stride of 500 samples (roughly 30ms). Since in this new method updates rely on the output of the previous inversion, we cannot use parallelization as a speed-up. Instead, the stride determines the computational overhead. By increasing the stride value, computational time can be decreased. For a visualisation of our novel approach, see Figure 2. 
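A minimal sketch of both procedures may help to make the chunk-wise optimization concrete; the model interface (a callable mapping a waveform chunk to per-class probabilities), the helper names, and all hyperparameter defaults below are illustrative assumptions, not the authors' released implementation.

```python
# Sketch of Algorithms 1 and 2, assuming `model` maps a (1, window)-shaped
# waveform tensor to per-class probabilities; names and defaults are ours.
import torch

def invert_chunk(model, x0, target, iters=1000, patience=5, min_cost=1e-3, lr=1e-2):
    """Standard MI (Algorithm 1) on one chunk: descend on c(x) = 1 - p_target."""
    x = x0.clone().detach().requires_grad_(True)
    best_x, costs = x0.clone().detach(), []
    for _ in range(iters):
        cost = 1.0 - model(x.unsqueeze(0))[0, target]
        costs.append(float(cost))
        if costs[-1] <= min(costs):                  # keep argmin_x c(x)
            best_x = x.detach().clone()
        if len(costs) > patience and costs[-1] >= max(costs[-patience - 1:-1]):
            break                                    # patience criterion met
        if costs[-1] <= min_cost:
            break                                    # cost threshold reached
        (grad,) = torch.autograd.grad(cost, x)
        x = (x - lr * grad).detach().requires_grad_(True)
    return best_x

def sliding_mi(model, target, length=32000, window=3200, stride=500):
    """Sliding MI (Algorithm 2): invert overlapping windows of one waveform."""
    inverted = torch.randn(length)                   # random initialization
    for k in range(0, length - window + 1, stride):
        inverted[k:k + window] = invert_chunk(model, inverted[k:k + window], target)
    return inverted[window // 2:length - window // 2]  # trim less-iterated edges
```

Because each window starts from the partially inverted signal, the optimization in later windows begins from speech-like data, which is the mechanism behind the accuracy gains reported in Section 4.1.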
Note that in addition to the hyperparameters of the standard MI, we need to specify the length of our input data \(l\), the stride \(s\), and the window size \(w\). While \(s\) specifies the overlap between subsequent inversions, \(w\) determines the lengths of the data chunks that are inverted. For each chunk, inversion is performed as an iterative process as in the standard MI. Since the beginning and the end of the inverted vector are iterated less, we cut the returned vector to half the window size. See Algorithm 2 for a formal introduction of our novel sliding MI. ``` functionSlidingMI(target class \(t\), length \(l\), stride \(s\), window size \(w\), \(\alpha,\beta,\gamma,\lambda\) as in Algorithm 1) inverted[0,...,\(l\)] \(\leftarrow\)\(\mathcal{N}(\mu,\ \sigma^{2})\) for\(k\gets 0,\ \dots,\ (l-s)\) according to stride \(s\)do \(x\leftarrow\operatorname{inverted}[k:(k+w)]\) for\(i\gets 1,\ \dots,\ \alpha\)do \(x_{i}\gets x_{i-1}-\lambda\cdot\nabla c(x_{i-1})\) if\(c(x_{i})\geq\max(c(x_{i-1}),\ \dots,\ c(x_{i-\beta}))\)then break if\(c(x_{i})\leq\gamma\)then break \(\operatorname{inverted}[k:(k+w)]\leftarrow\operatorname{argmin}_{x}c(x)\) return\(\operatorname{inverted}[\frac{w}{2}:(l-\frac{w}{2})]\) ``` **Algorithm 2**_Sliding_ Model Inversion Attack ## 4 Experiments We conduct three experiments: The _first experiment_ is similar to MI in other domains, i.e., it inverts random input vectors back to the original input data domain. In the _second experiment_, we do not invert the whole NN, but only the layers up to the d-vectors, which provide unique voice features of an individual (see Figure 1). Inverting these vectors instead of full audio samples reduces the computational costs of the attack. Our _third experiment_ shows that in speaker recognition, our MI attack enables us to impersonate individual speakers and to synthesize speech samples for them. The spoofing is performed based on the inverted audio samples from experiment one. Figure 1: _Speaker Recognition Model. The speaker recognition model and its three submodels: SincNet obtains the raw audio input and generates features. These features are input to an MLP which generates the d-vectors. A single layer performs classification on them. In experiment 1, we invert full audio samples, while in experiment 2, we invert the d-vectors._ We attack the NN-based speaker recognition system using SincNet [5], trained on the TIMIT dataset [8], and we use the pretrained model provided by Ravanelli et al. [5]. To perform our MI attack, we assume the attacker to have white-box access to the target model. This is the case, for example, when the speaker recognition model is deployed to a user device, e.g., for biometric identification. Further, the attacker needs a unique identifier of their target individual under attack to know which class of the training data to invert. This can be, for example, the name or some pseudonymized combination of characters, in the case of the TIMIT dataset, e.g., "FGMB0". We quantify the success of our experiments as follows: * _Percentage of correctly classified inverted samples_. We quantify the classification accuracy of the original target speaker model on both the inverted audio data and the inverted d-vectors. An inverted sample is "correctly classified" if it is classified as the correct original speaker. * _Euclidean distance between original and reconstructed d-vector_. 
We measure the Euclidean distance between both d-vectors to specify the similarity between the respective samples. The first metric allows us to analyse whether the MI may be considered successful with respect to the target model, i.e., it answers the question of how successfully this model can be fooled. The second metric, in contrast, focuses on the inverted samples' similarity to the original samples. Hence, it quantifies the similarity from the perspective of a human listener. Since MI generates average representations of training classes, we use the target model's classification accuracy on averaged per-speaker samples as a baseline (97.84% and 75.97% correct classification on training and test data, respectively). In the following, we present our overall experimental setup and then describe every experiment in more detail. ### Experiment 1: Invert Audio Samples In the first experiment, we use MI to calculate full inverted audio samples which could be used to trick the speaker recognition model under attack without human listeners present. The experiment is designed to answer the following three questions: (1) Is it possible to successfully generate inverted audio samples for speaker recognition? (2) Which kind of randomly initialized input vector to the MI attack produces the most successful inverted audio samples (with respect to the classification as the original speaker)? (3) How does our new sliding MI approach improve the results in comparison to standard MI? **Experiment.** Over all experiments, the audio data chunks are 3200 samples (or 200ms) long, and our sliding MI uses a stride of 500 samples (roughly 30ms). We evaluate the following (random) initializations (see the code sketch at the end of this subsection): * _Plain inputs_: all zeros, all ones, and all minus ones; * _Noises_: white, pink, brown, violet, and blue noise. We generated them with methods pre-implemented in the python-acoustics library and applied the tanh-function to transform them to the interval \([-1,1]\) with zero mean in a non-linear manner; * _Samplings from distributions_, such as uniform (ranges \([0,1]\) or \([-1,1]\)), Gaussian (with \(\mu=0\) and \(\sigma=0.2\)), Laplace (with \(\mu=0\) and \(b=0.07\)), Gumbel (with \(\mu=0\) and \(\beta=0.1\)), and von Mises (with \(\mu=0\) and \(\kappa=0.1\)); * _Samples from another dataset_: Librispeech [10], used as a plain input, averaged over 50, 100, or 150 input vectors, or mixed with white noise (\(0.85\cdot\text{input vector}+0.15\cdot\text{noise}\)). For the optimization process within the MI algorithm (see Section 2), we optimize two parameters, namely the maximum number of iterations \(\alpha\) and the learning rate \(\lambda\). For all experiments, setting \(\alpha=1000\) showed to be sufficient. The optimal \(\lambda\) depends on the experiment and is reported in the results. In our evaluation, for every input with its settings and optimal \(\lambda\), we report the following metrics: * _MI Accuracy_: the percentage of correctly classified inverted samples; * _# Correct Speakers_: the number of correctly classified speakers; * _Avg. Eucl. Distance with Std. Deviation_: the average Euclidean distance of the inverted sample to original samples of the same speaker in the d-vector space, calculated on the successfully inverted audio samples. For the Euclidean distances within the d-vector space, the average within-speaker distance in the original training samples can serve as a baseline (\(1.923\cdot 10^{-1}\)). 
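As a concrete illustration of these initializations, the following sketch builds the candidate input vectors with NumPy; the `noise` generator is assumed to be the one from the python-acoustics library mentioned above, and the exact parameters of the original experiments may differ.

```python
# Candidate MI input vectors for audio, per the list above (our sketch).
import numpy as np
from acoustics.generator import noise  # colors: white, pink, blue, brown, violet

N = 3200                                 # chunk length: 200 ms at 16 kHz
rng = np.random.default_rng()

inits = {
    "zeros":      np.zeros(N),
    "ones":       np.ones(N),
    "minus_ones": -np.ones(N),
    "white":      np.tanh(noise(N, color="white")),  # non-linear squash to [-1, 1]
    "pink":       np.tanh(noise(N, color="pink")),
    "uniform":    rng.uniform(-1.0, 1.0, N),
    "gaussian":   rng.normal(0.0, 0.2, N),
    "laplace":    rng.laplace(0.0, 0.07, N),
    "gumbel":     rng.gumbel(0.0, 0.1, N),
    "von_mises":  rng.vonmises(0.0, 0.1, N),
}
# Librispeech-based inputs (plain, averaged over 50/100/150 samples, or
# 0.85 * sample + 0.15 * white noise) would be constructed analogously.
```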
**Results.** _(1) Is it possible to successfully generate inverted audio samples for speaker recognition?_ By looking at the distribution of the inverted samples, we observe that it does not fully match the distribution of the original data. Since the inverted samples also sound different from the original ones (they do not necessarily sound like speech), they do not allow fooling a _human_ listener. However, the results suggest that standard MI is good enough to fool the classification of the _automatic_ speaker recognition system with an accuracy of up to \(54.76\%\). The classification accuracy can be significantly improved to \(90.48\%\) through our new sliding MI. For speaker recognition models in charge of identity control for a highly secured system, this accuracy on inverted data would be far beyond acceptable. Our novel sliding MI also reduces the Euclidean distance between inverted and original samples for some input vectors, in comparison to standard MI. However, despite this decrease in distance, the reconstructed speech samples still do not sound very close to the original speaker. Figure 2: **Sliding MI.** (1) We initialize a random input vector. (2) Starting from the beginning, we invert the first window based on this vector and replace the vector's first part by our inverted data. (3) For all subsequent windows, we use parts of the previously inverted vector and fill the remainder with the input vector to apply MI. (4) We then iteratively replace the input vector with our inverted data. _(2) Which kind of randomly initialized input vector to the MI attack produces the most successful inverted audio samples (with respect to the classification as the original speaker)?_ We can also conclude that not all input vectors to the MI are equally suited to create inverted audio samples which successfully fool the speaker model: plain input vectors achieve the lowest quality in inversion with respect to classification accuracy. We assume that this is due to the difficulty of transforming constant vectors into speech-like waveforms through optimization. It seems that random initialization or data that is already in waveform are more suitable inputs to MI for audio data: the best classification accuracy can be achieved with tanh-transformed white noise. Brown noise exhibits the poorest performance, yet a relatively small mean Euclidean distance. The Laplace distribution achieves the overall best results. See Table 1 for an overview of results. _(3) How does our new sliding MI approach improve the results in comparison to standard MI?_ We observe from the results in Table 1 that sliding MI exhibits a higher performance than standard MI. While with standard MI the accuracy of the target model on the inverted data is at most \(54\%\), depending on the random initialization, our sliding MI yields above \(90\%\) accuracy. ### Experiment 2: Partial MI to Invert d-Vectors While in previous applications on other data types only a complete MI back to the original input domain is valuable, this is different for speaker recognition: the d-vectors, which are feature representations of the voice samples, already carry important paralinguistic information that can, for example, be used to generate spoofed audio samples [11]. Inverting d-vectors instead of full audio samples reduces the computational costs of the attack (since it does not need to be performed sequentially), and can be performed with standard MI attacks. 
Therefore, in our second experiment, we set out to invert the model at the intermediate layers with the aim to answer the following two questions: (1) Is it possible to successfully invert d-vectors? (2) Which input vector produces the most successful inverted d-vectors (with respect to the classification as the original speaker)? **Experiment.** We apply partial inversion by removing the SincNet and MLP parts of the network and focusing only on submodel 3 for the d-vector inversion (see Figure 1). **Results.** _(1) Is it possible to successfully invert d-vectors?_ Our findings show that we can reconstruct d-vectors that are successfully classified as the original speaker (the classification accuracy of the target model on them reaches \(100\%\)). This outperforms the baseline where we measure the accuracy of the target model on averaged per-speaker samples (\(97.84\%\)). However, even for the best-performing input (zeros), the Euclidean distance of \(0.84\pm 0.028\) is clearly above the baseline average within-speaker distance (\(1.923\cdot 10^{-1}\)). To evaluate whether the inverted d-vectors still leak the individual speakers' privacy, we perform a principal component analysis (PCA) and train a binary classifier to predict the individuals' gender. We fit the PCA to the TIMIT test dataset and transform the inverted d-vectors. Our results are visualized in Figure 3. They suggest that the inverted d-vectors leak gender privacy. _(2) Which input vector produces the most successful inverted d-vectors (with respect to the classification as the original speaker)?_ Since d-vectors and audio data have different properties, they require different input vectors for successful inversion. Sound noises are good inputs for audio samples. However, our observations suggest that initializing the input vector with sound noises does not yield high-quality inverted d-vector representations. Instead, we found that plain zeros perform best when inverting d-vectors. ### Experiment 3: Create Deepfakes Even though the inverted samples are classified correctly by the target model, they do not necessarily carry useful information for human listeners. With the following experiments, we focus on the question: (1) Based on the inverted audio samples, is it possible to generate audio data that resembles the original speaker for a human listener? The experiment can be considered a proof of concept to demonstrate further security risks in speaker recognition systems made possible by our attack. **Experiment.** To generate the deepfakes, we use the work from [11]. Their architecture consists of three parts: (1) a speaker encoder, (2) a speech synthesizer, and (3) a vocoder. The speaker encoder is used to create a d-vector out of an audio file, which characterizes the audio sample in vector space. With this information, the speech synthesizer creates the mel-spectrogram by using the d-vectors. Finally, the vocoder takes the mel-spectrogram and performs the frequency-to-time-domain conversion. The underlying speech synthesizer is Tacotron 2 [12], and the vocoder is WaveNet [13]. We use the inverted audio samples from our sliding MI and the inverted d-vectors as input for the method. In principle, inverted audio samples can be fed directly into the speaker encoder for the deepfake generation. However, to use our inverted d-vectors (2048 dimensions), we have to transform them to match the deepfake model's speaker encoding, as it expects d-vectors with 256 dimensions. To do so, we train an MLP to map one vector space to another. The MLP has two hidden layers with 1024 and 512 neurons and an output layer with 256 neurons; we use tanh activations in the first two layers.
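A minimal PyTorch sketch of this mapping network, with the layer sizes and activations just described, could look as follows; the training objective and optimizer are our assumptions, as the paper does not specify them.

```python
# Maps our 2048-dim d-vectors to the deepfake encoder's 256-dim space.
import torch
import torch.nn as nn

mapper = nn.Sequential(
    nn.Linear(2048, 1024), nn.Tanh(),   # first hidden layer, tanh activation
    nn.Linear(1024, 512), nn.Tanh(),    # second hidden layer, tanh activation
    nn.Linear(512, 256),                # output: target speaker-encoding size
)

def train_mapper(pairs, epochs=50, lr=1e-3):
    """pairs: iterable of (our_dvector, deepfake_encoding) tensor tuples."""
    opt = torch.optim.Adam(mapper.parameters(), lr=lr)
    loss_fn = nn.MSELoss()              # assumed regression objective
    for _ in range(epochs):
        for x, y in pairs:
            opt.zero_grad()
            loss_fn(mapper(x), y).backward()
            opt.step()
    return mapper
```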
We create the training set for this transformation by feeding sound files to our and the deepfake's speaker encoder. Our encodings are used as inputs while their encodings are treated as the outputs to be learned. Figure 3: _PCA on d-Vectors. PCA fitted on the TIMIT test dataset (blue: female; green: male) and used to transform the inverted d-vectors (purple: female; red: male). Results indicate that inverted d-vectors reveal the individuals' gender._ **Results.** _(1) Based on the inverted audio samples, is it possible to generate audio data that resembles the original speaker for a human listener?_ As reported in experiment 2, our d-vectors, though correctly classified as the original speakers, were far away from the speakers' original d-vectors in the Euclidean space. The question of whether a generated sample sounds similar to an original one is a semantic question and depends on the sensitivity of the context. From the authors' perspective, on case-by-case inspection, the d-vector-based deepfakes did not allow individual speakers' characteristics to be recognized. However, based on the inverted audio samples from our novel sliding MI, we were able to generate a few good-quality spoofed audio samples that resembled the original speaker.2 With such samples at hand, an attacker could hence spoof someone's identity solely based on the inverted data from the pre-trained NN. We expect this to become even more prevalent with more sophisticated deepfake generation systems in the future. Footnote 2: Examples for the original audio data, averaged and inverted samples, and the spoofed audio data generated based on the inverted audio samples are available at [https://www.dropbox.com/sh/ge6xx90laqmru9b/AABUs3p4EwauOn79q4Rqg4Rwa](https://www.dropbox.com/sh/ge6xx90laqmru9b/AABUs3p4EwauOn79q4Rqg4Rwa). ## 5 Countermeasures and Discussion Speaker recognition systems heavily rely on learning individual per-speaker characteristics in order to fulfill the task they are designed for. Therefore, these systems always and necessarily contain information about the speaker data that they were trained on. Noising out individual speaker characteristics will result in drastically decreased performance of the systems. In particular, pseudonymization [14] and privacy methods that are used in general speech systems (e.g., [15, 16, 17, 18, 19, 20]) render speaker recognition unusable for its original purpose. **Protecting against MI attacks.** As an alternative, one can consider privacy protection methods that aim at impeding MI attacks. Most existing defenses from other domains focus on suppressing the model confidence scores or reducing their utility. This can be done by injecting uniform noise into them [21], reducing their precision [2], or reducing their dispersion [22]. The latter leads to a decrease in the correlation between the input data and the scores, which renders MI attacks more inaccurate. With a similar aim, the use of regularization in the training loss function has been reported as a defense [23]. Additionally, hardware-oriented solutions that prevent an attacker from accessing the model parameters, and thereby decrease MI success, or at least prevent the extraction of intermediate features (see our experiment 2), can be applied [24]. 
**Differential privacy.** Initial work empirically showed that Differential Privacy (DP) [25] can reduce the success of MI attacks [2] when using a very large amount of noise, which, in return, drastically degrades the model's performance. Later work suggests that DP training for ML models cannot at all prevent MI attacks [26] because its aim is to dissimulate the presence of a data point in a specific data set and not to protect privacy over classes of data. **Limitations.** So far, MI attacks require the availability of an NN's confidence scores, and the attack's success depends largely on the random initializations. In particular, many speaker recognition tools depend on the cosine similarity [27, 28], for which the algorithm would need to be updated. Also, the quality of the spoofed audio samples is limited by the deepfake creation methods. As a consequence, the practical impact of our attack might currently still be limited. However, with new and ever more powerful privacy attacks and deepfake methods being proposed, the threat space of exploiting privacy attacks to violate the security of speaker recognition systems will gain importance. It is, hence, important to create awareness and to consider and protect privacy and security jointly, rather than separately. Table 1: MI accuracy, number of correctly classified speakers, and average Euclidean distance \(\pm\) standard deviation per sample type and learning rate, for standard and sliding MI. (Table body not recoverable from the source.)
## 6 Conclusion In this work, for the first time, we successfully perform MI attacks on audio data. Therefore, we introduce a novel sliding MI method which leverages the sequential properties of the audio data for improved inversion. We experimentally evaluate the attack's success on a state-of-the-art speaker recognition system. Our results indicate that our inverted audio samples can be used as a departure point for further attacks against the security of the target system. Thereby, we highlight the importance of implementing adequate privacy protection in such systems. ## 7 Acknowledgements This research was supported by the Bavarian Ministry of Economic Affairs, Regional Development and Energy. The authors acknowledge J. 
Williams for comments on the manuscript.
2302.02448
On semiconductor--metal transition in FeSi induced by ultrahigh magnetic field
At low temperatures, iron monosilicide is a strongly correlated narrow-gap semiconductor. A first-order transition to a metal state induced by magnetic field was observed for the first time at 355 T in Ref. [Yu. B. Kudasov et al., JETP Lett. 68 (1998) 350]. However, recently a smooth transition from 230 T to 270 T was found under similar conditions in Ref. [D. Nakamura et al., Phys. Rev. Lett. 127 (2021) 156601]. This discrepancy goes far beyond experimental errors and deserves a careful study. A methodological analysis of the inductive and RF techniques of conductivity measurement shows that the difference between these critical magnetic field estimates stems from the different dynamic ranges of the techniques. In fact, the above-mentioned methods supplement each other. The semiconductor-metal transition under magnetic field in FeSi is a complex phenomenon which occurs over a wide range of magnetic fields.
Yuri Kudasov, Dmitrij Maslov
2023-02-05T18:17:32Z
http://arxiv.org/abs/2302.02448v1
# On semiconductor-metal transition in FeSi induced by ultrahigh magnetic field ###### Abstract At low temperatures, iron monosilicide is a strongly correlated narrow-gap semiconductor. A first-order transition to a metal state induced by magnetic field was observed for the first time at 355 T in Ref. [Yu. B. Kudasov _et al._, JETP Lett. **68** (1998) 350]. However, recently a smooth transition from 230 T to 270 T was found under similar conditions in Ref. [D. Nakamura _et al._, Phys. Rev. Lett. **127** (2021) 156601]. This discrepancy goes far beyond experimental errors and deserves a careful study. A methodological analysis of the inductive and RF techniques of conductivity measurement shows that the difference between these critical magnetic field estimates stems from the different dynamic ranges of the techniques. In fact, the above-mentioned methods supplement each other. The semiconductor-metal transition under magnetic field in FeSi is a complex phenomenon which occurs over a wide range of magnetic fields. keywords: high magnetic field, phase transition, FeSi, AC conductivity, magnetization, magnetic flux compression, Kondo insulator + Footnote †: journal: Journal of Magnetism and Magnetic Materials ## 1 Introduction Iron monosilicide (FeSi) has been attracting the attention of theorists and experimentalists for several decades [1; 2; 3; 4; 5]. It is a non-magnetic metal above room temperature [1; 5]. However, FeSi becomes a narrow-gap semiconductor below 100 K, with a gap width of about 60 meV [2; 5]. The effective masses of the mobile charge carriers at the top of the valence band and the bottom of the conduction band grow with decreasing temperature and reach extremely large values (\(m\sim 100m_{0}\)), which makes it possible to attribute FeSi to a rather exotic group of heavy-fermion compounds without f-elements [2; 4]. Comparison of experimental data with calculations of the electronic structure demonstrated that the heavy carriers in FeSi appear due to strong electronic correlations in bands formed mainly by iron d-orbitals [6; 7]. This problem was investigated in the framework of the Hubbard model [8]. The hypothesis of FeSi as a Kondo insulator was widely discussed [9; 10]. However, a realistic model should include strong multiorbital correlations [7]. Below 70 K, other transport anomalies are observed in FeSi. They are ascribed to quasiparticles of spin-polaron type, which form a very narrow band, about 6 meV wide, inside the gap [11]. During the last decade, a few Kondo insulators were proven to be topological insulators, including SmB\({}_{6}\), whose low-temperature thermodynamic and transport properties are similar to those of FeSi [12; 13; 14]. The existence of a thin layer with high conductivity on the surface of iron monosilicide was recently demonstrated experimentally on ultra-thin single-crystal samples [4; 15]. Moreover, the observed hysteresis of the Hall effect demonstrated that the 2D conductive surface layer in FeSi is ferromagnetic, with a non-magnetic bulk state [15]. At low temperatures and moderate magnetic fields, anisotropic magnetoresistance was revealed in FeSi [4]. The anisotropy is a consequence of an interplay of the surface and bulk mobile charge carriers [12], and the transport properties of FeSi were described by means of a two-band model [3]. Under ultrahigh magnetic field (above 100 T), the gap in the electron spectrum of FeSi should be suppressed by the Zeeman splitting of the conduction- and valence-band edges. 
The LDA+U [16] and LSDA [17; 18] calculations gave an estimate of the critical magnetic field of the transition to the metallic state of about 170 T at \(T=0\) K. In fact, the mechanism of metallization involves a renormalization of the quasiparticle bands by spin fluctuations, which could greatly shift this value [19]. For the first time, the semiconductor-metal transition induced by ultrahigh magnetic field in FeSi was experimentally observed and studied in Refs. [20; 21] using induction magnetization measurements and radio-frequency (RF) contactless conductivity measurements. At \(T=5\) K it was observed at 355 T and was accompanied by a step in the magnetic moment. At \(T=78\) K there was a smooth increase in conductivity, without a step in the magnetic moment. Qualitatively, this was in good agreement with the theoretical magnetic phase diagram [16]: a first-order phase transition from a singlet semiconductor to a ferromagnetic metal exists at low temperatures, and a smooth transition occurs above a certain critical temperature. Recently, FeSi was studied under ultrahigh magnetic field at various temperatures using a similar RF measuring technique [22], and at \(T=6\) K an extended transition was observed, which ended at about 275 T. The discrepancy between the results of Refs. [20; 21; 22] goes far beyond experimental errors and deserves a careful study. A magnetic flux compression technique is the only possibility to investigate transport and magnetic properties of substances at 300 T and above [23]. The fast flux compression is realized by means of high explosives (HEC) [24] or ponderomotive electromagnetic forces (EMC) [25]. Both approaches were used in the investigations of FeSi: the first one was implemented in the MC-1 generator, which was used in Refs. [20; 21]; the EMC technique was applied in Ref. [22]. The time dependence of the magnetic field in these devices has a complex shape: a slow initial part up to approximately 16 T and a fast growth of the magnetic field under the flux compression during about 15 \(\mu\)s. The final part of the magnetic field pulse of the MC-1 generator is shown in Fig. 1. It is very similar to that of the EMC facility. It should be mentioned that the ultrahigh magnetic field generation is accompanied by intense electromagnetic noise, which is why the choice of measuring methods is greatly limited [24]. In the present article, we consider methodological aspects of the inductive and RF conductivity measuring techniques and discuss the discrepancy of the results on the semiconductor-metal transition under ultrahigh magnetic field in FeSi. ## 2 Inductive conductivity measurement The compensation inductive technique is widely used for magnetization and conductivity measurements in pulsed magnetic fields and, in particular, in ultrahigh magnetic fields [20; 26]. The compensation sensor is a pair of small identical coils with counter-winding, whose axes are oriented along the external magnetic field (see Fig. 2). A measured sample is installed in one of the coils. In the case of a long sample, the electromotive force generated by the sensor consists of two components [27]: \[\mathcal{E}=\alpha\mu_{0}\dot{H}+\mu_{0}NS\dot{M}, \tag{1}\] where \(H\) and \(M\) are the magnetic intensity and the magnetization (or the specific magnetic moment due to eddy currents), \(\mu_{0}\) is the permeability of free space, \(N\) is the number of turns, \(S\) is the sample cross-sectional area, and \(\alpha\) is a coefficient proportional to the compensation error. From Eq. 
(1) one can see that there is an additional background signal which is proportional to the time derivative of the magnetic flux density. It can be eliminated by comparison with the background signal obtained from the pick-up coil which is used for the magnetic field measurement. The procedure is illustrated in Fig. 3, where signals from Ref. [20] at \(T=77\) K are shown. In a conductive sample, a pulsed magnetic field generates eddy currents, which induce the specific magnetic moment \(M\). To separate the contribution of the sample conductivity to the signal from the intrinsic magnetization, an additional measurement has to be performed with a powder sample in a dielectric matrix [21]. It gives the magnetization response only. Under ultrahigh magnetic fields, a number of specific requirements on the sensor have to be taken into account. For example, due to the high rates of magnetic field growth, the voltage reaches huge values even in miniature sensors and can cause an electrical breakdown. To eliminate this problem, a special type of winding can be used [27]. Figure 1: The final part of the magnetic field pulse of the MC-1 generator. When the conductivity is small, that is, the magnetic field generated by the eddy currents in the sample is much smaller than the external pulsed magnetic field, the conductivity can be determined from the measured electromotive force by means of the following expression [21] \[\sigma(t)=\frac{8}{\pi R^{4}\mu_{0}\dot{B}N}\int_{0}^{t}\mathcal{E}(\tau)\mathrm{d}\tau \tag{2}\] where \(R\) is the sample radius. Here the sample is assumed to be a long cylinder. The integral form of this equation partially suppresses the effect of noise. Figure 3: Normalized background signal from the pick-up coil (blue line) and compensation sensor signal (red line) [20]. Figure 2: Schematic view of the compensated inductive sensor. The arrow denotes the external magnetic field direction. Eq. (2) allows estimating the threshold of sensitivity. To do this, the integral on the right side of the equation should be replaced by its minimal detectable value. Then the threshold of sensitivity is inversely proportional to the time derivative of the magnetic flux density, which varies drastically during the pulse of the magnetic field, as one can see in Fig. 1. The upper bound of the dynamical range is defined by the non-linear regime of magnetic field diffusion into the sample when the conductivity becomes sufficiently high. In practice, however, it was limited by the maximum value of the recorded signal [21]. It should be mentioned that the effect of the finite length of the sample on the measurement results can be taken into account by means of a magnetostatic reciprocity theorem [26]. ## 3 RF conductivity measurement The RF method of conductivity measurement has been successfully applied in ultrahigh magnetic fields for a long time [28; 20; 21; 29; 30; 22; 31]. The absence of contacts with the sample and the use of a narrow frequency band make it possible to suppress the electromagnetic interference induced by the ultrahigh magnetic field. Initially, two small flat coils with a typical diameter of 3 to 5 mm were used, each of which contained several turns. The coils were positioned coaxially with each other. A test sample in the form of a plate was inserted into a small gap between the coils [28]. The axis of the coils was oriented perpendicular to the external magnetic field to reduce the induced voltage. An RF signal from the generator was fed to one of the coils, and the signal from the other coil was sent to the oscilloscope. 
The amplitude of the measured signal depends on the conductivity of the sample. An analysis of the electrodynamics of the measuring unit showed that there is a wide region of linear conductivity response. Currently, a single-coil measuring unit is used [20; 29; 22; 31] (see Fig. 4). It combines the functions of the transmitting and receiving coils. A probing RF signal is applied to the coil through a bandpass filter and a cable line. A reflected signal is generated by the eddy currents in the sample. It propagates through the same cable and bandpass filter, and is then separated from the probe signal. The electrodynamics of RF measurements in the two-coil system was thoroughly investigated [28]. Below we consider the single-coil system using the same approach. The RF measuring unit has an axial symmetry (Fig. 4). The vector potential in cylindrical coordinates (\(r\), \(z\), \(\varphi\)) is determined by the following expression [28] \[\frac{\partial^{2}A}{\partial r^{2}}+\frac{1}{r}\frac{\partial A}{\partial r}-\frac{A}{r^{2}}+\frac{\partial^{2}A}{\partial z^{2}}=i\omega\mu_{0}\sigma A, \tag{3}\] where \(A\equiv A_{\varphi}\) is the \(\varphi\)-component of the vector potential and \(\omega\) is the circular frequency of the probe signal. By separating the variables \(r\) and \(z\), Eq. (3) is reduced to a Bessel equation of the first kind in \(r\) and the following equation in \(z\): \[\frac{\partial^{2}A}{\partial z^{2}}={k^{\prime}}^{2}A, \tag{4}\] where \({k^{\prime}}^{2}=k^{2}+i\omega\mu_{0}\sigma\), and \(k\) is the separation parameter. Then a solution of Eq. (3) under the physical boundary condition on the \(z\)-axis can be written as \[A(r,z)=\int_{0}^{\infty}\big{[}a(k)e^{k^{\prime}z}+b(k)e^{-k^{\prime}z}\big{]}J_{1}(kr)\mathrm{d}k, \tag{5}\] where \(a(k)\) and \(b(k)\) are coefficients and \(J_{1}(kr)\) is the Bessel function of the first kind. The whole space is separated into four regions, as shown in the bottom panel of Fig. 4. The coefficients \(a(k)\) and \(b(k)\) in each of them are determined by the boundary conditions [28]. There is a significant difference with respect to the two-coil system. Namely, the total voltage on the coil is divergent, as it is on the transmitting coil in the two-coil system. This is an effect of the approximation of an infinitely thin coil. That is why, instead of the amplitude of the total signal, we obtain a normalized amplitude of the reflected signal induced by the eddy currents. Figure 4: Schematic view of the RF single-coil measuring unit (the coil and sample) and its model representation (bottom panel). The arrow denotes the external magnetic field direction. It has the following complex form \[R=\frac{U}{U_{\infty}}=2\int\limits_{0}^{\infty}\frac{(k^{\prime 2}-k^{2})\exp(-2ku)\sinh(k^{\prime}t)J_{1}^{2}(ka)}{(k+k^{\prime})^{2}\exp(k^{\prime}t)-(k^{\prime}-k)^{2}\exp(-k^{\prime}t)}\mathrm{d}k\times\Bigg{[}\int\limits_{0}^{\infty}\exp(-2ku)J_{1}^{2}(ka)\mathrm{d}k\Bigg{]}^{-1}, \tag{6}\] where \(a\), \(t\), and \(u\) are the coil radius, the sample thickness, and the width of the gap between the coil and the sample (Fig. 4), and \(U_{\infty}\) is the amplitude of the reflected signal at \(\sigma\rightarrow\infty\). The amplitude and phase of the reflected signal are shown in Fig. 5. The geometrical parameters correspond to those of the measuring unit in Ref. [21]. Since the amplitude and phase of the reflected signal depend on the product \(\sigma f\), the sensitivity threshold of conductivity is inversely proportional to the frequency \(f\). 
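For illustration, Eq. (6) can be evaluated numerically; the following sketch uses a truncated \(k\)-grid and the geometry quoted for Ref. [21], with the grid limits and sampling being our choices rather than the authors'.

```python
# Numerical sketch of Eq. (6): normalized reflected signal R(sigma, f).
import numpy as np
from scipy.integrate import trapezoid
from scipy.special import j1

MU0 = 4e-7 * np.pi                                  # vacuum permeability, H/m

def reflected_signal(sigma, f, a=1.5e-3, t=0.3e-3, u=0.2e-3):
    """sigma in S/m, f in Hz; a, t, u as in Fig. 4 (values of Ref. [21])."""
    omega = 2.0 * np.pi * f
    k = np.linspace(1.0, 200.0 / a, 100_000)        # truncated k-grid, 1/m
    kp = np.sqrt(k**2 + 1j * omega * MU0 * sigma)   # complex k'
    num = ((kp**2 - k**2) * np.exp(-2 * k * u) * np.sinh(kp * t) * j1(k * a)**2
           / ((k + kp)**2 * np.exp(kp * t) - (kp - k)**2 * np.exp(-kp * t)))
    den = np.exp(-2 * k * u) * j1(k * a)**2
    R = 2.0 * trapezoid(num, k) / trapezoid(den, k)
    return np.abs(R), np.angle(R)                   # amplitude and phase

# Example: sigma = 1e4 S/m (i.e. 10^2 (Ohm cm)^-1) probed at 50 MHz.
amp, phase = reflected_signal(1e4, 50e6)
```

Sweeping `sigma` over several decades reproduces the dependence on the product \(\sigma f\) discussed above.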
Under heavy electromagnetic interference, the dynamical range of the technique is usually no greater than two orders of magnitude in conductivity. The analysis above gives a good description of the RF measurements in the relatively low frequency band (\(f\approx 50\) MHz) [28; 21; 29]. At the same time, when a high frequency is applied (\(f\approx 500\div 700\) MHz) [22], the role of the interturn and coil-sample capacitances increases. Therefore, the coil becomes a resonant circuit, and the analysis of the measuring unit operation becomes more complex, although the underlying physical phenomenon remains the same. The operation of the high-frequency measuring unit was studied in detail in Ref. [31]. ## 4 Semiconductor-metal transition in FeSi Experimental data from Refs. [21] and [22] are shown in Fig. 6 together with estimations of the dynamical ranges corresponding to the different techniques. One can see that they are in reasonable agreement with each other. The RF technique at 700 MHz [22] was sensitive to much lower conductivity than that at 50 MHz, and even more so than the inductive compensation method. That is why it gave the magnetic field dependence of the conductivity from the initial values up to about \(10^{2}\) [Ohm cm]\({}^{-1}\) at 6 K and \(10^{3}\) [Ohm cm]\({}^{-1}\) at 53 K. The inductive technique had a much higher sensitivity threshold and provided measurements above these values, which occurred at higher magnetic fields. One can see in Fig. 6 that the low-temperature field dependencies of the conductivity of Refs. [21; 22] are in reasonable agreement and supplement each other. On the other hand, the results in Fig. 6 demonstrate that the semiconductor-metal transition in FeSi is a complex phenomenon which is extended in magnetic field [21; 22]. The magnetoresistance of FeSi in moderate magnetic fields up to 10 T is determined by the interplay of the surface and bulk conductivity [4; 3; 12]. At higher magnetic fields, the in-gap spin-polaron-like states should play an important role because their characteristic energy scale is about 10 meV [11]. Then, the collapse of the intrinsic band gap occurs at ultrahigh magnetic fields. The first-order transition with the magnetization step was observed at 5 K, and it was absent at 77 K [21]. The line of the first-order transition with the critical point in the magnetic phase diagram [16] is still an open question which can be resolved by magnetization measurements at intermediate temperatures. Figure 5: Amplitude \(R\) and phase \(\varphi\) of the normalized reflected signal of the RF single-coil measuring unit (Eq. 6) as functions of the sample conductivity and the frequency of the probe signal. The parameters correspond to Ref. [21]: \(a=1.5\) mm, \(t=0.3\) mm, \(u=0.2\) mm (see Fig. 4). Figure 6: Magnetic field dependence of the conductivity of FeSi at different initial temperatures. The full and open symbols correspond to the results of Refs. [21] and [22], respectively. The dashed lines are estimations of the dynamical ranges \(\Delta(T)\) [22, 31], and the dash-dotted black line is the lower threshold of the inductive technique sensitivity (see Eq. 2). The circles and squares are RF measurements at 700 MHz [22], the diamond is the RF measurement at 50 MHz [21], and the up- and down-triangles show the results of the inductive measurements [21]. ## 5 Acknowledgement The work was supported by the National Center Physics and Mathematics (project "Investigation under high and ultrahigh magnetic fields"). The authors would like to thank Professor A.E. Dubinov for valuable comments.
2308.10017
On *-fusion frames for Hilbert C*-modules
Our main goal in this paper is to generalize to Hilbert C*-modules the concept of fusion frames. Indeed, we introduce the notion of *-fusion frames associated to weighted sequences of orthogonally complemented submodules of a Hilbert C*-module, and prove some fundamental results for such *-fusion frames.
Nadia Assila, Samir Kabbaj, Hicham Zoubeir
2023-08-19T13:54:11Z
http://arxiv.org/abs/2308.10017v1
# On *-fusion frames for Hilbert C*-modules ###### Abstract Our main goal in this paper is to generalize to Hilbert C*-modules the concept of fusion frames. Indeed, we introduce the notion of *-fusion frames associated to weighted sequences of orthogonally complemented submodules of a Hilbert C*-module, and prove some fundamental results for such *-fusion frames. **2010 Mathematics Subject Classification**: 42C15, 46L05, 46L08. **Key words**: C*-algebras, Hilbert C*-modules, *-fusion frames. ## 1 Introduction In 1946, Gabor ([18]) introduced a new method for signal reconstruction from elementary signals. In 1952, Duffin and Schaeffer ([14]) developed, in the field of nonharmonic Fourier series, a similar tool and introduced frame theory for Hilbert spaces. For more than thirty years, the results of Duffin and Schaeffer did not receive from the mathematical community the interest they deserved, until the publication of the work of Higgins and Young ([27]), where the authors studied frames in abstract Hilbert spaces. In 1986, the work of Daubechies, Grossmann and Meyer ([11]) gave frame theory the momentum it lacked and allowed it to be widely studied. From this date, several generalizations of the notion of frame have been developed, for example the concept of atomic decompositions ([15]), the concept of continuous frames in Hilbert spaces ([2]), the concept of frames in Banach spaces ([6]), the notion of \(p\)-frames ([1]), the notion of frames associated with measurable spaces ([17], [10]), and the concept of frames of subspaces ([7]). In 2000, Frank and Larson ([16]) introduced the notion of frames in Hilbert C*-modules as a generalization of frames in Hilbert spaces. Since this innovative work, several generalizations of Hilbertian frame theory have emerged. Let us cite for example the work of A. Khosravi and B. Khosravi ([21]), who introduced the notion of g-frames in a Hilbert C*-module, the contribution of A. Alijani and M. Dehghan ([3]), who developed the notion of *-frames in a Hilbert C*-module, and the work of N. Bounader and S. Kabbaj ([5]), who introduced the notion of *-g-frames for Hilbert C*-modules. Fusion frames, as a generalisation of frames, were introduced by Casazza, Kutyniok and Li in ([8]). The motivation behind fusion frames comes from the need to efficiently process and analyze large data sets. A natural idea is to divide these data sets into suitable smaller and parallel ones that can be processed independently. Fusion frame systems are created to meet these needs through the weighted coherent combination of subsystems. By this approach, fusion frame theory provides a flexible framework that takes into consideration local frames associated with the subsystems ([8]). Our main goal in this paper is to generalize to Hilbert C*-modules the concept of fusion frames. Indeed, we introduce the notion of *-fusion frames associated to weighted sequences of orthogonally complemented submodules of a Hilbert C*-module, and prove some fundamental results for such *-fusion frames. The paper is structured as follows: In Section 2, we state some notations, definitions and results that are useful for the proofs of the fundamental results of the paper. In Section 3, we give the definition of a *-fusion frame associated to a weighted sequence of orthogonally complemented submodules of a Hilbert C*-module, and the definitions of the analysis and synthesis operators related to a *-fusion frame. 
After that, we prove the following properties of *-fusion frames: * The analysis operator of a *-fusion frame is a well-defined, bounded, linear, adjointable operator. * The synthesis operator of a *-fusion frame is a well-defined, bounded, linear, positive, self-adjoint and invertible operator. Furthermore, this operator gives rise to a reconstruction formula. * Every Hilbert C*-module which has a *-fusion frame is countably generated. * There exists a characterisation of *-fusion frames in terms of the norm of the Hilbert C*-module. * Let \(E\) and \(F\) be Hilbert \(\mathfrak{A}\)-modules. Under some conditions on \(E\), the image of a *-fusion frame of \(E\) by an orthogonality preserving mapping from the Hilbert \(\mathfrak{A}\)-module \(E\) to the Hilbert \(\mathfrak{A}\)-module \(F\) is also a *-fusion frame of \(F.\) * Let \(\left(\mathfrak{H}_{n}\right)_{n\in\mathbb{N}^{*}}\) be a sequence of orthogonally complemented submodules of a Hilbert \(\mathfrak{A}\)-module \(\mathfrak{H}.\) The set of weights \(\left(\omega_{n}\right)_{n\in\mathbb{N}^{*}}\) such that \(\left(\left(\mathfrak{H}_{n},\omega_{n}\right)\right)_{n\in\mathbb{N}^{*}}\) is a *-fusion frame of \(\mathfrak{H}\) is a convex cone of the \(\mathbb{C}\)-vector space \(\mathfrak{A}^{\mathbb{N}^{*}}.\) Section 4 is devoted to the detailed study of two examples of *-fusion frames. In Section 5: * using a distance already introduced by Dragan S. Djordjevic in ([13]), we define the notion of the angle between two orthogonally complemented submodules of a Hilbert C*-module; * using the same distance, we introduce, on the set of all sequences of orthogonally complemented submodules of a Hilbert C*-module, a topology defined by an écart; * relying on the constructed topology and on the notion of angle that we have introduced, we prove for *-fusion frames a perturbation result of topological and geometric character. For all the material on C*-algebras and Hilbert C*-modules, one can refer to ([12]), ([23]), ([25]) and ([26]). ## 2 Preliminary notes Let \(E\) and \(F\) be nonempty sets. We denote by \(F^{E}\) the set of all mappings from \(E\) to \(F.\) Let \(f:E\to F\) be a mapping and \(B\) a subset of \(F\) such that \(f\left(E\right)\subset B.\) We denote by \(f\left|{}_{E}^{B}\right.\) the mapping \(f\left|{}_{E}^{B}:E\to B,\)\(x\mapsto f(x).\) Let \(\left(V,+,.\right)\) be a complex vector space and \(W\) a nonempty subset of \(V.\) \(W\) is said to be a convex cone if for every \(x,y\in W\) and \(\lambda\in\mathbb{R}^{+*}\) we have \(x+y,\)\(\lambda x\in W\). For every Banach space \(E\) we denote by \(\mathcal{B}\left(E\right)\) the Banach space of all bounded linear operators from \(E\) to itself. 
For each \(p\in]1,+\infty[\), we denote by \(l_{p}\left(\mathbb{C}\right)\) the set of all sequences \(z:=\left(z_{n}\right)_{n\in\mathbb{N}^{*}},\) \(z_{n}\in\mathbb{C},\) with \(\sum\limits_{n=1}^{+\infty}\left|z_{n}\right|^{p}<+\infty.\) \(l_{p}\left(\mathbb{C}\right)\) is a Banach space when endowed with the norm \[\left\|.\right\|_{p}:l_{p}\left(\mathbb{C}\right)\rightarrow\mathbb{R},\quad z:=\left(z_{n}\right)_{n\in\mathbb{N}^{*}}\mapsto\left\|z\right\|_{p}:=\left(\sum\limits_{n=1}^{+\infty}\left|z_{n}\right|^{p}\right)^{\frac{1}{p}}\] We denote by \(l_{\infty}\left(\mathbb{C}\right)\) the set of all bounded sequences \(z:=\left(z_{n}\right)_{n\in\mathbb{N}^{*}},\) \(z_{n}\in\mathbb{C}.\) \(l_{\infty}\left(\mathbb{C}\right)\) is a Banach space when endowed with the norm \[\left\|.\right\|_{\infty}:l_{\infty}\left(\mathbb{C}\right)\rightarrow\mathbb{R},\quad z:=\left(z_{n}\right)_{n\in\mathbb{N}^{*}}\mapsto\left\|z\right\|_{\infty}:=\sup\limits_{n\in\mathbb{N}^{*}}\left|z_{n}\right|\] Let \(\mathfrak{A}\) be a C*-algebra. The identity element of \(\mathfrak{A}\) with respect to the addition is denoted by \(0_{\mathfrak{A}}.\) The C*-algebra \(\mathfrak{A}\) is called unital if it has an identity element with respect to the multiplication, which we denote by \(1_{\mathfrak{A}},\) with the condition that \(1_{\mathfrak{A}}\neq 0_{\mathfrak{A}}.\) An element \(a\in\mathfrak{A}\) is said to be self-adjoint if \(a^{*}=a\); \(a\) is said to be positive if \(a^{*}=a\) and \(\sigma(a)\subset\mathbb{R}^{+}.\) An element \(a\in\mathfrak{A}\) is called strictly positive if for every nonzero positive linear form \(f\) we have \(f\left(a\right)\in\mathbb{R}^{+*}.\) **Proposition 2.1.** _Let \(\mathfrak{A}\) be a unital C*-algebra and \(a\) a self-adjoint element of \(\mathfrak{A}.\) Then \(a\) is a positive element of \(\mathfrak{A}\) if and only if \(a=b^{2}\) for some self-adjoint element \(b\) of \(\mathfrak{A}.\)_ **Proposition 2.2.** _Let \(\mathfrak{A}\) be a unital C*-algebra and \(a\in\mathfrak{A}\) a positive element. Then there exists a unique positive element \(b\in\mathfrak{A}\) such that \(a=b^{2}.\) The element \(b\) is then called the square root of \(a\) and is denoted by \(a^{\frac{1}{2}}.\)_ **Proposition 2.3.** _Let \(\mathfrak{A}\) be a unital C*-algebra and \(a\in\mathfrak{A}\) a positive element. Then \(a\) is strictly positive if and only if \(a\) is invertible._ The center \(Z\left(\mathfrak{A}\right)\) of \(\mathfrak{A}\) is the set of all elements \(a\in\mathfrak{A}\) such that \(ab=ba\) for every \(b\in\mathfrak{A}.\) Each element of \(Z\left(\mathfrak{A}\right)\) is called a central element of \(\mathfrak{A}.\) **Proposition 2.4.** _For every positive element \(a\in Z(\mathfrak{A}),\) \(a^{\frac{1}{2}}\) belongs to \(Z(\mathfrak{A}).\)_ We consider the binary relation \(\preccurlyeq\) defined on \(\mathfrak{A}\) in the following way: \[a\preccurlyeq b\text{ if and only if }b-a\text{ is positive.}\] Then \(\preccurlyeq\) is a partial order on \(\mathfrak{A}.\) Let \(\mathfrak{H}\) be a Hilbert \(\mathfrak{A}\)-module and \(\left\langle.,.\right\rangle\) the inner product on \(\mathfrak{H}.\) For every \(x\in\mathfrak{H}\) we set \[\left|x\right|:=\left(\left\langle x,x\right\rangle\right)^{\frac{1}{2}},\quad\left\|x\right\|_{\mathfrak{H}}:=\left\||x|\right\|_{\mathfrak{A}}\] **Proposition 2.5. 
([23],** page 6) _\(\left\|.\right\|_{\mathfrak{H}}\) is then a norm on \(\mathfrak{H}\) which satisfies, for all \(x_{1},...,x_{N},\) \(y_{1},...,y_{N}\in\mathfrak{H},\) the following property:_ \[\left\|\sum_{j=1}^{N}\left\langle x_{j},y_{j}\right\rangle\right\|_{\mathfrak{A}}^{2}\leq\left\|\sum_{j=1}^{N}\left\langle x_{j},x_{j}\right\rangle\right\|_{\mathfrak{A}}\left\|\sum_{j=1}^{N}\left\langle y_{j},y_{j}\right\rangle\right\|_{\mathfrak{A}}\] A Hilbert \(\mathfrak{A}\)-module \(\mathfrak{H}\) is said to be countably generated if there exists a sequence \(\left(x_{n}\right)_{n\in\mathbb{N}^{*}}\) of elements of \(\mathfrak{H}\) such that \(\mathfrak{H}\) equals the norm-closure of the \(\mathfrak{A}\)-linear span of the set \(\left\{x_{n}:n\in\mathbb{N}^{*}\right\}.\) Let \(\mathfrak{K}\) be a closed submodule of a Hilbert \(\mathfrak{A}\)-module \(\mathfrak{H}\). The orthogonal complement \(\mathfrak{K}^{\perp}\) of \(\mathfrak{K}\) is the set \[\mathfrak{K}^{\perp}:=\left\{u\in\mathfrak{H}:\text{ }\left\langle u,x\right\rangle=0_{\mathfrak{A}},\text{ }x\in\mathfrak{K}\right\}\] \(\mathfrak{K}^{\perp}\) is then a closed submodule of the Hilbert \(\mathfrak{A}\)-module \(\mathfrak{H}.\) We say that \(\mathfrak{K}\) is orthogonally complemented if \[\mathfrak{H}=\mathfrak{K}\oplus\mathfrak{K}^{\perp}\] The projection onto \(\mathfrak{K}\) related to this direct sum is then called the orthogonal projection onto \(\mathfrak{K}\) and is denoted by \(P_{\mathfrak{K}}.\) Let \(\mathfrak{H}\) and \(\mathfrak{K}\) be Hilbert \(\mathfrak{A}\)-modules. A mapping \(T:\mathfrak{H}\rightarrow\mathfrak{K}\) is called an adjointable operator if there exists a mapping \(S:\mathfrak{K}\rightarrow\mathfrak{H}\) such that \[\left\langle T\left(u\right),v\right\rangle=\left\langle u,S\left(v\right)\right\rangle,\ u\in\mathfrak{H},\ v\in\mathfrak{K}\] We denote by \(Hom_{\mathfrak{A}}^{*}\left(\mathfrak{H},\mathfrak{K}\right)\) the set of all adjointable operators from \(\mathfrak{H}\) to \(\mathfrak{K}.\) **Proposition 2.6.** _Let \(\mathfrak{H}\) and \(\mathfrak{K}\) be Hilbert \(\mathfrak{A}\)-modules._ a. _If a mapping \(T:\mathfrak{H}\rightarrow\mathfrak{K}\) is an adjointable operator, then there exists a unique mapping \(S:\mathfrak{K}\rightarrow\mathfrak{H}\) such that_ \[\left\langle T\left(u\right),v\right\rangle=\left\langle u,S\left(v\right)\right\rangle,\ u\in\mathfrak{H},\ v\in\mathfrak{K}\] _The mapping \(S\) is then called the adjoint of \(T\) and is denoted by \(T^{*}.\)_ b. _Every adjointable operator \(T:\mathfrak{H}\rightarrow\mathfrak{K}\), together with its adjoint \(T^{*}:\mathfrak{K}\rightarrow\mathfrak{H}\), is a linear mapping of \(\mathfrak{A}\)-modules and a bounded linear operator of Banach spaces._ c. _\(Hom_{\mathfrak{A}}^{*}\left(\mathfrak{H},\mathfrak{K}\right)\) is a Banach space with the usual operator norm._ d. _\(Hom_{\mathfrak{A}}^{*}\left(\mathfrak{H},\mathfrak{H}\right)\) is a C*-algebra._ **Theorem 2.7.** ([4], page 472) _Let \(\mathfrak{H}\) be a Hilbert \(\mathfrak{A}\)-module, and \(T\in\mathcal{B}\left(\mathfrak{H}\right)\) such that \(T^{*}=T\). The following statements are equivalent:_ 1. _\(T\) is invertible._ 2. _There exist real constants \(m,M>0\) such that_ \[m\left\|x\right\|_{\mathfrak{H}}\leq\left\|T\left(x\right)\right\|_{\mathfrak{H}}\leq M\left\|x\right\|_{\mathfrak{H}},\ x\in\mathfrak{H}\] 3. 
3. _There exist real constants \(m_{1},M_{1}>0\) such that_ \[m_{1}\left\langle x,x\right\rangle\preccurlyeq\left\langle T\left(x\right),T(x)\right\rangle\preccurlyeq M_{1}\left\langle x,x\right\rangle,\ x\in\mathfrak{H}\]

Let \(\mathfrak{I}\) be an ideal of \(\mathfrak{A}\). \(\mathfrak{I}\) is said to be an essential ideal of \(\mathfrak{A}\) if the following implication holds for every ideal \(\mathfrak{N}\) of \(\mathfrak{A}\): \[\mathfrak{N}\cap\mathfrak{I}=\left\{0_{\mathfrak{A}}\right\}\implies\mathfrak{N}=\left\{0_{\mathfrak{A}}\right\}\]

**Theorem 2.8.** ([23], page 14)

1. _For any C*-algebra \(\mathfrak{A}\) there exists a unique (up to isomorphism) C*-algebra \(M\left(\mathfrak{A}\right)\) such that \(\mathfrak{A}\) is an essential ideal of \(M\left(\mathfrak{A}\right)\) and, for every C*-algebra \(\mathcal{C}\) containing \(\mathfrak{A}\) as an essential ideal, there is an injective homomorphism of C*-algebras \(\Phi:\mathcal{C}\to M\left(\mathfrak{A}\right)\) whose restriction to \(\mathfrak{A}\) is the identity. \(M\left(\mathfrak{A}\right)\) is called the multiplier algebra of \(\mathfrak{A}\)._

2. _If \(\mathfrak{A}\) is unital then \(M\left(\mathfrak{A}\right)=\mathfrak{A}\)._

Let \(\mathfrak{A}\) be a C*-algebra and \(\mathfrak{H}\) a Hilbert \(\mathfrak{A}\)-module. We denote by \(\mathbb{J}_{\mathfrak{H}}\) the ideal of \(\mathfrak{A}\) generated by \(\left\{\left\langle x,y\right\rangle:x,y\in\mathfrak{H}\right\}\). We say that \(\mathfrak{H}\) is full if \(\mathfrak{A}=\mathbb{J}_{\mathfrak{H}}\).

Let \(\mathfrak{H}\) and \(\mathfrak{K}\) be Hilbert \(\mathfrak{A}\)-modules and \(\Psi:\mathfrak{H}\rightarrow\mathfrak{K}\) an \(\mathfrak{A}\)-linear mapping (not necessarily bounded). We say that \(\Psi\) is orthogonality preserving if \[\left\langle x,y\right\rangle=0_{\mathfrak{A}}\implies\left\langle\Psi\left(x\right),\Psi\left(y\right)\right\rangle=0_{\mathfrak{A}}\] for every \(x,y\in\mathfrak{H}\). Combining Theorem 2.8 with the main result of [24], we obtain the following result.

**Theorem 2.9.** _Let \(\mathfrak{H}\) and \(\mathfrak{K}\) be Hilbert \(\mathfrak{A}\)-modules and \(\Psi:\mathfrak{H}\rightarrow\mathfrak{K}\) an \(\mathfrak{A}\)-linear mapping. We assume that \(\mathfrak{A}\) is unital and that \(\mathfrak{H}\) is full._

a. \(\Psi\) _is orthogonality preserving if and only if there exists a central positive element \(\nu\) of \(\mathfrak{A}\) such that_ \[\left\langle\Psi\left(x\right),\Psi\left(y\right)\right\rangle=\nu\left\langle x,y\right\rangle,\ x,y\in\mathfrak{H} \tag{1}\]

b. _Assume that \(\Psi\) is orthogonality preserving. Then \(\Psi\) is bounded and the central positive element \(\nu\) of \(\mathfrak{A}\) satisfying the relation (1) is unique. We therefore denote \(\nu\) by \(\nu\left(\Psi\right)\)._

c. _Assume that \(\Psi\) is bijective and orthogonality preserving. Then \(\nu\left(\Psi\right)^{\frac{1}{2}}\) is a strictly positive, hence invertible, element of \(\mathfrak{A}\), and \(\Psi:\mathfrak{H}\rightarrow\mathfrak{K}\) is an isomorphism of Hilbert \(\mathfrak{A}\)-modules._
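Before turning to Proposition 2.10, it may help to see Theorem 2.7 in the simplest case \(\mathfrak{A}=\mathbb{C}\), where a Hilbert \(\mathfrak{A}\)-module is an ordinary Hilbert space. The following minimal numerical sketch is an illustration only, not part of the development above; the matrix, the dimension and the tolerance are ad hoc choices. For a self-adjoint matrix the constants \(m\) and \(M\) of statement 2 can be read off the spectrum.

```python
import numpy as np

# Sanity check of Theorem 2.7 in the scalar case A = C, H = C^d:
# for a self-adjoint T, invertibility is equivalent to a two-sided bound
# m ||x|| <= ||T x|| <= M ||x||, with m, M read off the spectrum of T.
rng = np.random.default_rng(0)
d = 5
B = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
T = B + B.conj().T + 3.0 * np.eye(d)         # self-adjoint (Hermitian) matrix
eig = np.linalg.eigvalsh(T)                  # real spectrum of T
m, M = np.abs(eig).min(), np.abs(eig).max()  # candidate constants of Theorem 2.7

for _ in range(100):
    x = rng.standard_normal(d) + 1j * rng.standard_normal(d)
    nx, nTx = np.linalg.norm(x), np.linalg.norm(T @ x)
    assert m * nx <= nTx + 1e-9 <= M * nx + 1e-9
print("two-sided bound of Theorem 2.7 holds with m =", m, "M =", M)
```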
**Proposition 2.10.**

a. _Let \(\left(E,\left\|.\right\|_{E}\right)\), \(\left(F,\left\|.\right\|_{F}\right)\) be Banach spaces. If \(\varphi:E\to F\) is an isomorphism of Banach spaces, then_ \[\left\|\varphi^{-1}\right\|_{\mathcal{B}\left(F,E\right)}^{-1}\left\|x\right\|_{E}\leq\left\|\varphi\left(x\right)\right\|_{F}\leq\left\|\varphi\right\|_{\mathcal{B}\left(E,F\right)}\left\|x\right\|_{E},\ x\in E\]

b. _Let \(\mu\) be a positive element of a unital C*-algebra \(\mathfrak{A}\) and \(\mathfrak{H}\) a Hilbert \(\mathfrak{A}\)-module._

i. _Assume that \(\mu\) is central in \(\mathfrak{A}\). Then the mapping \(L_{\mu}:\mathfrak{A}\rightarrow\mathfrak{A}\), \(a\mapsto\mu a\) is increasing._

ii. _Assume now that \(\mu\) is strictly positive in \(\mathfrak{A}\). Then the following properties hold_ \[\left\{\begin{array}{l}\left\|\mu^{-1}\right\|_{\mathfrak{A}}^{-1}\left\|a\right\|_{\mathfrak{A}}\leq\left\|L_{\mu}\left(a\right)\right\|_{\mathfrak{A}},\ a\in\mathfrak{A}\\ \left\|\mu^{-1}\right\|_{\mathfrak{A}}^{-1}\left\|x\right\|_{\mathfrak{H}}\leq\left\|\mu x\right\|_{\mathfrak{H}}\leq\left\|\mu\right\|_{\mathfrak{A}}\left\|x\right\|_{\mathfrak{H}},\ x\in\mathfrak{H}\end{array}\right.\]

**Proof**

1. Let \(x\in E\). It is clear that \[\left\|\varphi\left(x\right)\right\|_{F}\leq\left\|\varphi\right\|_{\mathcal{B}\left(E,F\right)}\left\|x\right\|_{E}\] On the other hand, we can write for every \(x\in E\) \[\left\|x\right\|_{E}=\left\|\varphi^{-1}\left(\varphi\left(x\right)\right)\right\|_{E}\leq\left\|\varphi^{-1}\right\|_{\mathcal{B}\left(F,E\right)}\left\|\varphi\left(x\right)\right\|_{F}\] It follows that \[\left\|\varphi^{-1}\right\|_{\mathcal{B}\left(F,E\right)}^{-1}\left\|x\right\|_{E}\leq\left\|\varphi\left(x\right)\right\|_{F}\] Consequently we obtain \[\left\|\varphi^{-1}\right\|_{\mathcal{B}\left(F,E\right)}^{-1}\left\|x\right\|_{E}\leq\left\|\varphi\left(x\right)\right\|_{F}\leq\left\|\varphi\right\|_{\mathcal{B}\left(E,F\right)}\left\|x\right\|_{E},\ x\in E\]

2. i. Assume that \(\mu\) is a positive central element of \(\mathfrak{A}\). Let \(v,w\in\mathfrak{A}\) be such that \(v\preccurlyeq w\), so that \(w-v\) is a positive element of \(\mathfrak{A}\). Using the centrality of \(\mu\) we can write \[L_{\mu}\left(w\right)-L_{\mu}\left(v\right)=\mu\left(w-v\right)=\left(w-v\right)^{\frac{1}{2}}\left(w-v\right)^{\frac{1}{2}}\mu=\left(w-v\right)^{\frac{1}{2}}\mu\left(w-v\right)^{\frac{1}{2}}=\left(\left(w-v\right)^{\frac{1}{2}}\mu^{\frac{1}{2}}\right)\left(\mu^{\frac{1}{2}}\left(w-v\right)^{\frac{1}{2}}\right)\] But the elements \(\left(w-v\right)^{\frac{1}{2}}\) and \(\mu^{\frac{1}{2}}\) are self-adjoint. It follows that \[L_{\mu}\left(w\right)-L_{\mu}\left(v\right)=\left(\left(w-v\right)^{\frac{1}{2}}\mu^{\frac{1}{2}}\right)\left(\left(w-v\right)^{\frac{1}{2}}\mu^{\frac{1}{2}}\right)^{*}\] Hence \(L_{\mu}\left(w\right)-L_{\mu}\left(v\right)\) is a positive element of \(\mathfrak{A}\), that is \(L_{\mu}\left(v\right)\preccurlyeq L_{\mu}\left(w\right)\). Consequently the mapping \(L_{\mu}\) is increasing.

ii.
We have for every \(a\in\mathfrak{A}\) \[\left\|a\right\|_{\mathfrak{A}}=\left\|\mu^{-1}\left(\mu a\right)\right\|_{\mathfrak{A}}\leq\left\|\mu^{-1}\right\|_{\mathfrak{A}}\left\|L_{\mu}\left(a\right)\right\|_{\mathfrak{A}}\] It follows that \[\left\|\mu^{-1}\right\|_{\mathfrak{A}}^{-1}\left\|a\right\|_{\mathfrak{A}}\leq\left\|L_{\mu}\left(a\right)\right\|_{\mathfrak{A}},\ a\in\mathfrak{A}\] We also have for every \(x\in\mathfrak{H}\), since \(\mu^{*}=\mu\), \[\left\|\mu x\right\|_{\mathfrak{H}}=\sqrt{\left\|\left\langle\mu x,\mu x\right\rangle\right\|_{\mathfrak{A}}}=\sqrt{\left\|\mu\left\langle x,x\right\rangle\mu^{*}\right\|_{\mathfrak{A}}}\leq\sqrt{\left\|\mu\right\|_{\mathfrak{A}}\left\|\left\langle x,x\right\rangle\right\|_{\mathfrak{A}}\left\|\mu^{*}\right\|_{\mathfrak{A}}}=\left\|\mu\right\|_{\mathfrak{A}}\left\|x\right\|_{\mathfrak{H}}\] Since \(\mu\) is strictly positive in \(\mathfrak{A}\), hence invertible, we can write for every \(x\in\mathfrak{H}\) \[\left\|x\right\|_{\mathfrak{H}}=\left\|\mu^{-1}\left(\mu x\right)\right\|_{\mathfrak{H}}\leq\left\|\mu^{-1}\right\|_{\mathfrak{A}}\left\|\mu x\right\|_{\mathfrak{H}}\] It follows that \[\left\|\mu^{-1}\right\|_{\mathfrak{A}}^{-1}\left\|x\right\|_{\mathfrak{H}}\leq\left\|\mu x\right\|_{\mathfrak{H}}\] The final conclusion is that \[\left\|\mu^{-1}\right\|_{\mathfrak{A}}^{-1}\left\|x\right\|_{\mathfrak{H}}\leq\left\|\mu x\right\|_{\mathfrak{H}}\leq\left\|\mu\right\|_{\mathfrak{A}}\left\|x\right\|_{\mathfrak{H}},\ x\in\mathfrak{H}\] This achieves the proof of the proposition. \(\square\)

## 3 *-fusion frames in Hilbert \(\mathfrak{A}\)-modules

### Definitions

**Definition 3.1.** _Let \(\mathfrak{A}\) be a unital C*-algebra, \(\mathfrak{H}\) a Hilbert \(\mathfrak{A}\)-module and \(\left(\mathfrak{H}_{n}\right)_{n\in\mathbb{N}^{*}}\) a sequence of orthogonally complemented submodules of \(\mathfrak{H}\). For each \(n\in\mathbb{N}^{*}\) we denote by \(P_{\mathfrak{H}_{n}}\) the orthogonal projection of \(\mathfrak{H}\) onto \(\mathfrak{H}_{n}\)._

a. _A sequence \(\left(\omega_{n}\right)_{n\in\mathbb{N}^{*}}\) of central and strictly positive elements of \(\mathfrak{A}\) is called a weight of the C*-algebra \(\mathfrak{A}\). We denote by \(\mathcal{W}\left(\mathfrak{A}\right)\) the set of all weights of the C*-algebra \(\mathfrak{A}\). The sequence \(\left(\left(\mathfrak{H}_{n},\omega_{n}\right)\right)_{n\in\mathbb{N}^{*}}\) is then called a weighted sequence of orthogonally complemented submodules of \(\mathfrak{H}\)._

b. _Let \(\mathcal{G}:=\left(\left(\mathfrak{H}_{n},\omega_{n}\right)\right)_{n\in\mathbb{N}^{*}}\) be a weighted sequence of orthogonally complemented submodules of \(\mathfrak{H}\). We say that \(\mathcal{G}\) is a *-fusion frame of \(\mathfrak{H}\) if for each \(x\in\mathfrak{H}\) the series \(\sum\omega_{n}^{2}\left|P_{\mathfrak{H}_{n}}\left(x\right)\right|^{2}\) converges in the norm \(\left\|.\right\|_{\mathfrak{A}}\) and there exist two strictly positive elements \(A\) and \(B\) of \(\mathfrak{A}\) such that_ \[\left|Ax\right|^{2}\preccurlyeq\sum_{n=1}^{+\infty}\omega_{n}^{2}\left|P_{\mathfrak{H}_{n}}\left(x\right)\right|^{2}\preccurlyeq\left|Bx\right|^{2},\ x\in\mathfrak{H} \tag{2}\] _The elements \(A\) and \(B\) are then respectively called a lower and an upper bound of \(\mathcal{G}\). If \(A=B\), the *-fusion frame \(\mathcal{G}\) is said to be tight (or, more precisely, A-tight)._
_If \(A=B=1_{\mathfrak{A}}\), then the *-fusion frame \(\mathcal{G}\) is said to be a Parseval *-fusion frame of \(\mathfrak{H}\)._

c. _The set of all sequences \(\left(\omega_{n}\right)_{n\in\mathbb{N}^{*}}\in\mathcal{W}\left(\mathfrak{A}\right)\) such that \(\left(\left(\mathfrak{H}_{n},\omega_{n}\right)\right)_{n\in\mathbb{N}^{*}}\) is a *-fusion frame of \(\mathfrak{H}\) is called the set of multipliers of the sequence \(\left(\mathfrak{H}_{n}\right)_{n\in\mathbb{N}^{*}}\) and is denoted by \(\mathfrak{m}\left(\left(\mathfrak{H}_{n}\right)_{n\in\mathbb{N}^{*}}\right)\)._

In the next definition, \(l_{2}\left(\mathfrak{H}\right)\) denotes the Hilbert \(\mathfrak{A}\)-module of all sequences \(y:=\left(y_{n}\right)_{n\in\mathbb{N}^{*}}\) of elements of \(\mathfrak{H}\) such that the series \(\sum\left\langle y_{n},y_{n}\right\rangle\) converges in \(\mathfrak{A}\), equipped with the inner product \(\left\langle y,z\right\rangle_{l_{2}\left(\mathfrak{H}\right)}:=\sum_{n=1}^{+\infty}\left\langle y_{n},z_{n}\right\rangle\).

**Definition 3.2.** _Let \(\mathcal{G}:=\left(\left(\mathfrak{H}_{n},\omega_{n}\right)\right)_{n\in\mathbb{N}^{*}}\) be a *-fusion frame of \(\mathfrak{H}\)._

a. _The mapping \(\mathcal{S}_{\mathcal{G}}:\mathfrak{H}\rightarrow\mathfrak{H}\), \(x\mapsto\sum_{n=1}^{+\infty}\omega_{n}^{2}P_{\mathfrak{H}_{n}}\left(x\right)\) (if it is well-defined) is then called the *-fusion frame operator associated to \(\mathcal{G}\)._

b. _The mapping \(\mathcal{T}_{\mathcal{G}}:\mathfrak{H}\rightarrow l_{2}\left(\mathfrak{H}\right)\), \(x\mapsto\left(\omega_{n}P_{\mathfrak{H}_{n}}\left(x\right)\right)_{n\in\mathbb{N}^{*}}\) is called the *-fusion frame analysis operator associated to \(\mathcal{G}\)._

### Main results

**Theorem 3.3.** _Let \(\mathcal{G}:=\left(\left(\mathfrak{H}_{n},\omega_{n}\right)\right)_{n\in\mathbb{N}^{*}}\) be a *-fusion frame of \(\mathfrak{H}\). Then \(\mathcal{T}_{\mathcal{G}}:\mathfrak{H}\rightarrow l_{2}\left(\mathfrak{H}\right)\) is a well-defined bounded linear adjointable operator and its adjoint is the (synthesis) operator_ \[\mathcal{T}_{\mathcal{G}}^{*}:l_{2}\left(\mathfrak{H}\right)\rightarrow\mathfrak{H},\quad y:=\left(y_{n}\right)_{n\in\mathbb{N}^{*}}\mapsto\mathcal{T}_{\mathcal{G}}^{*}\left(y\right):=\sum_{n=1}^{+\infty}\omega_{n}P_{\mathfrak{H}_{n}}\left(y_{n}\right)\]

**Proof**

**1. The mapping \(\mathcal{T}_{\mathcal{G}}\) is a well-defined bounded linear operator.** Let \(x\in\mathfrak{H}\). The series \(\sum\left|\omega_{j}P_{\mathfrak{H}_{j}}\left(x\right)\right|^{2}\) is convergent by Definition 3.1. Hence the sequence \(\left(\omega_{n}P_{\mathfrak{H}_{n}}\left(x\right)\right)_{n\in\mathbb{N}^{*}}\) belongs to \(l_{2}\left(\mathfrak{H}\right)\) and we have \[\left\|\left(\omega_{n}P_{\mathfrak{H}_{n}}\left(x\right)\right)_{n\in\mathbb{N}^{*}}\right\|_{l_{2}\left(\mathfrak{H}\right)}=\left\|\sum_{n=1}^{+\infty}\left|\omega_{n}P_{\mathfrak{H}_{n}}\left(x\right)\right|^{2}\right\|_{\mathfrak{A}}^{\frac{1}{2}}\leq\left\|\left|Bx\right|^{2}\right\|_{\mathfrak{A}}^{\frac{1}{2}}=\left\|Bx\right\|_{\mathfrak{H}}\leq\left\|B\right\|_{\mathfrak{A}}\left\|x\right\|_{\mathfrak{H}}\] Thus the mapping \(\mathcal{T}_{\mathcal{G}}\) is a well-defined bounded linear operator.

**2. The operator \(\mathcal{T}_{\mathcal{G}}\) is adjointable, and determination of its adjoint.** Let \(x\in\mathfrak{H}\), \(y:=\left(y_{j}\right)_{j\in\mathbb{N}^{*}}\in l_{2}\left(\mathfrak{H}\right)\), \(n,m\in\mathbb{N}^{*}\). To simplify the notation we set \[z_{n,m}:=\sum_{j=n}^{n+m}\omega_{j}P_{\mathfrak{H}_{j}}\left(y_{j}\right)\] We have, relying on Proposition 2.5,
\[\left\|z_{n,m}\right\|_{\mathfrak{H}}^{4}=\left\|\sum_{j=n}^{n+m}\left\langle z_{n,m},\omega_{j}P_{\mathfrak{H}_{j}}\left(y_{j}\right)\right\rangle\right\|_{\mathfrak{A}}^{2}=\left\|\sum_{j=n}^{n+m}\left\langle\omega_{j}P_{\mathfrak{H}_{j}}\left(z_{n,m}\right),y_{j}\right\rangle\right\|_{\mathfrak{A}}^{2}\leq\left\|\sum_{j=n}^{n+m}\left\langle\omega_{j}P_{\mathfrak{H}_{j}}\left(z_{n,m}\right),\omega_{j}P_{\mathfrak{H}_{j}}\left(z_{n,m}\right)\right\rangle\right\|_{\mathfrak{A}}\left\|\sum_{j=n}^{n+m}\left\langle y_{j},y_{j}\right\rangle\right\|_{\mathfrak{A}}\leq\left\|Bz_{n,m}\right\|_{\mathfrak{H}}^{2}\left\|\sum_{j=n}^{n+m}\left\langle y_{j},y_{j}\right\rangle\right\|_{\mathfrak{A}}\leq\left\|B\right\|_{\mathfrak{A}}^{2}\left\|z_{n,m}\right\|_{\mathfrak{H}}^{2}\left\|\sum_{j=n}^{n+m}\left\langle y_{j},y_{j}\right\rangle\right\|_{\mathfrak{A}}\] Consequently \[\left\|z_{n,m}\right\|_{\mathfrak{H}}\leq\left\|B\right\|_{\mathfrak{A}}\sqrt{\left\|\sum_{j=n}^{n+m}\left\langle y_{j},y_{j}\right\rangle\right\|_{\mathfrak{A}}} \tag{3}\] But since the series \(\sum\left\langle y_{n},y_{n}\right\rangle\) is convergent, we have \[\lim_{n\rightarrow+\infty}\sup_{m\in\mathbb{N}^{*}}\left\|\sum_{j=n}^{n+m}\left\langle y_{j},y_{j}\right\rangle\right\|_{\mathfrak{A}}=0\] Hence the series \(\sum\omega_{n}P_{\mathfrak{H}_{n}}\left(y_{n}\right)\) is convergent. Thus the mapping \[\mathcal{K}:l_{2}\left(\mathfrak{H}\right)\rightarrow\mathfrak{H},\quad y:=\left(y_{n}\right)_{n\in\mathbb{N}^{*}}\mapsto\sum_{n=1}^{+\infty}\omega_{n}P_{\mathfrak{H}_{n}}\left(y_{n}\right)\] is well-defined. It is clear that \(\mathcal{K}\) is also linear. Taking \(n=1\) and letting \(m\) tend to infinity, the relation (3) becomes \[\left\|\mathcal{K}\left(y\right)\right\|_{\mathfrak{H}}\leq\left\|B\right\|_{\mathfrak{A}}\left\|y\right\|_{l_{2}\left(\mathfrak{H}\right)},\ y\in l_{2}\left(\mathfrak{H}\right)\] Thus the mapping \(\mathcal{K}\) is a bounded linear operator. Now we have \[\left\langle\mathcal{T}_{\mathcal{G}}\left(x\right),y\right\rangle_{l_{2}\left(\mathfrak{H}\right)}=\sum_{n=1}^{+\infty}\left\langle\omega_{n}P_{\mathfrak{H}_{n}}\left(x\right),y_{n}\right\rangle=\sum_{n=1}^{+\infty}\left\langle\omega_{n}P_{\mathfrak{H}_{n}}\left(x\right),P_{\mathfrak{H}_{n}}\left(y_{n}\right)\right\rangle=\sum_{n=1}^{+\infty}\left\langle x,\omega_{n}P_{\mathfrak{H}_{n}}\left(y_{n}\right)\right\rangle=\left\langle x,\sum_{n=1}^{+\infty}\omega_{n}P_{\mathfrak{H}_{n}}\left(y_{n}\right)\right\rangle=\left\langle x,\mathcal{K}\left(y\right)\right\rangle_{\mathfrak{H}}\] It follows that \(\mathcal{T}_{\mathcal{G}}\) is an adjointable operator and that its adjoint is \(\mathcal{K}\), that is \(\mathcal{T}_{\mathcal{G}}^{*}=\mathcal{K}\). The proof of the theorem is achieved. \(\blacksquare\)
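To make Definition 3.1 and Theorem 3.3 concrete, here is a finite-dimensional sketch in the scalar case \(\mathfrak{A}=\mathbb{C}\) (restricted to real test data for simplicity): the submodules are random subspaces of \(\mathbb{R}^{d}\), the weights are positive reals, the infinite series of the text truncate to finite sums, and the adjoint identity \(\left\langle\mathcal{T}_{\mathcal{G}}x,y\right\rangle=\left\langle x,\mathcal{T}_{\mathcal{G}}^{*}y\right\rangle\) is verified numerically. The dimensions, weights and subspaces are illustrative choices only.

```python
import numpy as np

# Finite-dimensional sketch of Definition 3.1 and Theorem 3.3 with A = C,
# H = R^d: here |x|^2 = <x,x> is a number and the order <= is the usual one.
rng = np.random.default_rng(1)
d, N = 4, 6
projs, weights = [], rng.uniform(0.5, 2.0, size=N)
for _ in range(N):
    V = rng.standard_normal((d, 2))                    # a 2-dimensional subspace H_n
    projs.append(V @ np.linalg.inv(V.T @ V) @ V.T)     # orthogonal projection P_n

S = sum(w**2 * P for w, P in zip(weights, projs))      # frame operator of the family
A2, B2 = np.linalg.eigvalsh(S)[[0, -1]]                # optimal frame bounds A^2, B^2
assert A2 > 0                                          # the frame inequality (2) holds

# Analysis operator T: x -> (w_n P_n x) and its adjoint T*: (y_n) -> sum w_n P_n y_n.
x = rng.standard_normal(d)
ys = [rng.standard_normal(d) for _ in range(N)]
Tx = [w * P @ x for w, P in zip(weights, projs)]
Tstar_y = sum(w * P @ y for w, P, y in zip(weights, projs, ys))
lhs = sum(np.dot(t, y) for t, y in zip(Tx, ys))        # <T x, y> in l2(H)
rhs = np.dot(x, Tstar_y)                               # <x, T* y> in H
assert np.isclose(lhs, rhs)
print("frame bounds:", A2, B2, "| adjoint identity verified")
```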
**Theorem 3.4.** _Let \(\mathcal{G}:=\left(\left(\mathfrak{H}_{n},\omega_{n}\right)\right)_{n\in\mathbb{N}^{*}}\) be a *-fusion frame of \(\mathfrak{H}\). Then \(\mathcal{S}_{\mathcal{G}}\) is a well-defined, bounded, positive, self-adjoint and invertible linear operator. Furthermore the following reconstruction formula holds_ \[x=\sum_{n=1}^{+\infty}\omega_{n}^{2}P_{\mathfrak{H}_{n}}\left(\mathcal{S}_{\mathcal{G}}^{-1}\left(x\right)\right),\ x\in\mathfrak{H}\]

**Proof**

**1. \(\mathcal{S}_{\mathcal{G}}\) is a well-defined positive bounded linear self-adjoint operator.** By the proof of Theorem 3.3 we have \(\left(\omega_{n}P_{\mathfrak{H}_{n}}\left(x\right)\right)_{n\in\mathbb{N}^{*}}\in l_{2}\left(\mathfrak{H}\right)\) for every \(x\in\mathfrak{H}\). Hence the series \(\sum\omega_{n}^{2}P_{\mathfrak{H}_{n}}\left(x\right)\) is convergent for every \(x\in\mathfrak{H}\) and we have \[\mathcal{T}_{\mathcal{G}}^{*}\mathcal{T}_{\mathcal{G}}\left(x\right)=\mathcal{T}_{\mathcal{G}}^{*}\left(\left(\omega_{n}P_{\mathfrak{H}_{n}}\left(x\right)\right)_{n\in\mathbb{N}^{*}}\right)=\sum_{n=1}^{+\infty}\omega_{n}^{2}P_{\mathfrak{H}_{n}}\left(x\right)\] It follows that the mapping \(\mathcal{S}_{\mathcal{G}}\) is well-defined and \(\mathcal{S}_{\mathcal{G}}=\mathcal{T}_{\mathcal{G}}^{*}\mathcal{T}_{\mathcal{G}}\). Hence \(\mathcal{S}_{\mathcal{G}}\) is a positive bounded linear self-adjoint operator.

**2. The operator \(\mathcal{S}_{\mathcal{G}}\) is invertible.** Since \(\mathcal{S}_{\mathcal{G}}\) is a positive bounded linear self-adjoint operator, it has a square root \(\sqrt{\mathcal{S}_{\mathcal{G}}}\) which is also a positive bounded linear self-adjoint operator. Using \(P_{\mathfrak{H}_{n}}=P_{\mathfrak{H}_{n}}^{2}\) and the self-adjointness of \(P_{\mathfrak{H}_{n}}\), we have for every \(x\in\mathfrak{H}\) \[\left\|\sqrt{\mathcal{S}_{\mathcal{G}}}\left(x\right)\right\|_{\mathfrak{H}}^{2}=\left\|\left\langle\sqrt{\mathcal{S}_{\mathcal{G}}}\left(x\right),\sqrt{\mathcal{S}_{\mathcal{G}}}\left(x\right)\right\rangle\right\|_{\mathfrak{A}}=\left\|\left\langle\mathcal{S}_{\mathcal{G}}\left(x\right),x\right\rangle\right\|_{\mathfrak{A}}=\left\|\sum_{n=1}^{+\infty}\omega_{n}^{2}\left\langle P_{\mathfrak{H}_{n}}\left(x\right),P_{\mathfrak{H}_{n}}\left(x\right)\right\rangle\right\|_{\mathfrak{A}}=\left\|\sum_{n=1}^{+\infty}\omega_{n}^{2}\left|P_{\mathfrak{H}_{n}}\left(x\right)\right|^{2}\right\|_{\mathfrak{A}}\] It follows that \[\left\|\sqrt{\mathcal{S}_{\mathcal{G}}}\left(x\right)\right\|_{\mathfrak{H}}=\left\|\sum_{n=1}^{+\infty}\omega_{n}^{2}\left|P_{\mathfrak{H}_{n}}\left(x\right)\right|^{2}\right\|_{\mathfrak{A}}^{\frac{1}{2}}\geq\left\|\left|Ax\right|^{2}\right\|_{\mathfrak{A}}^{\frac{1}{2}}=\left\|Ax\right\|_{\mathfrak{H}} \tag{4}\] But we know that \(A\) is a strictly positive element of \(\mathfrak{A}\). It follows from Proposition 2.10
that \[\left\|Ax\right\|_{\mathfrak{H}}\geq\left\|A^{-1}\right\|_{\mathfrak{A}}^{-1}\left\|x\right\|_{\mathfrak{H}} \tag{5}\] When combining (4) with (5) we obtain \[\left\|\sqrt{\mathcal{S}_{\mathcal{G}}}\left(x\right)\right\|_{\mathfrak{H}}\geq\left\|A^{-1}\right\|_{\mathfrak{A}}^{-1}\left\|x\right\|_{\mathfrak{H}}\] Hence \(\sqrt{\mathcal{S}_{\mathcal{G}}}\) is self-adjoint and bounded below. It follows from Theorem 2.7 that the operator \(\sqrt{\mathcal{S}_{\mathcal{G}}}\) is invertible. Consequently the operator \(\mathcal{S}_{\mathcal{G}}=\left(\sqrt{\mathcal{S}_{\mathcal{G}}}\right)^{2}\) is also invertible.

**3. Proof of the reconstruction formula.** The operator \(\mathcal{S}_{\mathcal{G}}\) is invertible. Hence we have for every \(x\in\mathfrak{H}\) \[x=\mathcal{S}_{\mathcal{G}}\left(\mathcal{S}_{\mathcal{G}}^{-1}\left(x\right)\right)=\sum_{n=1}^{+\infty}\omega_{n}^{2}P_{\mathfrak{H}_{n}}\left(\mathcal{S}_{\mathcal{G}}^{-1}\left(x\right)\right)\] So the reconstruction formula holds true. The proof of the theorem is achieved. \(\blacksquare\)

The following result is a direct consequence of the reconstruction formula.

**Corollary 3.5.** _Let \(\mathfrak{A}\) be a unital C*-algebra and \(\mathfrak{H}\) a Hilbert \(\mathfrak{A}\)-module which admits a *-fusion frame of orthogonally complemented submodules. Then \(\mathfrak{H}\) is countably generated._
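Continuing the scalar sketch above, the reconstruction formula of Theorem 3.4 amounts to \(\mathcal{S}_{\mathcal{G}}\mathcal{S}_{\mathcal{G}}^{-1}x=x\) once the frame operator is invertible. The toy check below uses the same illustrative setup as before (random subspaces, ad hoc weights) and is a sketch only.

```python
import numpy as np

# Sketch of Theorem 3.4 with A = C, H = R^d: the frame operator
# S_G = sum_n w_n^2 P_n is invertible, and x = sum_n w_n^2 P_n (S_G^{-1} x).
rng = np.random.default_rng(2)
d, N = 4, 6
weights = rng.uniform(0.5, 2.0, size=N)
projs = []
for _ in range(N):
    V = rng.standard_normal((d, 2))
    projs.append(V @ np.linalg.inv(V.T @ V) @ V.T)
S = sum(w**2 * P for w, P in zip(weights, projs))

x = rng.standard_normal(d)
y = np.linalg.solve(S, x)                              # y = S_G^{-1} x
x_rec = sum(w**2 * (P @ y) for w, P in zip(weights, projs))
assert np.allclose(x, x_rec)
print("reconstruction error:", np.linalg.norm(x - x_rec))
```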
**Theorem 3.6.** _Let \(\mathfrak{A}\) be a unital C*-algebra, \(\mathfrak{H}\) a Hilbert \(\mathfrak{A}\)-module, \(\left(\mathfrak{H}_{n}\right)_{n\in\mathbb{N}^{*}}\) a sequence of orthogonally complemented submodules of \(\mathfrak{H}\) and \(\left(\omega_{n}\right)_{n\in\mathbb{N}^{*}}\) a weight of the C*-algebra \(\mathfrak{A}\). Then \(\left(\left(\mathfrak{H}_{n},\omega_{n}\right)\right)_{n\in\mathbb{N}^{*}}\) is a *-fusion frame of \(\mathfrak{H}\) if and only if the series \(\sum\omega_{n}^{2}\left|P_{\mathfrak{H}_{n}}\left(x\right)\right|^{2}\) is convergent for every \(x\in\mathfrak{H}\) and there exist real constants \(c,d>0\) such that_ \[c\left\|x\right\|_{\mathfrak{H}}^{2}\leq\left\|\sum_{n=1}^{+\infty}\omega_{n}^{2}\left|P_{\mathfrak{H}_{n}}\left(x\right)\right|^{2}\right\|_{\mathfrak{A}}\leq d\left\|x\right\|_{\mathfrak{H}}^{2},\ x\in\mathfrak{H} \tag{6}\]

**Proof**

1. Assume that \(\left(\left(\mathfrak{H}_{n},\omega_{n}\right)\right)_{n\in\mathbb{N}^{*}}\) is a *-fusion frame. Then the series \(\sum\omega_{n}^{2}\left|P_{\mathfrak{H}_{n}}\left(x\right)\right|^{2}\) is convergent for every \(x\in\mathfrak{H}\) and there exist strictly positive elements \(A\) and \(B\) of \(\mathfrak{A}\) such that \[\left|Ax\right|^{2}\preccurlyeq\sum_{n=1}^{+\infty}\omega_{n}^{2}\left|P_{\mathfrak{H}_{n}}\left(x\right)\right|^{2}\preccurlyeq\left|Bx\right|^{2},\ x\in\mathfrak{H}\] It follows that \[\left\|\left|Ax\right|^{2}\right\|_{\mathfrak{A}}\leq\left\|\sum_{n=1}^{+\infty}\omega_{n}^{2}\left|P_{\mathfrak{H}_{n}}\left(x\right)\right|^{2}\right\|_{\mathfrak{A}}\leq\left\|\left|Bx\right|^{2}\right\|_{\mathfrak{A}},\ x\in\mathfrak{H}\] and hence, from Proposition 2.10, \[\left\|A^{-1}\right\|_{\mathfrak{A}}^{-2}\left\|x\right\|_{\mathfrak{H}}^{2}\leq\left\|\sum_{n=1}^{+\infty}\omega_{n}^{2}\left|P_{\mathfrak{H}_{n}}\left(x\right)\right|^{2}\right\|_{\mathfrak{A}}\leq\left\|B\right\|_{\mathfrak{A}}^{2}\left\|x\right\|_{\mathfrak{H}}^{2},\ x\in\mathfrak{H}\]

2. Assume now that the series \(\sum\omega_{n}^{2}\left|P_{\mathfrak{H}_{n}}\left(x\right)\right|^{2}\) is convergent for every \(x\in\mathfrak{H}\) and that there exist real constants \(c,d>0\) such that (6) holds. Let \(n,m\in\mathbb{N}^{*}\), \(x\in\mathfrak{H}\). We set \[u_{n,m}:=\sum_{j=n}^{n+m}\omega_{j}^{2}P_{\mathfrak{H}_{j}}\left(x\right)\] Then, relying on Proposition 2.5 and on the upper bound in (6) applied to \(u_{n,m}\), we obtain \[\left\|u_{n,m}\right\|_{\mathfrak{H}}^{4}=\left\|\sum_{j=n}^{n+m}\left\langle u_{n,m},\omega_{j}^{2}P_{\mathfrak{H}_{j}}\left(x\right)\right\rangle\right\|_{\mathfrak{A}}^{2}=\left\|\sum_{j=n}^{n+m}\left\langle\omega_{j}P_{\mathfrak{H}_{j}}\left(u_{n,m}\right),\omega_{j}P_{\mathfrak{H}_{j}}\left(x\right)\right\rangle\right\|_{\mathfrak{A}}^{2}\leq\left\|\sum_{j=n}^{n+m}\left|\omega_{j}P_{\mathfrak{H}_{j}}\left(u_{n,m}\right)\right|^{2}\right\|_{\mathfrak{A}}\left\|\sum_{j=n}^{n+m}\left|\omega_{j}P_{\mathfrak{H}_{j}}\left(x\right)\right|^{2}\right\|_{\mathfrak{A}}\leq d\left\|u_{n,m}\right\|_{\mathfrak{H}}^{2}\left\|\sum_{j=n}^{n+m}\left|\omega_{j}P_{\mathfrak{H}_{j}}\left(x\right)\right|^{2}\right\|_{\mathfrak{A}}\] Consequently \[\left\|u_{n,m}\right\|_{\mathfrak{H}}\leq\sqrt{d}\left\|\sum_{j=n}^{n+m}\left|\omega_{j}P_{\mathfrak{H}_{j}}\left(x\right)\right|^{2}\right\|_{\mathfrak{A}}^{\frac{1}{2}} \tag{7}\] But since the series \(\sum\left|\omega_{j}P_{\mathfrak{H}_{j}}\left(x\right)\right|^{2}\) is convergent, we have \[\lim_{n\rightarrow+\infty}\sup_{m\in\mathbb{N}^{*}}\left\|\sum_{j=n}^{n+m}\left|\omega_{j}P_{\mathfrak{H}_{j}}\left(x\right)\right|^{2}\right\|_{\mathfrak{A}}=0\] Hence the series \(\sum\omega_{n}^{2}P_{\mathfrak{H}_{n}}\left(x\right)\) is convergent. Thus the mapping \[\mathcal{L}:\mathfrak{H}\rightarrow\mathfrak{H},\quad x\mapsto\sum_{n=1}^{+\infty}\omega_{n}^{2}P_{\mathfrak{H}_{n}}\left(x\right)\] is well-defined. It is clear that \(\mathcal{L}\) is also linear. Taking \(n=1\) and letting \(m\) tend to infinity, the relation (7) combined with (6) yields \[\left\|\mathcal{L}\left(x\right)\right\|_{\mathfrak{H}}\leq\sqrt{d}\left\|\sum_{j=1}^{+\infty}\left|\omega_{j}P_{\mathfrak{H}_{j}}\left(x\right)\right|^{2}\right\|_{\mathfrak{A}}^{\frac{1}{2}}\leq d\left\|x\right\|_{\mathfrak{H}},\ x\in\mathfrak{H}\] Thus the linear mapping \(\mathcal{L}\) is bounded.
On the other hand we have for every \(x\in\mathfrak{H}\) and \(n\in\mathbb{N}^{*}\) \[\sum_{j=1}^{n}\omega_{j}^{2}\left|P_{\mathfrak{H}_{j}}\left(x\right)\right|^{2}=\sum_{j=1}^{n}\omega_{j}^{2}\left\langle P_{\mathfrak{H}_{j}}\left(x\right),P_{\mathfrak{H}_{j}}\left(x\right)\right\rangle=\left\langle\sum_{j=1}^{n}\omega_{j}^{2}P_{\mathfrak{H}_{j}}\left(x\right),x\right\rangle \tag{8}\] Letting \(n\) tend to infinity, the relation (8) becomes \[\sum_{n=1}^{+\infty}\omega_{n}^{2}\left|P_{\mathfrak{H}_{n}}\left(x\right)\right|^{2}=\left\langle\mathcal{L}\left(x\right),x\right\rangle,\ x\in\mathfrak{H} \tag{9}\] It follows that \[c\left\|x\right\|_{\mathfrak{H}}^{2}\leq\left\|\left\langle\mathcal{L}\left(x\right),x\right\rangle\right\|_{\mathfrak{A}}\leq d\left\|x\right\|_{\mathfrak{H}}^{2},\ x\in\mathfrak{H} \tag{10}\] We can easily prove that the operator \(\mathcal{L}\) is self-adjoint and positive. It follows that \(\mathcal{L}\) has a square root \(\sqrt{\mathcal{L}}\) which is also self-adjoint. Thence the relation (10) becomes \[c\left\|x\right\|_{\mathfrak{H}}^{2}\leq\left\|\left\langle\sqrt{\mathcal{L}}\left(x\right),\sqrt{\mathcal{L}}\left(x\right)\right\rangle\right\|_{\mathfrak{A}}\leq d\left\|x\right\|_{\mathfrak{H}}^{2},\ x\in\mathfrak{H}\] That is \[\sqrt{c}\left\|x\right\|_{\mathfrak{H}}\leq\left\|\sqrt{\mathcal{L}}\left(x\right)\right\|_{\mathfrak{H}}\leq\sqrt{d}\left\|x\right\|_{\mathfrak{H}},\ x\in\mathfrak{H} \tag{11}\] Consequently, by virtue of Theorem 2.7 (the lower bound in (11) shows that the self-adjoint operator \(\sqrt{\mathcal{L}}\) is bounded below, hence invertible), there exist real constants \(c_{1},d_{1}>0\) such that \[c_{1}\left|x\right|^{2}\preccurlyeq\left\langle\sqrt{\mathcal{L}}\left(x\right),\sqrt{\mathcal{L}}\left(x\right)\right\rangle\preccurlyeq d_{1}\left|x\right|^{2},\ x\in\mathfrak{H}\] That is \[c_{1}\left|x\right|^{2}\preccurlyeq\left\langle\mathcal{L}\left(x\right),x\right\rangle\preccurlyeq d_{1}\left|x\right|^{2},\ x\in\mathfrak{H}\] It follows from (9) that \[c_{1}\left|x\right|^{2}\preccurlyeq\sum_{n=1}^{+\infty}\omega_{n}^{2}\left|P_{\mathfrak{H}_{n}}\left(x\right)\right|^{2}\preccurlyeq d_{1}\left|x\right|^{2},\ x\in\mathfrak{H}\] So \[\left|\left(\sqrt{c_{1}}1_{\mathfrak{A}}\right)x\right|^{2}\preccurlyeq\sum_{n=1}^{+\infty}\omega_{n}^{2}\left|P_{\mathfrak{H}_{n}}\left(x\right)\right|^{2}\preccurlyeq\left|\left(\sqrt{d_{1}}1_{\mathfrak{A}}\right)x\right|^{2},\ x\in\mathfrak{H} \tag{12}\] Since \(\sqrt{c_{1}}1_{\mathfrak{A}}\) and \(\sqrt{d_{1}}1_{\mathfrak{A}}\) are strictly positive elements of \(\mathfrak{A}\), it follows from (12) that \(\left(\left(\mathfrak{H}_{n},\omega_{n}\right)\right)_{n\in\mathbb{N}^{*}}\) is a *-fusion frame. \(\blacksquare\)

**Theorem 3.7.** _Let \(\mathfrak{A}\) be a unital C*-algebra, \(E\) and \(F\) Hilbert \(\mathfrak{A}\)-modules with \(E\) full, and \(\Psi:E\to F\) a bijective \(\mathfrak{A}\)-linear mapping which is orthogonality preserving. Let \(\mathcal{F}:=\left(\left(V_{n},\omega_{n}\right)\right)_{n\in\mathbb{N}^{*}}\) be a *-fusion frame of \(E\) with bounds \(A,B\in\mathfrak{A}\)._

1. \(\left(\Psi\left(V_{n}\right)\right)_{n\in\mathbb{N}^{*}}\) _is then a sequence of orthogonally complemented submodules of \(F\). Furthermore the following relation holds_ \[P_{\Psi\left(V_{n}\right)}=\Psi P_{V_{n}}\Psi^{-1} \tag{13}\]

2. \(\left(\left(\Psi\left(V_{n}\right),\omega_{n}\right)\right)_{n\in\mathbb{N}^{*}}\) _is a *-fusion frame of \(F\)._
**Proof**

1. For every \(n\in\mathbb{N}^{*}\), \(V_{n}\) is an orthogonally complemented submodule of \(E\). Hence we have \[E=V_{n}\oplus V_{n}^{\perp}\] Thanks to Theorem 2.9, it follows from the assumptions on \(\Psi\) that \(\Psi\) is an isomorphism of Hilbert \(\mathfrak{A}\)-modules from \(E\) onto \(F\). It follows that \[F=\Psi\left(V_{n}\right)\oplus\Psi\left(V_{n}^{\perp}\right),\ n\in\mathbb{N}^{*}\] Since \(\Psi\) is orthogonality preserving, it follows that \[\Psi\left(V_{n}^{\perp}\right)\subset\left(\Psi\left(V_{n}\right)\right)^{\perp},\ n\in\mathbb{N}^{*}\] Hence \[F=\Psi\left(V_{n}\right)+\left(\Psi\left(V_{n}\right)\right)^{\perp},\ n\in\mathbb{N}^{*} \tag{14}\] Let \(n\in\mathbb{N}^{*}\) and \(a\in\Psi\left(V_{n}\right)\cap\left(\Psi\left(V_{n}\right)\right)^{\perp}\). There exists then \(b\in V_{n}\) such that \(a=\Psi\left(b\right)\in\left(\Psi\left(V_{n}\right)\right)^{\perp}\). Hence \[\left\langle\Psi\left(b\right),\Psi\left(b\right)\right\rangle=0_{\mathfrak{A}}\] But \[\left\langle\Psi\left(b\right),\Psi\left(b\right)\right\rangle=\nu\left(\Psi\right)\left\langle b,b\right\rangle\] It follows that \(\nu\left(\Psi\right)\left\langle b,b\right\rangle=0_{\mathfrak{A}}\). But \(\nu\left(\Psi\right)\) is invertible, so \(\left\langle b,b\right\rangle=0_{\mathfrak{A}}\). Hence \(b=0_{E}\) and \(a=0_{F}\). Consequently we have \[\Psi\left(V_{n}\right)\cap\left(\Psi\left(V_{n}\right)\right)^{\perp}=\{0_{F}\},\ n\in\mathbb{N}^{*} \tag{15}\] It follows from (14) and (15) that \[F=\Psi\left(V_{n}\right)\oplus\left(\Psi\left(V_{n}\right)\right)^{\perp},\ n\in\mathbb{N}^{*} \tag{16}\] Hence \[\Psi\left(V_{n}\right)=\left(\Psi P_{V_{n}}\Psi^{-1}\right)\left(F\right),\qquad F=\left(\Psi P_{V_{n}}\Psi^{-1}\right)\left(F\right)\oplus\left(\left(\Psi P_{V_{n}}\Psi^{-1}\right)\left(F\right)\right)^{\perp},\ n\in\mathbb{N}^{*}\] It follows that \[P_{\Psi\left(V_{n}\right)}=\Psi P_{V_{n}}\Psi^{-1},\ n\in\mathbb{N}^{*}\]
2. For every \(y\in F\) and \(n\in\mathbb{N}^{*}\), we have \[\omega_{n}^{2}\left|P_{\Psi\left(V_{n}\right)}\left(y\right)\right|^{2}=\omega_{n}^{2}\left|\Psi\left(P_{V_{n}}\left(\Psi^{-1}\left(y\right)\right)\right)\right|^{2}=\omega_{n}^{2}\left\langle\Psi\left(P_{V_{n}}\left(\Psi^{-1}\left(y\right)\right)\right),\Psi\left(P_{V_{n}}\left(\Psi^{-1}\left(y\right)\right)\right)\right\rangle=\nu\left(\Psi\right)\omega_{n}^{2}\left|P_{V_{n}}\left(\Psi^{-1}\left(y\right)\right)\right|^{2}\] So we obtain \[\omega_{n}^{2}\left|P_{\Psi\left(V_{n}\right)}\left(y\right)\right|^{2}=\nu\left(\Psi\right)\omega_{n}^{2}\left|P_{V_{n}}\left(\Psi^{-1}\left(y\right)\right)\right|^{2},\ n\in\mathbb{N}^{*},\ y\in F \tag{17}\] Since \(\mathcal{F}\) is a *-fusion frame, the series \(\sum\omega_{n}^{2}\left|P_{V_{n}}\left(\Psi^{-1}\left(y\right)\right)\right|^{2}\) is convergent in \(\mathfrak{A}\) for every \(y\in F\). Hence the series \(\sum\omega_{n}^{2}\left|P_{\Psi\left(V_{n}\right)}\left(y\right)\right|^{2}\) is also convergent in \(\mathfrak{A}\) for every \(y\in F\). By virtue of the relation (17) we can then write for every \(y\in F\) \[\sum_{n=1}^{+\infty}\omega_{n}^{2}\left|P_{\Psi\left(V_{n}\right)}\left(y\right)\right|^{2}=\nu\left(\Psi\right)\sum_{n=1}^{+\infty}\omega_{n}^{2}\left|P_{V_{n}}\left(\Psi^{-1}\left(y\right)\right)\right|^{2}\] But we know that \[\left|A\Psi^{-1}\left(y\right)\right|^{2}\preccurlyeq\sum_{n=1}^{+\infty}\omega_{n}^{2}\left|P_{V_{n}}\left(\Psi^{-1}\left(y\right)\right)\right|^{2}\preccurlyeq\left|B\Psi^{-1}\left(y\right)\right|^{2}\] It follows then that \[\nu\left(\Psi\right)\left|A\Psi^{-1}\left(y\right)\right|^{2}\preccurlyeq\sum_{n=1}^{+\infty}\omega_{n}^{2}\left|P_{\Psi\left(V_{n}\right)}\left(y\right)\right|^{2}\preccurlyeq\nu\left(\Psi\right)\left|B\Psi^{-1}\left(y\right)\right|^{2}\] Since \(\nu\left(\Psi\right)^{\frac{1}{2}}\) is a central and strictly positive element of \(\mathfrak{A}\), it follows that \[\left|\nu\left(\Psi\right)^{\frac{1}{2}}A\Psi^{-1}\left(y\right)\right|^{2}\preccurlyeq\sum_{n=1}^{+\infty}\omega_{n}^{2}\left|P_{\Psi\left(V_{n}\right)}\left(y\right)\right|^{2}\preccurlyeq\left|\nu\left(\Psi\right)^{\frac{1}{2}}B\Psi^{-1}\left(y\right)\right|^{2}\] But \(A\) and \(B\) are strictly positive elements of \(\mathfrak{A}\) and \(\nu\left(\Psi\right)^{\frac{1}{2}}\) is a central and strictly positive element of \(\mathfrak{A}\). Hence \(\nu\left(\Psi\right)^{\frac{1}{2}}A\) and \(\nu\left(\Psi\right)^{\frac{1}{2}}B\) are also strictly positive elements of \(\mathfrak{A}\). Since \(\Psi^{-1}\) is an isomorphism of Banach spaces, Proposition 2.10.a and Theorem 3.6 then yield the frame condition for \(\left(\left(\Psi\left(V_{n}\right),\omega_{n}\right)\right)_{n\in\mathbb{N}^{*}}\). Consequently \(\left(\left(\Psi\left(V_{n}\right),\omega_{n}\right)\right)_{n\in\mathbb{N}^{*}}\) is a *-fusion frame of \(F\). \(\blacksquare\)
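In the scalar case \(\mathfrak{A}=\mathbb{C}\) (again with real test data), Theorem 2.9 forces a bijective orthogonality preserving linear map to be a positive multiple of a unitary, so Theorem 3.7 can be sanity-checked directly. The sketch below uses an arbitrary \(\lambda\) and a random orthogonal matrix, both illustrative choices: it verifies relation (1) with \(\nu(\Psi)=\lambda^{2}\) and checks that the conjugation formula (13) produces an orthogonal projection whose range contains \(\Psi(V)\).

```python
import numpy as np

# Sketch of Theorem 3.7 with A = C: Psi = lam * U, U orthogonal, nu(Psi) = lam^2.
rng = np.random.default_rng(3)
d, lam = 4, 1.7
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))   # orthogonal (unitary) U
Psi = lam * Q                                      # orthogonality preserving, bijective

V = rng.standard_normal((d, 2))
P = V @ np.linalg.inv(V.T @ V) @ V.T               # orthogonal projection onto V
P_img = Psi @ P @ np.linalg.inv(Psi)               # formula (13): Psi P Psi^{-1}
assert np.allclose(P_img, P_img.T)                 # self-adjoint
assert np.allclose(P_img @ P_img, P_img)           # idempotent
assert np.allclose(P_img @ (Psi @ V), Psi @ V)     # range contains Psi(V)

x, y = rng.standard_normal(d), rng.standard_normal(d)
assert np.isclose(np.dot(Psi @ x, Psi @ y), lam**2 * np.dot(x, y))  # relation (1)
print("nu(Psi) =", lam**2)
```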
**Theorem 3.8.** _For every \(\left(\alpha_{n}\right)_{n\in\mathbb{N}^{*}},\left(\beta_{n}\right)_{n\in\mathbb{N}^{*}}\in\mathfrak{m}\left(\left(\mathfrak{H}_{n}\right)_{n\in\mathbb{N}^{*}}\right)\), the sequence \(\left(\alpha_{n}+\beta_{n}\right)_{n\in\mathbb{N}^{*}}\) belongs to \(\mathfrak{m}\left(\left(\mathfrak{H}_{n}\right)_{n\in\mathbb{N}^{*}}\right)\). It follows that \(\mathfrak{m}\left(\left(\mathfrak{H}_{n}\right)_{n\in\mathbb{N}^{*}}\right)\) is a convex cone of the \(\mathbb{C}\)-vector space \(\mathfrak{A}^{\mathbb{N}^{*}}\)._

**Proof** For each \(n\in\mathbb{N}^{*}\) the element \(\alpha_{n}^{-1}\beta_{n}\) is positive. It follows that \(1_{\mathfrak{A}}+\alpha_{n}^{-1}\beta_{n}\) is also positive. Hence we have \(\sigma\left(\alpha_{n}^{-1}\beta_{n}\right)\subset\mathbb{R}^{+}\), so \(-1\notin\sigma\left(\alpha_{n}^{-1}\beta_{n}\right)\) and \(1_{\mathfrak{A}}+\alpha_{n}^{-1}\beta_{n}\) is invertible in \(\mathfrak{A}\). Hence the element \(\alpha_{n}+\beta_{n}=\alpha_{n}\left(1_{\mathfrak{A}}+\alpha_{n}^{-1}\beta_{n}\right)\) is also invertible in \(\mathfrak{A}\). So \(\alpha_{n}+\beta_{n}\) is a strictly positive element of \(\mathfrak{A}\). It is trivial that \(\alpha_{n}+\beta_{n}\) is a central element of \(\mathfrak{A}\). Consequently \(\left(\alpha_{n}+\beta_{n}\right)_{n\in\mathbb{N}^{*}}\in\mathcal{W}\left(\mathfrak{A}\right)\).

On the other hand there exist strictly positive elements \(A\), \(B\), \(C\), \(D\) of \(\mathfrak{A}\) such that the following relations hold for every \(x\in\mathfrak{H}\) \[\left\{\begin{array}{l}\left|Ax\right|^{2}\preccurlyeq\sum_{n=1}^{+\infty}\alpha_{n}^{2}\left|P_{\mathfrak{H}_{n}}\left(x\right)\right|^{2}\preccurlyeq\left|Bx\right|^{2}\\ \left|Cx\right|^{2}\preccurlyeq\sum_{n=1}^{+\infty}\beta_{n}^{2}\left|P_{\mathfrak{H}_{n}}\left(x\right)\right|^{2}\preccurlyeq\left|Dx\right|^{2}\end{array}\right.\] But since all the \(\alpha_{n}\), \(\beta_{n}\) are central strictly positive elements of \(\mathfrak{A}\), we have \[\alpha_{n}^{2}+\beta_{n}^{2}\preccurlyeq\left(\alpha_{n}+\beta_{n}\right)^{2}\preccurlyeq 2\left(\alpha_{n}^{2}+\beta_{n}^{2}\right),\ n\in\mathbb{N}^{*}\] and the two series \(\sum\alpha_{n}^{2}\left|P_{\mathfrak{H}_{n}}\left(x\right)\right|^{2}\) and \(\sum\beta_{n}^{2}\left|P_{\mathfrak{H}_{n}}\left(x\right)\right|^{2}\) are convergent in \(\mathfrak{A}\). Consequently, for every \(x\in\mathfrak{H}\), the series \(\sum\left(\alpha_{n}+\beta_{n}\right)^{2}\left|P_{\mathfrak{H}_{n}}\left(x\right)\right|^{2}\) is convergent in \(\mathfrak{A}\) and we have \[\left|Ax\right|^{2}\preccurlyeq\sum_{n=1}^{+\infty}\left(\alpha_{n}+\beta_{n}\right)^{2}\left|P_{\mathfrak{H}_{n}}\left(x\right)\right|^{2}\] in addition to \[\sum_{n=1}^{+\infty}\left(\alpha_{n}+\beta_{n}\right)^{2}\left|P_{\mathfrak{H}_{n}}\left(x\right)\right|^{2}\preccurlyeq\sum_{n=1}^{+\infty}2\left(\alpha_{n}^{2}+\beta_{n}^{2}\right)\left|P_{\mathfrak{H}_{n}}\left(x\right)\right|^{2}\preccurlyeq 2\left(\left|Bx\right|^{2}+\left|Dx\right|^{2}\right)=2B\left\langle x,x\right\rangle B+2D\left\langle x,x\right\rangle D\] But we know, thanks to Proposition 2.10, that \[\left\|A^{-1}\right\|_{\mathfrak{A}}^{-1}\left\|x\right\|_{\mathfrak{H}}\leq\left\|Ax\right\|_{\mathfrak{H}},\qquad\left\|2B\left\langle x,x\right\rangle B+2D\left\langle x,x\right\rangle D\right\|_{\mathfrak{A}}\leq 2\left(\left\|B\right\|_{\mathfrak{A}}^{2}+\left\|D\right\|_{\mathfrak{A}}^{2}\right)\left\|x\right\|_{\mathfrak{H}}^{2}\] It follows that \[\left\{\begin{array}{l}\left\|A^{-1}\right\|_{\mathfrak{A}}^{-2}\left\|x\right\|_{\mathfrak{H}}^{2}\leq\left\|\sum_{n=1}^{+\infty}\left(\alpha_{n}+\beta_{n}\right)^{2}\left|P_{\mathfrak{H}_{n}}\left(x\right)\right|^{2}\right\|_{\mathfrak{A}}\\ \left\|\sum_{n=1}^{+\infty}\left(\alpha_{n}+\beta_{n}\right)^{2}\left|P_{\mathfrak{H}_{n}}\left(x\right)\right|^{2}\right\|_{\mathfrak{A}}\leq 2\left(\left\|B\right\|_{\mathfrak{A}}^{2}+\left\|D\right\|_{\mathfrak{A}}^{2}\right)\left\|x\right\|_{\mathfrak{H}}^{2}\end{array}\right.\] Hence, by virtue of Theorem 3.6, \(\left(\left(\mathfrak{H}_{n},\alpha_{n}+\beta_{n}\right)\right)_{n\in\mathbb{N}^{*}}\) is a *-fusion frame of \(\mathfrak{H}\).
Finally we conclude that \(\left(\alpha_{n}+\beta_{n}\right)_{n\in\mathbb{N}^{*}}\in\mathfrak{m}\left(\left(\mathfrak{H}_{n}\right)_{n\in\mathbb{N}^{*}}\right)\). It is also clear that \(\left(\lambda\alpha_{n}\right)_{n\in\mathbb{N}^{*}}\in\mathfrak{m}\left(\left(\mathfrak{H}_{n}\right)_{n\in\mathbb{N}^{*}}\right)\) for every \(\lambda\in\mathbb{R}^{+*}\). Consequently \(\mathfrak{m}\left(\left(\mathfrak{H}_{n}\right)_{n\in\mathbb{N}^{*}}\right)\) is a convex cone of the \(\mathbb{C}\)-vector space \(\mathfrak{A}^{\mathbb{N}^{*}}\). \(\blacksquare\)

## 4 Examples of *-fusion frames

**Example 1**

We denote by \(\mathfrak{L}\) the \(\mathbb{C}\)-vector space of all the sequences \(z:=\left(z_{n}\right)_{n\in\mathbb{N}^{*}}\), \(z_{n}\in\mathbb{C}\), with \[\sum_{n=1}^{+\infty}\frac{1}{2^{n}}\left|z_{n}\right|^{2}<+\infty\] It is clear that \(\mathfrak{L}\) is a commutative C*-algebra when it is endowed with the mappings \[\cdot:\mathfrak{L}^{2}\rightarrow\mathfrak{L},\quad\left(\left(a_{n,1}\right)_{n\in\mathbb{N}^{*}},\left(a_{n,2}\right)_{n\in\mathbb{N}^{*}}\right)\mapsto\left(a_{n,1}\right)_{n\in\mathbb{N}^{*}}\cdot\left(a_{n,2}\right)_{n\in\mathbb{N}^{*}}:=\left(a_{n,1}a_{n,2}\right)_{n\in\mathbb{N}^{*}}\] \[*:\mathfrak{L}\rightarrow\mathfrak{L},\quad\left(a_{n}\right)_{n\in\mathbb{N}^{*}}\mapsto\left(\left(a_{n}\right)_{n\in\mathbb{N}^{*}}\right)^{*}:=\left(\overline{a_{n}}\right)_{n\in\mathbb{N}^{*}}\] Let us also observe that the C*-algebra \(\mathfrak{L}\) is unital, the unit being the sequence \(1_{\mathfrak{L}}:=\left(1\right)_{n\in\mathbb{N}^{*}}\). Moreover \(\mathfrak{L}\) is a (left) Hilbert \(\mathfrak{L}\)-module when it is equipped with the \(\mathfrak{L}\)-inner product \[\left\langle.,.\right\rangle:\mathfrak{L}\times\mathfrak{L}\rightarrow\mathfrak{L},\quad\left(\left(z_{n,1}\right)_{n\in\mathbb{N}^{*}},\left(z_{n,2}\right)_{n\in\mathbb{N}^{*}}\right)\mapsto\left(z_{n,1}\overline{z_{n,2}}\right)_{n\in\mathbb{N}^{*}}\] Let \(\left(I_{n}\right)_{n\in\mathbb{N}^{*}}\) be a sequence of nonempty finite subsets of \(\mathbb{N}^{*}\) such that \(\underset{n\in\mathbb{N}^{*}}{\cup}I_{n}=\mathbb{N}^{*}\). Let us consider, for every \(k\in\mathbb{N}^{*}\), the bounded linear operator \[P_{k}:\mathfrak{L}\rightarrow\mathfrak{L},\quad\left(z_{n}\right)_{n\in\mathbb{N}^{*}}\mapsto\left(d_{k,n}z_{n}\right)_{n\in\mathbb{N}^{*}}\] where \[d_{k,n}:=\left\{\begin{array}{l}1\text{ if }n\in I_{k}\\ 0\text{ if }n\in\mathbb{N}^{*}\backslash I_{k}\end{array}\right.\]

**Proposition 4.1.** _For every \(k\in\mathbb{N}^{*}\), the set \(U_{k}:=P_{k}\left(\mathfrak{L}\right)\) is an orthogonally complemented submodule of \(\mathfrak{L}\). Hence \(P_{k}\) is, for every \(k\in\mathbb{N}^{*}\), the orthogonal projection of \(\mathfrak{L}\) onto \(U_{k}\)._

**Proof** Let \(k\in\mathbb{N}^{*}\) and \(x:=\left(x_{n}\right)_{n\in\mathbb{N}^{*}}\in U_{k}^{\perp}\). Since \(P_{k}\left(x\right)\in U_{k}\), we have \[\left\langle x,P_{k}\left(x\right)\right\rangle=\left(d_{k,n}\left|x_{n}\right|^{2}\right)_{n\in\mathbb{N}^{*}}=0_{\mathfrak{L}}\] so that \(x_{n}=0\) for every \(n\in I_{k}\), and hence \(P_{k}\left(x\right)=0_{\mathfrak{L}}\). It follows that \(U_{k}^{\perp}\subset Ker\left(P_{k}\right)\). It is also clear that \(Ker\left(P_{k}\right)\subset U_{k}^{\perp}\). Hence we have, for every \(k\in\mathbb{N}^{*}\), \(U_{k}^{\perp}=Ker\left(P_{k}\right)\). We prove easily that \(P_{k}\left(\mathfrak{L}\right)\oplus Ker\left(P_{k}\right)=\mathfrak{L}\). Consequently \[U_{k}\oplus U_{k}^{\perp}=\mathfrak{L}\] The final conclusion is that the set \(U_{k}:=P_{k}\left(\mathfrak{L}\right)\) is, for every \(k\in\mathbb{N}^{*}\), an orthogonally complemented submodule of \(\mathfrak{L}\). \(\blacksquare\)

**Proposition 4.2.** \(\mathcal{W}\left(\mathfrak{L}\right)\) _is the set of all the sequences \(\left(\left(a_{n,k}\right)_{k\in\mathbb{N}^{*}}\right)_{n\in\mathbb{N}^{*}}\) of \(\left(\mathfrak{L}\right)^{\mathbb{N}^{*}}\) such that_ \[\left\{\begin{array}{l}a_{n,k}\in\mathbb{R}^{+*},\ n,k\in\mathbb{N}^{*}\\ \sum_{k=1}^{+\infty}\frac{1}{2^{k}a_{n,k}^{2}}<+\infty,\ n\in\mathbb{N}^{*}\end{array}\right.\]

**Proof** A sequence \(\left(\alpha_{n}\right)_{n\in\mathbb{N}^{*}}\in\mathfrak{L}\) is a positive element of \(\mathfrak{L}\) if and only if there exists a sequence \(\left(b_{n}\right)_{n\in\mathbb{N}^{*}}\in\mathfrak{L}\) such that \[\left(\alpha_{n}\right)_{n\in\mathbb{N}^{*}}=\left(b_{n}\right)_{n\in\mathbb{N}^{*}}\left(\left(b_{n}\right)_{n\in\mathbb{N}^{*}}\right)^{*}\] That is \[\left\{\begin{array}{l}\alpha_{n}=\left|b_{n}\right|^{2},\ n\in\mathbb{N}^{*}\\ \sum_{n=1}^{+\infty}\frac{\left|b_{n}\right|^{2}}{2^{n}}<+\infty\end{array}\right.\] These conditions rewrite \[\left\{\begin{array}{l}\alpha_{n}\in\mathbb{R}^{+},\ n\in\mathbb{N}^{*}\\ \sum_{n=1}^{+\infty}\frac{\alpha_{n}}{2^{n}}<+\infty\end{array}\right.\] But since \(\left(\alpha_{n}\right)_{n\in\mathbb{N}^{*}}\in\mathfrak{L}\) it follows that \(\alpha_{n}=\underset{n\rightarrow+\infty}{o}\left(2^{\frac{n}{2}}\right)\), hence \(\sum_{n=1}^{+\infty}\frac{\alpha_{n}}{2^{n}}<+\infty\). Consequently \(\left(\alpha_{n}\right)_{n\in\mathbb{N}^{*}}\in\mathfrak{L}\) is a positive element of \(\mathfrak{L}\) if and only if \[\alpha_{n}\in\mathbb{R}^{+},\ n\in\mathbb{N}^{*}\] Now let \(\left(\alpha_{n}\right)_{n\in\mathbb{N}^{*}}\in\mathfrak{L}\) be a positive element of \(\mathfrak{L}\). \(\left(\alpha_{n}\right)_{n\in\mathbb{N}^{*}}\) is invertible in \(\mathfrak{L}\) if and only if there exists a sequence \(\left(c_{n}\right)_{n\in\mathbb{N}^{*}}\in\mathfrak{L}\) such that \[\left(\alpha_{n}\right)_{n\in\mathbb{N}^{*}}\left(c_{n}\right)_{n\in\mathbb{N}^{*}}=\left(1\right)_{n\in\mathbb{N}^{*}}\] That is \[\left\{\begin{array}{l}\alpha_{n}c_{n}=1,\ n\in\mathbb{N}^{*}\\ \sum_{n=1}^{+\infty}\frac{1}{2^{n}}\left|c_{n}\right|^{2}<+\infty\end{array}\right.\] These conditions rewrite
\[\left\{\begin{array}{l}\alpha_{n}\in\mathbb{R}^{+*},\ n\in\mathbb{N}^{*}\\ \sum_{n=1}^{+\infty}\frac{1}{2^{n}\alpha_{n}^{2}}<+\infty\end{array}\right.\] The conclusion is that \(\left(\alpha_{n}\right)_{n\in\mathbb{N}^{*}}\) is a strictly positive element of \(\mathfrak{L}\) if and only if \[\left\{\begin{array}{l}\alpha_{n}\in\mathbb{R}^{+*},\ n\in\mathbb{N}^{*}\\ \sum_{n=1}^{+\infty}\frac{1}{2^{n}\alpha_{n}^{2}}<+\infty\end{array}\right.\] Applying this characterization to each term \(a_{n}:=\left(a_{n,k}\right)_{k\in\mathbb{N}^{*}}\), the final conclusion is that a sequence \(\left(\left(a_{n,k}\right)_{k\in\mathbb{N}^{*}}\right)_{n\in\mathbb{N}^{*}}\) of \(\left(\mathfrak{L}\right)^{\mathbb{N}^{*}}\) belongs to \(\mathcal{W}\left(\mathfrak{L}\right)\) if and only if \[\left\{\begin{array}{l}a_{n,k}\in\mathbb{R}^{+*},\ n,k\in\mathbb{N}^{*}\\ \sum_{k=1}^{+\infty}\frac{1}{2^{k}a_{n,k}^{2}}<+\infty,\ n\in\mathbb{N}^{*}\end{array}\right.\] The proof of the proposition is then achieved. \(\blacksquare\)

We denote by \(\preccurlyeq\), as usual, the partial order on the C*-algebra \(\mathfrak{L}\) defined by the positive elements of \(\mathfrak{L}\). In view of the proof of the last proposition, it is then clear that for every \(a:=\left(a_{n}\right)_{n\in\mathbb{N}^{*}},b:=\left(b_{n}\right)_{n\in\mathbb{N}^{*}}\in\mathfrak{L}\), we have \(a\preccurlyeq b\) if and only if \(b_{n}-a_{n}\in\mathbb{R}^{+}\) for every \(n\in\mathbb{N}^{*}\).

**Proposition 4.3.** _A weight \(\left(a_{n}\right)_{n\in\mathbb{N}^{*}}\in\mathcal{W}\left(\mathfrak{L}\right)\) belongs to \(\mathfrak{m}\left(\left(U_{n}\right)_{n\in\mathbb{N}^{*}}\right)\) if and only if_ \[\left\{\begin{array}{l}0<\sum_{n=1}^{+\infty}a_{n,k}^{2}d_{n,k}<+\infty,\ k\in\mathbb{N}^{*}\\ \sup_{k\in\mathbb{N}^{*}}\left(2^{k}\sum_{n=1}^{+\infty}a_{n,k}^{2}d_{n,k}\right)<+\infty\end{array}\right.\] _In this case \(\left(\left(U_{n},a_{n}\right)\right)_{n\in\mathbb{N}^{*}}\) is a \(\left(\sqrt{\sum_{n=1}^{+\infty}a_{n,k}^{2}d_{n,k}}\right)_{k\in\mathbb{N}^{*}}\)-tight *-fusion frame of \(\mathfrak{L}\)._

**Proof** Let \(\left(a_{n}\right)_{n\in\mathbb{N}^{*}}:=\left(\left(a_{n,k}\right)_{k\in\mathbb{N}^{*}}\right)_{n\in\mathbb{N}^{*}}\in\mathcal{W}\left(\mathfrak{L}\right)\). \(\left(a_{n}\right)_{n\in\mathbb{N}^{*}}\in\mathfrak{m}\left(\left(U_{n}\right)_{n\in\mathbb{N}^{*}}\right)\) if and only if there exist two strictly positive elements \(\alpha:=\left(\alpha_{k}\right)_{k\in\mathbb{N}^{*}}\) and \(\beta:=\left(\beta_{k}\right)_{k\in\mathbb{N}^{*}}\) of \(\mathfrak{L}\) such that

i. for every \(x:=\left(x_{k}\right)_{k\in\mathbb{N}^{*}}\in\mathfrak{L}\), the series \(\sum a_{n}^{2}\left\langle P_{U_{n}}\left(x\right),P_{U_{n}}\left(x\right)\right\rangle\) is convergent in \(\mathfrak{L}\);

ii. the following relation holds \[\alpha^{2}\left\langle x,x\right\rangle\preccurlyeq\sum_{n=1}^{+\infty}a_{n}^{2}\left\langle P_{U_{n}}\left(x\right),P_{U_{n}}\left(x\right)\right\rangle\preccurlyeq\beta^{2}\left\langle x,x\right\rangle\] But \[a_{n}^{2}\left\langle P_{U_{n}}\left(x\right),P_{U_{n}}\left(x\right)\right\rangle=\left(a_{n,k}^{2}d_{n,k}\left|x_{k}\right|^{2}\right)_{k\in\mathbb{N}^{*}},\ n\in\mathbb{N}^{*}\] Hence \(\left(a_{n}\right)_{n\in\mathbb{N}^{*}}\in\mathfrak{m}\left(\left(U_{n}\right)_{n\in\mathbb{N}^{*}}\right)\) if and only if
i'. the series \(\sum\left(a_{n,k}^{2}d_{n,k}\left|x_{k}\right|^{2}\right)_{k\in\mathbb{N}^{*}}\) is convergent in \(\mathfrak{L}\) for every \(x:=\left(x_{k}\right)_{k\in\mathbb{N}^{*}}\in\mathfrak{L}\);

ii'. \(\left(\alpha_{k}^{2}\left|x_{k}\right|^{2}\right)_{k\in\mathbb{N}^{*}}\preccurlyeq\sum_{n=1}^{+\infty}\left(a_{n,k}^{2}d_{n,k}\left|x_{k}\right|^{2}\right)_{k\in\mathbb{N}^{*}}\preccurlyeq\left(\beta_{k}^{2}\left|x_{k}\right|^{2}\right)_{k\in\mathbb{N}^{*}}\).

It follows then clearly that \(\left(a_{n}\right)_{n\in\mathbb{N}^{*}}\in\mathfrak{m}\left(\left(U_{n}\right)_{n\in\mathbb{N}^{*}}\right)\) if and only if \[\left\{\begin{array}{l}\sum_{n=1}^{+\infty}a_{n,k}^{2}d_{n,k}<+\infty,\ k\in\mathbb{N}^{*}\\ \sum_{k=1}^{+\infty}\frac{1}{2^{k}}\left(\left(\sum_{n=1}^{+\infty}a_{n,k}^{2}d_{n,k}\right)\left|x_{k}\right|^{2}\right)^{2}<+\infty\\ \alpha_{k}^{2}\leq\sum_{n=1}^{+\infty}a_{n,k}^{2}d_{n,k}\leq\beta_{k}^{2},\ k\in\mathbb{N}^{*}\end{array}\right.\] holds for every \(x:=\left(x_{k}\right)_{k\in\mathbb{N}^{*}}\in\mathfrak{L}\). But since the mapping \[\mathfrak{L}\rightarrow l_{1}\left(\mathbb{C}\right),\quad\left(x_{n}\right)_{n\in\mathbb{N}^{*}}\mapsto\left(\frac{1}{2^{n}}x_{n}^{2}\right)_{n\in\mathbb{N}^{*}}\] is onto, we obtain the following equivalence: \(\left(a_{n}\right)_{n\in\mathbb{N}^{*}}\in\mathfrak{m}\left(\left(U_{n}\right)_{n\in\mathbb{N}^{*}}\right)\) if and only if \[\left\{\begin{array}{l}\sum_{n=1}^{+\infty}a_{n,k}^{2}d_{n,k}<+\infty,\ k\in\mathbb{N}^{*}\\ \sum_{k=1}^{+\infty}2^{k}\left(\sum_{n=1}^{+\infty}a_{n,k}^{2}d_{n,k}\right)^{2}\left|y_{k}\right|^{2}<+\infty\\ \alpha_{k}^{2}\leq\sum_{n=1}^{+\infty}a_{n,k}^{2}d_{n,k}\leq\beta_{k}^{2},\ k\in\mathbb{N}^{*}\end{array}\right.\] holds for every \(y:=\left(y_{n}\right)_{n\in\mathbb{N}^{*}}\in l_{1}\left(\mathbb{C}\right)\). Finally it follows easily that \(\left(a_{n}\right)_{n\in\mathbb{N}^{*}}\in\mathfrak{m}\left(\left(U_{n}\right)_{n\in\mathbb{N}^{*}}\right)\) if and only if \[\left\{\begin{array}{l}\sum_{n=1}^{+\infty}a_{n,k}^{2}d_{n,k}<+\infty,\ k\in\mathbb{N}^{*}\\ \sup_{k\in\mathbb{N}^{*}}\left(2^{k}\sum_{n=1}^{+\infty}a_{n,k}^{2}d_{n,k}\right)<+\infty\\ 0<\sum_{n=1}^{+\infty}a_{n,k}^{2}d_{n,k},\ k\in\mathbb{N}^{*}\\ \sum_{k=1}^{+\infty}\sum_{n=1}^{+\infty}\frac{a_{n,k}^{2}d_{n,k}}{2^{k}}<+\infty\end{array}\right.\] That is \[\left\{\begin{array}{l}0<\sum_{n=1}^{+\infty}a_{n,k}^{2}d_{n,k}<+\infty,\ k\in\mathbb{N}^{*}\\ \sup_{k\in\mathbb{N}^{*}}\left(2^{k}\sum_{n=1}^{+\infty}a_{n,k}^{2}d_{n,k}\right)<+\infty\end{array}\right.\] In this case we can take for \(\alpha:=\left(\alpha_{k}\right)_{k\in\mathbb{N}^{*}}\) and \(\beta:=\left(\beta_{k}\right)_{k\in\mathbb{N}^{*}}\) the values \[\alpha=\beta=\left(\sqrt{\sum_{n=1}^{+\infty}a_{n,k}^{2}d_{n,k}}\right)_{k\in\mathbb{N}^{*}}\] and \(\left(\left(U_{n},a_{n}\right)\right)_{n\in\mathbb{N}^{*}}\) becomes a \(\left(\sqrt{\sum_{n=1}^{+\infty}a_{n,k}^{2}d_{n,k}}\right)_{k\in\mathbb{N}^{*}}\)-tight *-fusion frame of \(\mathfrak{L}\). The proof of the proposition is then complete. \(\blacksquare\)
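The criterion of Proposition 4.3 can be tested numerically on truncations. In the sketch below the index sets and the weight are hypothetical choices made for illustration only: \(I_{n}=\{n\}\) (so \(d_{n,k}=1\) iff \(k=n\)) and \(a_{n,k}=k\,2^{-k/2}\) for \(k\neq n\), \(a_{n,n}=2^{-n/2}\). The code checks, up to a cutoff, both the strict positivity condition of Proposition 4.2 and the multiplier conditions of Proposition 4.3.

```python
import numpy as np

# Truncated check of Propositions 4.2 and 4.3 for the hypothetical weight
# a_{n,k} = k * 2^{-k/2} (k != n), a_{n,n} = 2^{-n/2}, with I_n = {n}.
K = 40
k = np.arange(1, K + 1)
a = np.outer(np.ones(K), k * 2.0 ** (-k / 2))   # a[n-1, k-1] = a_{n,k} for k != n
np.fill_diagonal(a, 2.0 ** (-k / 2))            # a_{n,n} = 2^{-n/2}

# Each row a_n must be strictly positive in L: sum_k 1/(2^k a_{n,k}^2) < infinity.
row_sums = (1.0 / (2.0 ** k * a ** 2)).sum(axis=1)
print("strict positivity sums (bounded in n):", row_sums.max())

# Multiplier conditions of Proposition 4.3: with d_{n,k} = delta_{n,k} we get
# sum_n a_{n,k}^2 d_{n,k} = a_{k,k}^2 = 2^{-k}, so 2^k * (sum) = 1 for all k.
frame_coeff = np.diag(a) ** 2                   # sum_n a_{n,k}^2 d_{n,k}
print("sup_k 2^k * sum:", (2.0 ** k * frame_coeff).max())
```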
**Example 2**

Let \(\mathbb{H}\) be the well-known skew field of quaternions. We denote by \(l_{\infty}\left(\mathbb{H}\right)\) the \(\mathbb{C}\)-vector space of all the sequences \(z:=\left(z_{n}\right)_{n\in\mathbb{N}^{*}}\), \(z_{n}\in\mathbb{H}\), with \[\sup_{n\in\mathbb{N}^{*}}\left(\left|z_{n}\right|\right)<+\infty\] and by \(\left(\mathbb{H}\right)_{l_{2}}\) the \(\mathbb{C}\)-vector space of all the sequences \(z:=\left(z_{n}\right)_{n\in\mathbb{N}^{*}}\), \(z_{n}\in\mathbb{H}\), with \[\sum_{n=1}^{+\infty}\left|z_{n}\right|^{2}<+\infty\] It is clear that \(l_{\infty}\left(\mathbb{H}\right)\) is a noncommutative C*-algebra when it is endowed with the mappings \[\cdot:l_{\infty}\left(\mathbb{H}\right)^{2}\rightarrow l_{\infty}\left(\mathbb{H}\right),\quad\left(\left(a_{n,1}\right)_{n\in\mathbb{N}^{*}},\left(a_{n,2}\right)_{n\in\mathbb{N}^{*}}\right)\mapsto\left(a_{n,1}\right)_{n\in\mathbb{N}^{*}}\cdot\left(a_{n,2}\right)_{n\in\mathbb{N}^{*}}:=\left(a_{n,1}a_{n,2}\right)_{n\in\mathbb{N}^{*}}\] \[*:l_{\infty}\left(\mathbb{H}\right)\rightarrow l_{\infty}\left(\mathbb{H}\right),\quad\left(a_{n}\right)_{n\in\mathbb{N}^{*}}\mapsto\left(\left(a_{n}\right)_{n\in\mathbb{N}^{*}}\right)^{*}:=\left(\overline{a_{n}}\right)_{n\in\mathbb{N}^{*}}\] Let us also observe that the C*-algebra \(l_{\infty}\left(\mathbb{H}\right)\) is unital, the unit being the sequence \(1_{l_{\infty}\left(\mathbb{H}\right)}:=\left(1\right)_{n\in\mathbb{N}^{*}}\). On the other hand, \(\left(\mathbb{H}\right)_{l_{2}}\) is a (left) \(l_{\infty}\left(\mathbb{H}\right)\)-module for the well-defined operation \[\circ:l_{\infty}\left(\mathbb{H}\right)\times\left(\mathbb{H}\right)_{l_{2}}\rightarrow\left(\mathbb{H}\right)_{l_{2}},\quad\left(\left(z_{n}\right)_{n\in\mathbb{N}^{*}},\left(u_{n}\right)_{n\in\mathbb{N}^{*}}\right)\mapsto\left(z_{n}\right)_{n\in\mathbb{N}^{*}}\circ\left(u_{n}\right)_{n\in\mathbb{N}^{*}}:=\left(z_{n}u_{n}\right)_{n\in\mathbb{N}^{*}}\] One proves easily that \(\left(\mathbb{H}\right)_{l_{2}}\) is a (left) Hilbert \(l_{\infty}\left(\mathbb{H}\right)\)-module when it is equipped with the well-defined \(l_{\infty}\left(\mathbb{H}\right)\)-inner product \[\left\langle.,.\right\rangle:\left(\mathbb{H}\right)_{l_{2}}\times\left(\mathbb{H}\right)_{l_{2}}\rightarrow l_{\infty}\left(\mathbb{H}\right),\quad\left(\left(z_{n,1}\right)_{n\in\mathbb{N}^{*}},\left(z_{n,2}\right)_{n\in\mathbb{N}^{*}}\right)\mapsto\left\langle\left(z_{n,1}\right)_{n\in\mathbb{N}^{*}},\left(z_{n,2}\right)_{n\in\mathbb{N}^{*}}\right\rangle:=\left(z_{n,1}\overline{z_{n,2}}\right)_{n\in\mathbb{N}^{*}}\] (The fact that \(\left(\mathbb{H}\right)_{l_{2}}\) is complete is based on a result stated in [22], page 359.)

Let \(\left(I_{n}\right)_{n\in\mathbb{N}^{*}}\) be a sequence of nonempty finite subsets of \(\mathbb{N}^{*}\) such that \(\underset{n\in\mathbb{N}^{*}}{\cup}I_{n}=\mathbb{N}^{*}\). Let us consider, for every \(k\in\mathbb{N}^{*}\), the bounded linear operator \[P_{k}:\left(\mathbb{H}\right)_{l_{2}}\rightarrow\left(\mathbb{H}\right)_{l_{2}},\quad\left(z_{n}\right)_{n\in\mathbb{N}^{*}}\mapsto\left(d_{k,n}z_{n}\right)_{n\in\mathbb{N}^{*}}\] where \[d_{k,n}:=\left\{\begin{array}{l}1\text{ if }n\in I_{k}\\ 0\text{ if }n\in\mathbb{N}^{*}\backslash I_{k}\end{array}\right.\] We obtain the following result in a similar way as for the previous example.
**Proposition 4.4.** _For every \(k\in\mathbb{N}^{\ast},\) the set \(U_{k}:=P_{k}\left(\left(\mathbb{H}\right)_{l_{2}}\right)\) is an orthogonally complemented submodule of \(\left(\mathbb{H}\right)_{l_{2}}.\) Hence \(P_{k}\) is, for every \(k\in\mathbb{N}^{\ast},\) the orthogonal projection of \(\left(\mathbb{H}\right)_{l_{2}}\) onto \(U_{k}.\)_ **Proposition 4.5.** \(\mathcal{W}\left(l_{\infty}\left(\mathbb{H}\right)\right)\) _is the set of all the sequences \(\left(\left(a_{n,k}\right)_{k\in\mathbb{N}^{\ast}}\right)_{n\in\mathbb{N}^{\ast}}\) of \(\left(l_{\infty}\left(\mathbb{H}\right)\right)^{\mathbb{N}^{\ast}}\) such that_ \[\left\{\begin{array}[c]{c}a_{n,k}\in\mathbb{R}^{+\ast},\ n,\ k\in\mathbb{N}^{\ast}\\ 0<\underset{k\in\mathbb{N}^{\ast}}{\inf}\left(a_{n,k}\right),\ n\in\mathbb{N}^{\ast}\end{array}\right.\] **Proof** A sequence \(\left(\alpha_{n}\right)_{n\in\mathbb{N}^{\ast}}\in l_{\infty}\left(\mathbb{H}\right)\) is a positive element of \(l_{\infty}\left(\mathbb{H}\right)\) if and only if there exists a sequence \(\left(b_{n}\right)_{n\in\mathbb{N}^{\ast}}\in l_{\infty}\left(\mathbb{H}\right)\) such that \[\left(\alpha_{n}\right)_{n\in\mathbb{N}^{\ast}}=\left(b_{n}\right)_{n\in\mathbb{N}^{\ast}}\left(\left(b_{n}\right)_{n\in\mathbb{N}^{\ast}}\right)^{\ast}\] That is \[\left\{\begin{array}[c]{c}\alpha_{n}=\left|b_{n}\right|^{2},\ n\in\mathbb{N}^{\ast}\\ \underset{n\in\mathbb{N}^{\ast}}{\sup}\left(\left|b_{n}\right|\right)<+\infty\end{array}\right.\] These conditions rewrite \[\alpha_{n}\in\mathbb{R}^{+},\ n\in\mathbb{N}^{\ast}\] Now let \(\left(\alpha_{n}\right)_{n\in\mathbb{N}^{\ast}}\in l_{\infty}\left(\mathbb{H}\right)\) be a positive element of \(l_{\infty}\left(\mathbb{H}\right).\) \(\left(\alpha_{n}\right)_{n\in\mathbb{N}^{\ast}}\) is invertible in \(l_{\infty}\left(\mathbb{H}\right)\) if and only if there exists a sequence \(\left(c_{n}\right)_{n\in\mathbb{N}^{\ast}}\in l_{\infty}\left(\mathbb{H}\right)\) such that \[\left(\alpha_{n}\right)_{n\in\mathbb{N}^{\ast}}\left(c_{n}\right)_{n\in\mathbb{N}^{\ast}}=\left(1\right)_{n\in\mathbb{N}^{\ast}}\] That is \[\left\{\begin{array}[c]{c}\alpha_{n}c_{n}=1,\ n\in\mathbb{N}^{\ast}\\ \underset{n\in\mathbb{N}^{\ast}}{\sup}\left(\left|c_{n}\right|\right)<+\infty\end{array}\right.\] These conditions rewrite \[\sup_{n\in\mathbb{N}^{*}}\left(\frac{1}{\alpha_{n}}\right)<+\infty\] That is \[0<\inf_{n\in\mathbb{N}^{*}}\left(\alpha_{n}\right)\] The conclusion is that \(\left(\alpha_{n}\right)_{n\in\mathbb{N}^{*}}\) is a strictly positive element of \(l_{\infty}\left(\mathbb{H}\right)\) if and only if \[\left\{\begin{array}{c}\alpha_{n}\in\mathbb{R}^{+*},\ n\in\mathbb{N}^{*}\\ 0<\inf_{n\in\mathbb{N}^{*}}\left(\alpha_{n}\right)\end{array}\right.\] Applying this criterion, for each fixed \(n\in\mathbb{N}^{*},\) to the element \(a_{n}:=\left(a_{n,k}\right)_{k\in\mathbb{N}^{*}}\in l_{\infty}\left(\mathbb{H}\right),\) the final conclusion is that a sequence \(\left(\left(a_{n,k}\right)_{k\in\mathbb{N}^{*}}\right)_{n\in\mathbb{N}^{*}}\) of \(\left(l_{\infty}\left(\mathbb{H}\right)\right)^{\mathbb{N}^{*}}\) belongs to \(\mathcal{W}\left(l_{\infty}\left(\mathbb{H}\right)\right)\) if and only if \[\left\{\begin{array}{c}a_{n,k}\in\mathbb{R}^{+*},\ n,\ k\in\mathbb{N}^{*}\\ 0<\inf_{k\in\mathbb{N}^{*}}\left(a_{n,k}\right),\ n\in\mathbb{N}^{*}\end{array}\right.\] The proof of the proposition is then achieved. 
\(\blacksquare\) We denote by \(\preccurlyeq\), as usual, the partial order on the C*-algebra \(l_{\infty}\left(\mathbb{H}\right)\) defined by the positive elements of \(l_{\infty}\left(\mathbb{H}\right).\) In view of the last proposition, it is then clear that for every \(a:=\left(a_{n}\right)_{n\in\mathbb{N}^{*}},\)\(b:=\left(b_{n}\right)_{n\in\mathbb{N}^{*}}\in l_{\infty}\left(\mathbb{H}\right),\) we have \(a\preccurlyeq b\) if and only if \(b_{n}-a_{n}\in\mathbb{R}^{+}\) for every \(n\in\mathbb{N}^{*}.\) **Proposition 4.6.** _A sequence \(\left(a_{n}\right)_{n\in\mathbb{N}^{*}}\) of elements of \(\mathcal{W}\left(l_{\infty}\left(\mathbb{H}\right)\right)\) belongs to \(\mathfrak{m}\left(\left(U_{n}\right)_{n\in\mathbb{N}^{*}}\right)\) if and only if_ \[\left\{\begin{array}[c]{c}0<\sum_{n=1}^{+\infty}a_{n,k}^{2}d_{n,k},\ k\in\mathbb{N}^{*}\\ \sum_{k=1}^{+\infty}\sum_{n=1}^{+\infty}a_{n,k}^{2}d_{n,k}<+\infty\end{array}\right.\] _In this case \(\left(\left(U_{n},a_{n}\right)\right)_{n\in\mathbb{N}^{*}}\) is a \(\left(\sqrt{\sum_{n=1}^{+\infty}a_{n,k}^{2}d_{n,k}}\right)_{k\in\mathbb{N}^{*}}\)-tight *-fusion frame of \(\left(\mathbb{H}\right)_{l_{2}}\)._ **Proof** Let \(\left(a_{n}\right)_{n\in\mathbb{N}^{*}}:=\left(\left(a_{n,k}\right)_{k\in\mathbb{N}^{*}}\right)_{n\in\mathbb{N}^{*}}\in\mathcal{W}\left(l_{\infty}\left(\mathbb{H}\right)\right).\) Then \(\left(a_{n}\right)_{n\in\mathbb{N}^{*}}\in\mathfrak{m}\left(\left(U_{n}\right)_{n\in\mathbb{N}^{*}}\right)\) if and only if there exist two elements \(\alpha:=\left(\alpha_{n}\right)_{n\in\mathbb{N}^{*}}\) and \(\beta:=\left(\beta_{n}\right)_{n\in\mathbb{N}^{*}}\) belonging to \(\mathcal{W}\left(l_{\infty}\left(\mathbb{H}\right)\right)\) such that i. for every \(x:=\left(x_{n}\right)_{n\in\mathbb{N}^{*}}\in\left(\mathbb{H}\right)_{l_{2}},\) the series \(\sum a_{n}^{2}\left\langle P_{U_{n}}\left(x\right),P_{U_{n}}\left(x\right)\right\rangle\) is convergent in \(l_{\infty}\left(\mathbb{H}\right);\) ii. the following relation holds \[\alpha^{2}\left\langle x,x\right\rangle\preccurlyeq\sum_{n=1}^{+\infty}a_{n}^{2}\left\langle P_{U_{n}}\left(x\right),P_{U_{n}}\left(x\right)\right\rangle\preccurlyeq\beta^{2}\left\langle x,x\right\rangle\] But \[a_{n}^{2}\left\langle P_{U_{n}}\left(x\right),P_{U_{n}}\left(x\right)\right\rangle=\left(a_{n,k}^{2}d_{n,k}\left|x_{k}\right|^{2}\right)_{k\in\mathbb{N}^{*}},\ n\in\mathbb{N}^{*}\] Hence \(\left(a_{n}\right)_{n\in\mathbb{N}^{*}}\in\mathfrak{m}\left(\left(U_{n}\right)_{n\in\mathbb{N}^{*}}\right)\) if and only if i'. the series \(\sum\left(a_{n,k}^{2}d_{n,k}\left|x_{k}\right|^{2}\right)_{k\in\mathbb{N}^{*}}\) is convergent in \(\left(\mathbb{H}\right)_{l_{2}}\) for every \(x:=\left(x_{k}\right)_{k\in\mathbb{N}^{*}}\in\left(\mathbb{H}\right)_{l_{2}};\) ii'. 
\(\left(\alpha_{k}^{2}\left|x_{k}\right|^{2}\right)_{k\in\mathbb{N}^{*}}\preccurlyeq\sum_{n=1}^{+\infty}\left(a_{n,k}^{2}d_{n,k}\left|x_{k}\right|^{2}\right)_{k\in\mathbb{N}^{*}}\preccurlyeq\left(\beta_{k}^{2}\left|x_{k}\right|^{2}\right)_{k\in\mathbb{N}^{*}}.\) It follows then clearly that \(\left(a_{n}\right)_{n\in\mathbb{N}^{*}}\in\mathfrak{m}\left(\left(U_{n}\right)_{n\in\mathbb{N}^{*}}\right)\) if and only if \[\left\{\begin{array}{c}\sum_{n=1}^{+\infty}a_{n,k}^{2}d_{n,k}<+\infty,\ k\in\mathbb{N}^{*}\\ \sum_{k=1}^{+\infty}\left(\left(\sum_{n=1}^{+\infty}a_{n,k}^{2}d_{n,k}\right)\left|x_{k}\right|^{2}\right)^{2}<+\infty\\ \alpha_{k}^{2}\leq\sum_{n=1}^{+\infty}a_{n,k}^{2}d_{n,k}\leq\beta_{k}^{2},\,k\in\mathbb{N}^{*}\end{array}\right.\] holds for every \(x:=\left(x_{n}\right)_{n\in\mathbb{N}^{*}}\in\left(\mathbb{H}\right)_{l_{2}}.\) But since the mapping \[\begin{array}{ccc}l_{2}\left(\mathbb{C}\right)&\rightarrow&l_{1}\left(\mathbb{C}\right)\\ \left(t_{n}\right)_{n\in\mathbb{N}^{*}}&\mapsto&\left(t_{n}^{2}\right)_{n\in\mathbb{N}^{*}}\end{array}\] is onto, we obtain the following equivalence: \(\left(a_{n}\right)_{n\in\mathbb{N}^{*}}\in\mathfrak{m}\left(\left(U_{n}\right)_{n\in\mathbb{N}^{*}}\right)\) if and only if \[\left\{\begin{array}{c}\sum_{n=1}^{+\infty}a_{n,k}^{2}d_{n,k}<+\infty,\ k\in\mathbb{N}^{*}\\ \sum_{k=1}^{+\infty}\left(\left(\sum_{n=1}^{+\infty}a_{n,k}^{2}d_{n,k}\right)\left|s_{k}\right|\right)^{2}<+\infty\\ \alpha_{k}^{2}\leq\sum_{n=1}^{+\infty}a_{n,k}^{2}d_{n,k}\leq\beta_{k}^{2},\,k\in\mathbb{N}^{*}\end{array}\right.\] holds for every \(s:=\left(s_{n}\right)_{n\in\mathbb{N}^{*}}\in l_{1}\left(\mathbb{C}\right).\) Finally it follows easily that \(\left(a_{n}\right)_{n\in\mathbb{N}^{*}}\in\mathfrak{m}\left(\left(U_{n}\right)_{n\in\mathbb{N}^{*}}\right)\) if and only if \[\left\{\begin{array}{c}0<\sum_{n=1}^{+\infty}a_{n,k}^{2}d_{n,k}<+\infty,\ k\in\mathbb{N}^{*}\\ \sup_{k\in\mathbb{N}^{*}}\left(\sum_{n=1}^{+\infty}a_{n,k}^{2}d_{n,k}\right)<+\infty\\ \sum_{k=1}^{+\infty}\sum_{n=1}^{+\infty}a_{n,k}^{2}d_{n,k}<+\infty\end{array}\right.\] That is \[\left\{\begin{array}{c}0<\sum_{n=1}^{+\infty}a_{n,k}^{2}d_{n,k},\ k\in\mathbb{N}^{*}\\ \sum_{k=1}^{+\infty}\sum_{n=1}^{+\infty}a_{n,k}^{2}d_{n,k}<+\infty\end{array}\right.\] In this case we can take for \(\alpha:=\left(\alpha_{n}\right)_{n\in\mathbb{N}^{*}}\) and \(\beta:=\left(\beta_{n}\right)_{n\in\mathbb{N}^{*}}\) the values \[\alpha=\beta=\left(\sqrt{\sum_{n=1}^{+\infty}a_{n,k}^{2}d_{n,k}}\right)_{k\in\mathbb{N}^{*}}\] and \(\left(\left(U_{n},a_{n}\right)\right)_{n\in\mathbb{N}^{*}}\) becomes a \(\left(\sqrt{\sum_{n=1}^{+\infty}a_{n,k}^{2}d_{n,k}}\right)_{k\in\mathbb{N}^{*}}\)-tight *-fusion frame of \(\left(\mathbb{H}\right)_{l_{2}}.\) The proof of the proposition is then complete. \(\blacksquare\)

## 5 Perturbation of a *-fusion frame

### A distance in the set of all orthogonally complemented subspaces of a Hilbert \(\mathfrak{A}\)-module

We denote by \(\mathfrak{Comp}\left(\mathfrak{H}\right)\) the set of all orthogonally complemented subspaces of a Hilbert \(\mathfrak{A}\)-module \(\mathfrak{H}\). 
Let \(\mathfrak{d}\) be the mapping \[\begin{array}{cccc}\mathfrak{d}:&\mathfrak{Comp}\left(\mathfrak{H}\right)^{2}&\rightarrow&\mathbb{R}\\ &\left(U,V\right)&\mapsto&\mathfrak{d}\left(U,V\right):=\left\|\pi_{U}-\pi_{V}\right\|_{End_{\mathfrak{A}}^{*}\left(\mathfrak{H}\right)}\end{array}\] which was already introduced by D. S. Djordjevic in ([13]). It is clear that \(\mathfrak{d}\) is a well-defined mapping and that \[\left\{\begin{array}{c}\mathfrak{d}\left(U,V\right)\in\mathbb{R}^{+},\ U,\ V\in\mathfrak{Comp}\left(\mathfrak{H}\right)\\ \mathfrak{d}\left(U,V\right)=\mathfrak{d}\left(V,U\right),\ U,\ V\in\mathfrak{Comp}\left(\mathfrak{H}\right)\\ \mathfrak{d}\left(U,W\right)\leq\mathfrak{d}\left(U,V\right)+\mathfrak{d}\left(V,W\right),\ U,\ V,\ W\in\mathfrak{Comp}\left(\mathfrak{H}\right).\end{array}\right.\] Furthermore we have for every \(U,\,V\in\mathfrak{Comp}\left(\mathfrak{H}\right)\) \[\mathfrak{d}\left(U,V\right)=0\implies\pi_{U}=\pi_{V}\implies\pi_{U}\left(\mathfrak{H}\right)=\pi_{V}\left(\mathfrak{H}\right)\implies U=V\] The conclusion is that \(\mathfrak{d}\) is a distance on the set \(\mathfrak{Comp}\left(\mathfrak{H}\right).\)

### The angle between two orthogonally complemented subspaces of a Hilbert \(\mathfrak{A}\)-module

In the mentioned paper ([13]), the author proved that \[\mathfrak{d}\left(U,V\right)=\left\|\pi_{U}-\pi_{V}\right\|_{End_{\mathfrak{A}}^{*}\left(\mathfrak{H}\right)}\leq 1,\ U,\ V\in\mathfrak{Comp}\left(\mathfrak{H}\right) \tag{18}\] We can then set for every \(U,\)\(V\in\mathfrak{Comp}\left(\mathfrak{H}\right)\) \[\widehat{\left(U,V\right)}:=\arcsin\left(\mathfrak{d}\left(U,V\right)\right)\] The number \(\widehat{\left(U,V\right)},\) which belongs to the interval \(\left[0;\frac{\pi}{2}\right],\) is called the angle between the orthogonally complemented subspaces \(U\) and \(V.\) **Remark** _Let \(U,\)\(V\in\mathfrak{Comp}\left(\mathfrak{H}\right)\) be orthogonal; then \(\widehat{\left(U,V\right)}=\frac{\pi}{2},\) but the converse is in general false._ **Proof** 1. Assume that \(U,\)\(V\in\mathfrak{Comp}\left(\mathfrak{H}\right)\) are orthogonal. Then \[\left|\pi_{U}\left(x\right)-\pi_{V}\left(x\right)\right|^{2}=\left|\pi_{U}\left(x\right)\right|^{2}+\left|\pi_{V}\left(x\right)\right|^{2},\ x\in\mathfrak{H}\] So we have for each \(x\in U\) \[\left|\pi_{U}\left(x\right)-\pi_{V}\left(x\right)\right|^{2}=\left|\pi_{U}\left(x\right)\right|^{2}=\left|x\right|^{2}\] It follows that \[\left\|\left(\pi_{U}-\pi_{V}\right)\left(x\right)\right\|_{\mathfrak{H}}=\left\|x\right\|_{\mathfrak{H}},\ x\in U\] Hence (whenever \(U\neq\left\{0\right\}\)) we have \[1\leq\left\|\pi_{U}-\pi_{V}\right\|_{End_{\mathfrak{A}}^{*}\left(\mathfrak{H}\right)}\] Consequently, in view of (18), we obtain \[\left\|\pi_{U}-\pi_{V}\right\|_{End_{\mathfrak{A}}^{*}\left(\mathfrak{H}\right)}=1\] So \[\widehat{\left(U,V\right)}=\frac{\pi}{2}\] 2. Let us consider the classical Hilbert space \(l_{2}\left(\mathbb{C}\right),\) which is a Hilbert \(\mathbb{C}\)-module, \(\mathbb{C}\) being viewed as a C*-algebra. 
Let us set \[\left\{\begin{array}{l}U_{0}:=\left\{\left(x_{n}\right)_{n\in\mathbb{N}^{*}}\in l_{2}\left(\mathbb{C}\right):x_{n}=0\text{ if }n\in\mathbb{N}^{*}\backslash 4\mathbb{N}\right\}\\ V_{0}:=\left\{\left(x_{n}\right)_{n\in\mathbb{N}^{*}}\in l_{2}\left(\mathbb{C}\right):x_{n}=0\text{ if }n\in\mathbb{N}^{*}\backslash 2\mathbb{N}\right\}\end{array}\right.\] It is clear that \(U_{0}\) and \(V_{0}\) are orthogonally complemented Hilbert \(\mathbb{C}\)-submodules of the Hilbert \(\mathbb{C}\)-module \(l_{2}\left(\mathbb{C}\right)\) which are not orthogonal, and that \[\left(P_{U_{0}}-P_{V_{0}}\right)\left(\left(0,1,0,0,\ldots\right)\right)=-\left(0,1,0,0,\ldots\right)\] Hence \[\left\|P_{U_{0}}-P_{V_{0}}\right\|_{End^{\ast}_{\mathbb{C}}\left(l_{2}\left(\mathbb{C}\right)\right)}=1\] It follows that \[\widehat{\left(U_{0},V_{0}\right)}=\frac{\pi}{2}\] Consequently \(U_{0},V_{0}\in\mathfrak{Comp}\left(l_{2}\left(\mathbb{C}\right)\right)\) are not orthogonal but \(\widehat{\left(U_{0},V_{0}\right)}=\frac{\pi}{2}.\square\)

### Écart on the set \(\mathfrak{Comp}\left(\mathfrak{H}\right)^{\mathbb{N}^{\ast}}\) of all the sequences of orthogonally complemented subspaces of a Hilbert \(\mathfrak{A}\)-module

Let \(w:=\left(w_{n}\right)_{n\in\mathbb{N}^{\ast}}\) be a sequence of strictly positive real numbers. We consider the mapping \[\begin{array}{lcl}\mathfrak{d}_{w}:&\left(\mathfrak{Comp}\left(\mathfrak{H}\right)^{\mathbb{N}^{\ast}}\right)^{2}&\rightarrow&\mathbb{R}\cup\left\{+\infty\right\}\\ &\left(\left(U_{n}\right)_{n\in\mathbb{N}^{\ast}},\left(V_{n}\right)_{n\in\mathbb{N}^{\ast}}\right)&\mapsto&\sqrt{\sum_{n=1}^{+\infty}w_{n}\mathfrak{d}\left(U_{n},V_{n}\right)^{2}}\end{array}\] The mapping \(\mathfrak{d}_{w}\) is well-defined and is an écart, that is, a pseudometric which may take the value \(+\infty\), on the set \(\mathfrak{Comp}\left(\mathfrak{H}\right)^{\mathbb{N}^{\ast}}\) ([9], pages 61-64). Indeed, it is easy to prove that \(\mathfrak{d}_{w}\) fulfills, for every \(\left(U_{n}\right)_{n\in\mathbb{N}^{\ast}},\) \(\left(V_{n}\right)_{n\in\mathbb{N}^{\ast}},\) \(\left(W_{n}\right)_{n\in\mathbb{N}^{\ast}}\in\mathfrak{Comp}\left(\mathfrak{H}\right)^{\mathbb{N}^{\ast}},\) the following properties \[\left\{\begin{array}{c}\mathfrak{d}_{w}\left(\left(U_{n}\right)_{n\in\mathbb{N}^{\ast}},\left(V_{n}\right)_{n\in\mathbb{N}^{\ast}}\right)\in\mathbb{R}^{+}\cup\left\{+\infty\right\}\\ \mathfrak{d}_{w}\left(\left(U_{n}\right)_{n\in\mathbb{N}^{\ast}},\left(V_{n}\right)_{n\in\mathbb{N}^{\ast}}\right)=\mathfrak{d}_{w}\left(\left(V_{n}\right)_{n\in\mathbb{N}^{\ast}},\left(U_{n}\right)_{n\in\mathbb{N}^{\ast}}\right)\\ \mathfrak{d}_{w}\left(\left(U_{n}\right)_{n\in\mathbb{N}^{\ast}},\left(W_{n}\right)_{n\in\mathbb{N}^{\ast}}\right)\leq\mathfrak{d}_{w}\left(\left(U_{n}\right)_{n\in\mathbb{N}^{\ast}},\left(V_{n}\right)_{n\in\mathbb{N}^{\ast}}\right)+\mathfrak{d}_{w}\left(\left(V_{n}\right)_{n\in\mathbb{N}^{\ast}},\left(W_{n}\right)_{n\in\mathbb{N}^{\ast}}\right)\end{array}\right.\] It follows that \(\mathfrak{d}_{w}\) defines on the set \(\mathfrak{Comp}\left(\mathfrak{H}\right)^{\mathbb{N}^{\ast}}\) a topology \(T_{w}\) in the same way as for a distance: 
i. we define the open ball \(B_{\mathfrak{d}_{w}}\left(\left(U_{n}\right)_{n\in\mathbb{N}^{\ast}},r\right)\) of center \(\left(U_{n}\right)_{n\in\mathbb{N}^{\ast}}\in\mathfrak{Comp}\left(\mathfrak{H}\right)^{\mathbb{N}^{\ast}}\) and radius \(r\in\mathbb{R}^{+}\) \[B_{\mathfrak{d}_{w}}\left(\left(U_{n}\right)_{n\in\mathbb{N}^{\ast}},r\right)=\left\{\left(V_{n}\right)_{n\in\mathbb{N}^{\ast}}\in\mathfrak{Comp}\left(\mathfrak{H}\right)^{\mathbb{N}^{\ast}}:\mathfrak{d}_{w}\left(\left(U_{n}\right)_{n\in\mathbb{N}^{\ast}},\left(V_{n}\right)_{n\in\mathbb{N}^{\ast}}\right)<r\right\}\] ii. we define the open sets of \(\mathfrak{Comp}\left(\mathfrak{H}\right)^{\mathbb{N}^{\ast}}\) as the unions of arbitrary families of open balls. We can easily prove that the topology \(T_{w}\) satisfies the Hausdorff separation axiom. Hence \(\left(\mathfrak{Comp}\left(\mathfrak{H}\right)^{\mathbb{N}^{\ast}},T_{w}\right)\) is a Hausdorff topological space. Let us observe that if \(w\in l_{1}\left(\mathbb{C}\right)\) then \(\mathfrak{d}_{w}\) is real-valued (indeed, by (18), \(\sum_{n=1}^{+\infty}w_{n}\mathfrak{d}\left(U_{n},V_{n}\right)^{2}\leq\sum_{n=1}^{+\infty}w_{n}\)), so \(\mathfrak{d}_{w}\) is a distance on \(\mathfrak{Comp}\left(\mathfrak{H}\right)^{\mathbb{N}^{\ast}}\) and \(T_{w}\) becomes a metric space topology on \(\mathfrak{Comp}\left(\mathfrak{H}\right)^{\mathbb{N}^{\ast}}.\) Let \(\omega:=\left(\omega_{n}\right)_{n\in\mathbb{N}^{\ast}}\in\mathcal{W}\left(\mathfrak{A}\right).\) We denote by \(\mathfrak{q}\left(\omega\right)\) the sequence \(\left(\left\|\omega_{n}\right\|_{\mathfrak{A}}^{2}\right)_{n\in\mathbb{N}^{\ast}}.\) We denote by \(\mathfrak{Comp}_{\omega}\left(\mathfrak{H}\right)^{\mathbb{N}^{\ast}}\) the set of all \(\left(\mathfrak{K}_{n}\right)_{n\in\mathbb{N}^{\ast}}\in\mathfrak{Comp}\left(\mathfrak{H}\right)^{\mathbb{N}^{\ast}}\) such that \(\left(\left(\mathfrak{K}_{n},\omega_{n}\right)\right)_{n\in\mathbb{N}^{\ast}}\) is a *-fusion frame.

### Results on the perturbation of *-fusion frames

**Theorem 5.1**.: 1. _Let \(\omega:=\left(\omega_{n}\right)_{n\in\mathbb{N}^{\ast}}\in\mathcal{W}\left(\mathfrak{A}\right)\) and \(\left(\mathfrak{H}_{n}\right)_{n\in\mathbb{N}^{\ast}},\)\(\left(\mathfrak{K}_{n}\right)_{n\in\mathbb{N}^{\ast}}\in\mathfrak{Comp}\left(\mathfrak{H}\right)^{\mathbb{N}^{\ast}}.\) If \(\left(\left(\mathfrak{H}_{n},\omega_{n}\right)\right)_{n\in\mathbb{N}^{\ast}}\) is a *-fusion frame of \(\mathfrak{H}\) of lower bound \(A\in\mathfrak{A},\) and the following condition holds_ \[\mathfrak{d}_{\mathfrak{q}\left(\omega\right)}\left(\left(\mathfrak{H}_{n}\right)_{n\in\mathbb{N}^{\ast}},\left(\mathfrak{K}_{n}\right)_{n\in\mathbb{N}^{\ast}}\right)<\left\|A^{-1}\right\|_{\mathfrak{A}}^{-1}\] _then \(\left(\left(\mathfrak{K}_{n},\omega_{n}\right)\right)_{n\in\mathbb{N}^{\ast}}\) will be a *-fusion frame of \(\mathfrak{H}.\)_ 2. \(\mathfrak{Comp}_{\omega}\left(\mathfrak{H}\right)^{\mathbb{N}^{\ast}}\) _is an open set in the topological space \(\left(\mathfrak{Comp}\left(\mathfrak{H}\right)^{\mathbb{N}^{\ast}},T_{\mathfrak{q}\left(\omega\right)}\right).\)_ **Proof** 1. 
Assume that \(\left(\left(\mathfrak{H}_{n},\omega_{n}\right)\right)_{n\in\mathbb{N}^{\ast}}\) is a *-fusion frame of \(\mathfrak{H}\) of lower bound \(A\in\mathfrak{A}\) and upper bound \(B\in\mathfrak{A},\) and that the following condition holds \[\mathfrak{d}_{\mathfrak{q}\left(\omega\right)}\left(\left(\mathfrak{H}_{n}\right)_{n\in\mathbb{N}^{\ast}},\left(\mathfrak{K}_{n}\right)_{n\in\mathbb{N}^{\ast}}\right)<\left\|A^{-1}\right\|_{\mathfrak{A}}^{-1}\] For each \(n,m\in\mathbb{N}^{\ast},\)\(x\in\mathfrak{H}\) we have \[\sum_{j=n}^{n+m}\omega_{j}^{2}\left|P_{\mathfrak{K}_{j}}\left(x\right)\right|^{2}=\sum_{j=n}^{n+m}\omega_{j}^{2}\left|P_{\mathfrak{K}_{j}}\left(x\right)-P_{\mathfrak{H}_{j}}\left(x\right)\right|^{2}+\sum_{j=n}^{n+m}\omega_{j}^{2}\left\langle P_{\mathfrak{H}_{j}}\left(x\right),P_{\mathfrak{K}_{j}}\left(x\right)-P_{\mathfrak{H}_{j}}\left(x\right)\right\rangle+\sum_{j=n}^{n+m}\omega_{j}^{2}\left\langle P_{\mathfrak{K}_{j}}\left(x\right)-P_{\mathfrak{H}_{j}}\left(x\right),P_{\mathfrak{H}_{j}}\left(x\right)\right\rangle+\sum_{j=n}^{n+m}\omega_{j}^{2}\left|P_{\mathfrak{H}_{j}}\left(x\right)\right|^{2}\] But we know, thanks to the Cauchy-Schwarz type inequality in Hilbert \(\mathfrak{A}\)-modules, that \[\max\left(\left\|\sum_{j=n}^{n+m}\omega_{j}^{2}\left\langle P_{\mathfrak{H}_{j}}\left(x\right),P_{\mathfrak{K}_{j}}\left(x\right)-P_{\mathfrak{H}_{j}}\left(x\right)\right\rangle\right\|_{\mathfrak{A}},\left\|\sum_{j=n}^{n+m}\omega_{j}^{2}\left\langle P_{\mathfrak{K}_{j}}\left(x\right)-P_{\mathfrak{H}_{j}}\left(x\right),P_{\mathfrak{H}_{j}}\left(x\right)\right\rangle\right\|_{\mathfrak{A}}\right)\leq\sqrt{\left\|\sum_{j=n}^{n+m}\omega_{j}^{2}\left|P_{\mathfrak{K}_{j}}\left(x\right)-P_{\mathfrak{H}_{j}}\left(x\right)\right|^{2}\right\|_{\mathfrak{A}}\left\|\sum_{j=n}^{n+m}\omega_{j}^{2}\left|P_{\mathfrak{H}_{j}}\left(x\right)\right|^{2}\right\|_{\mathfrak{A}}}\] It follows that \[\left\|\sum_{j=n}^{n+m}\omega_{j}^{2}\left|P_{\mathfrak{K}_{j}}\left(x\right)\right|^{2}\right\|_{\mathfrak{A}}\leq\left(\sqrt{\left\|\sum_{j=n}^{n+m}\omega_{j}^{2}\left|P_{\mathfrak{H}_{j}}\left(x\right)\right|^{2}\right\|_{\mathfrak{A}}}+\sqrt{\left\|\sum_{j=n}^{n+m}\omega_{j}^{2}\left|P_{\mathfrak{K}_{j}}\left(x\right)-P_{\mathfrak{H}_{j}}\left(x\right)\right|^{2}\right\|_{\mathfrak{A}}}\right)^{2}\] So \[\sqrt{\left\|\sum_{j=n}^{n+m}\omega_{j}^{2}\left|P_{\mathfrak{K}_{j}}\left(x\right)\right|^{2}\right\|_{\mathfrak{A}}}\leq\sqrt{\left\|\sum_{j=n}^{n+m}\omega_{j}^{2}\left|P_{\mathfrak{H}_{j}}\left(x\right)\right|^{2}\right\|_{\mathfrak{A}}}+\sqrt{\left\|\sum_{j=n}^{n+m}\omega_{j}^{2}\left|P_{\mathfrak{K}_{j}}\left(x\right)-P_{\mathfrak{H}_{j}}\left(x\right)\right|^{2}\right\|_{\mathfrak{A}}}\] Since the sequences \(\left(\mathfrak{H}_{j}\right)_{j\in\mathbb{N}^{*}}\) and \(\left(\mathfrak{K}_{j}\right)_{j\in\mathbb{N}^{*}}\) play the same role in the previous reasoning, we also have \[\sqrt{\left\|\sum_{j=n}^{n+m}\omega_{j}^{2}\left|P_{\mathfrak{H}_{j}}\left(x\right)\right|^{2}\right\|_{\mathfrak{A}}}\leq\sqrt{\left\|\sum_{j=n}^{n+m}\omega_{j}^{2}\left|P_{\mathfrak{K}_{j}}\left(x\right)\right|^{2}\right\|_{\mathfrak{A}}}+\sqrt{\left\|\sum_{j=n}^{n+m}\omega_{j}^{2}\left|P_{\mathfrak{K}_{j}}\left(x\right)-P_{\mathfrak{H}_{j}}\left(x\right)\right|^{2}\right\|_{\mathfrak{A}}}\] It follows that \[\left|\sqrt{\left\|\sum_{j=n}^{n+m}\omega_{j}^{2}\left|P_{\mathfrak{K}_{j}}\left(x\right)\right|^{2}\right\|_{\mathfrak{A}}}-\sqrt{\left\|\sum_{j=n}^{n+m}\omega_{j}^{2}\left|P_{\mathfrak{H}_{j}}\left(x\right)\right|^{2}\right\|_{\mathfrak{A}}}\right|\leq\sqrt{\left\|\sum_{j=n}^{n+m}\omega_{j}^{2}\left|P_{\mathfrak{K}_{j}}\left(x\right)-P_{\mathfrak{H}_{j}}\left(x\right)\right|^{2}\right\|_{\mathfrak{A}}}\tag{19}\] But we know that the series \(\sum\omega_{j}^{2}\left|P_{\mathfrak{H}_{j}}\left(x\right)\right|^{2}\) and \(\sum\omega_{j}^{2}\left|P_{\mathfrak{K}_{j}}\left(x\right)-P_{\mathfrak{H}_{j}}\left(x\right)\right|^{2}\) are convergent (for the second one, note that \(\left\|\sum_{j=n}^{n+m}\omega_{j}^{2}\left|\left(P_{\mathfrak{K}_{j}}-P_{\mathfrak{H}_{j}}\right)\left(x\right)\right|^{2}\right\|_{\mathfrak{A}}\leq\sum_{j=n}^{n+m}\left\|\omega_{j}\right\|_{\mathfrak{A}}^{2}\mathfrak{d}\left(\mathfrak{H}_{j},\mathfrak{K}_{j}\right)^{2}\left\|x\right\|_{\mathfrak{H}}^{2}\) and \(\mathfrak{d}_{\mathfrak{q}\left(\omega\right)}\left(\left(\mathfrak{H}_{n}\right)_{n\in\mathbb{N}^{*}},\left(\mathfrak{K}_{n}\right)_{n\in\mathbb{N}^{*}}\right)<+\infty\)). It follows that \[\left\{\begin{array}{c}\lim_{n\rightarrow+\infty}\sup_{m\in\mathbb{N}^{*}}\left\|\sum_{j=n}^{n+m}\omega_{j}^{2}\left|P_{\mathfrak{H}_{j}}\left(x\right)\right|^{2}\right\|_{\mathfrak{A}}=0\\ \lim_{n\rightarrow+\infty}\sup_{m\in\mathbb{N}^{*}}\left\|\sum_{j=n}^{n+m}\omega_{j}^{2}\left|P_{\mathfrak{K}_{j}}\left(x\right)-P_{\mathfrak{H}_{j}}\left(x\right)\right|^{2}\right\|_{\mathfrak{A}}=0\end{array}\right.\] Consequently the series \(\sum\omega_{j}^{2}\left|P_{\mathfrak{K}_{j}}\left(x\right)\right|^{2}\) is convergent in \(\mathfrak{A}\). Furthermore, if we take \(n=1\) and let \(m\) tend to infinity, then the relation (19) becomes \[\left|\sqrt{\left\|\sum_{j=1}^{+\infty}\omega_{j}^{2}\left|P_{\mathfrak{K}_{j}}\left(x\right)\right|^{2}\right\|_{\mathfrak{A}}}-\sqrt{\left\|\sum_{j=1}^{+\infty}\omega_{j}^{2}\left|P_{\mathfrak{H}_{j}}\left(x\right)\right|^{2}\right\|_{\mathfrak{A}}}\right|\leq\mathfrak{d}_{\mathfrak{q}\left(\omega\right)}\left(\left(\mathfrak{H}_{n}\right)_{n\in\mathbb{N}^{*}},\left(\mathfrak{K}_{n}\right)_{n\in\mathbb{N}^{*}}\right)\left\|x\right\|_{\mathfrak{H}}\] It follows that \[\left\{\begin{array}{c}\sqrt{\left\|\sum_{j=1}^{+\infty}\omega_{j}^{2}\left|P_{\mathfrak{H}_{j}}\left(x\right)\right|^{2}\right\|_{\mathfrak{A}}}-\mathfrak{d}_{\mathfrak{q}\left(\omega\right)}\left(\left(\mathfrak{H}_{n}\right)_{n\in\mathbb{N}^{*}},\left(\mathfrak{K}_{n}\right)_{n\in\mathbb{N}^{*}}\right)\left\|x\right\|_{\mathfrak{H}}\leq\sqrt{\left\|\sum_{j=1}^{+\infty}\omega_{j}^{2}\left|P_{\mathfrak{K}_{j}}\left(x\right)\right|^{2}\right\|_{\mathfrak{A}}}\\ \sqrt{\left\|\sum_{j=1}^{+\infty}\omega_{j}^{2}\left|P_{\mathfrak{K}_{j}}\left(x\right)\right|^{2}\right\|_{\mathfrak{A}}}\leq\sqrt{\left\|\sum_{j=1}^{+\infty}\omega_{j}^{2}\left|P_{\mathfrak{H}_{j}}\left(x\right)\right|^{2}\right\|_{\mathfrak{A}}}+\mathfrak{d}_{\mathfrak{q}\left(\omega\right)}\left(\left(\mathfrak{H}_{n}\right)_{n\in\mathbb{N}^{*}},\left(\mathfrak{K}_{n}\right)_{n\in\mathbb{N}^{*}}\right)\left\|x\right\|_{\mathfrak{H}}\end{array}\right.\] But, since \(A\) and \(B\) are lower and upper bounds of the *-fusion frame \(\left(\left(\mathfrak{H}_{n},\omega_{n}\right)\right)_{n\in\mathbb{N}^{*}},\) we have \[\left\{\begin{array}{l}\sqrt{\left\|\sum_{j=1}^{+\infty}\omega_{j}^{2}\left|P_{\mathfrak{H}_{j}}\left(x\right)\right|^{2}\right\|_{\mathfrak{A}}}\leq\left\|B\right\|_{\mathfrak{A}}\left\|x\right\|_{\mathfrak{H}}\\ \left\|A^{-1}\right\|_{\mathfrak{A}}^{-1}\left\|x\right\|_{\mathfrak{H}}\leq\sqrt{\left\|\sum_{j=1}^{+\infty}\omega_{j}^{2}\left|P_{\mathfrak{H}_{j}}\left(x\right)\right|^{2}\right\|_{\mathfrak{A}}}\end{array}\right.\] Hence \[\left\{\begin{array}{l}\sqrt{\left\|\sum_{j=1}^{+\infty}\omega_{j}^{2}\left|P_{\mathfrak{K}_{j}}\left(x\right)\right|^{2}\right\|_{\mathfrak{A}}}\leq\left(\left\|B\right\|_{\mathfrak{A}}+\mathfrak{d}_{\mathfrak{q}\left(\omega\right)}\left(\left(\mathfrak{H}_{n}\right)_{n\in\mathbb{N}^{*}},\left(\mathfrak{K}_{n}\right)_{n\in\mathbb{N}^{*}}\right)\right)\left\|x\right\|_{\mathfrak{H}}\\ \left(\left\|A^{-1}\right\|_{\mathfrak{A}}^{-1}-\mathfrak{d}_{\mathfrak{q}\left(\omega\right)}\left(\left(\mathfrak{H}_{n}\right)_{n\in\mathbb{N}^{*}},\left(\mathfrak{K}_{n}\right)_{n\in\mathbb{N}^{*}}\right)\right)\left\|x\right\|_{\mathfrak{H}}\leq\sqrt{\left\|\sum_{j=1}^{+\infty}\omega_{j}^{2}\left|P_{\mathfrak{K}_{j}}\left(x\right)\right|^{2}\right\|_{\mathfrak{A}}}\end{array}\right.\] But \(C:=\left\|A^{-1}\right\|_{\mathfrak{A}}^{-1}-\mathfrak{d}_{\mathfrak{q}\left(\omega\right)}\left(\left(\mathfrak{H}_{n}\right)_{n\in\mathbb{N}^{*}},\left(\mathfrak{K}_{n}\right)_{n\in\mathbb{N}^{*}}\right)>0.\) It follows that \[\left\{\begin{array}{l}\left\|\sum_{j=1}^{+\infty}\omega_{j}^{2}\left|P_{\mathfrak{K}_{j}}\left(x\right)\right|^{2}\right\|_{\mathfrak{A}}\leq\left(\left\|B\right\|_{\mathfrak{A}}+\mathfrak{d}_{\mathfrak{q}\left(\omega\right)}\left(\left(\mathfrak{H}_{n}\right)_{n\in\mathbb{N}^{*}},\left(\mathfrak{K}_{n}\right)_{n\in\mathbb{N}^{*}}\right)\right)^{2}\left\|x\right\|_{\mathfrak{H}}^{2}\\ C^{2}\left\|x\right\|_{\mathfrak{H}}^{2}\leq\left\|\sum_{j=1}^{+\infty}\omega_{j}^{2}\left|P_{\mathfrak{K}_{j}}\left(x\right)\right|^{2}\right\|_{\mathfrak{A}}\end{array}\right.\] According to Theorem 3.6, \(\left(\left(\mathfrak{K}_{n},\omega_{n}\right)\right)_{n\in\mathbb{N}^{*}}\) is a *-fusion frame of \(\mathfrak{H}.\) 2. If \(\mathfrak{Comp}_{\omega}\left(\mathfrak{H}\right)^{\mathbb{N}^{*}}=\emptyset,\) then \(\mathfrak{Comp}_{\omega}\left(\mathfrak{H}\right)^{\mathbb{N}^{*}}\) is trivially an open set in the topological space \(\left(\mathfrak{Comp}\left(\mathfrak{H}\right)^{\mathbb{N}^{*}},T_{\mathfrak{q}\left(\omega\right)}\right).\) Assume that \(\mathfrak{Comp}_{\omega}\left(\mathfrak{H}\right)^{\mathbb{N}^{*}}\neq\emptyset\) and that \(\left(\mathfrak{H}_{n}\right)_{n\in\mathbb{N}^{*}}\in\mathfrak{Comp}_{\omega}\left(\mathfrak{H}\right)^{\mathbb{N}^{*}}.\) Let \(A\) be a lower bound of the *-fusion frame \(\left(\left(\mathfrak{H}_{n},\omega_{n}\right)\right)_{n\in\mathbb{N}^{*}}.\) The result obtained in the first part of the proof can be expressed by the inclusion \[B_{\mathfrak{d}_{\mathfrak{q}\left(\omega\right)}}\left(\left(\mathfrak{H}_{n}\right)_{n\in\mathbb{N}^{*}},\left\|A^{-1}\right\|_{\mathfrak{A}}^{-1}\right)\subset\mathfrak{Comp}_{\omega}\left(\mathfrak{H}\right)^{\mathbb{N}^{*}}\] Hence \(\mathfrak{Comp}_{\omega}\left(\mathfrak{H}\right)^{\mathbb{N}^{*}}\) is an open set in the topological space \(\left(\mathfrak{Comp}\left(\mathfrak{H}\right)^{\mathbb{N}^{*}},T_{\mathfrak{q}\left(\omega\right)}\right).\) The proof of the theorem is then complete. 
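To make part 1 of the theorem tangible, here is a small numerical sanity check in the simplest case \(\mathfrak{A}=\mathbb{C}\), where \(\mathfrak{H}\) is an ordinary finite-dimensional Hilbert space, the weights \(\omega_{n}\) are scalars, and \(\left\|A^{-1}\right\|^{-1}\) can be taken as the square root of the smallest eigenvalue of the frame operator. The random construction and all names below are our own illustrative choices, not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, N, eps = 6, 8, 1e-3
omega = rng.uniform(0.5, 1.0, N)           # scalar frame weights

def proj(M):
    Q, _ = np.linalg.qr(M)                 # orthonormal basis of col(M)
    return Q @ Q.T                         # orthogonal projection onto col(M)

bases = [rng.normal(size=(d, 2)) for _ in range(N)]
P  = [proj(B) for B in bases]                                   # subspaces H_n
Qp = [proj(B + eps * rng.normal(size=B.shape)) for B in bases]  # perturbed K_n

S = sum(w**2 * p for w, p in zip(omega, P))      # frame operator of (H_n)
lower = np.sqrt(np.linalg.eigvalsh(S).min())     # best ||A^{-1}||^{-1}

# ecart d_{q(omega)} between (H_n) and (K_n): spectral norms of P_n - Q_n
ecart = np.sqrt(sum(w**2 * np.linalg.norm(p - q, 2)**2
                    for w, p, q in zip(omega, P, Qp)))

Sq = sum(w**2 * q for w, q in zip(omega, Qp))
new_lower = np.sqrt(np.linalg.eigvalsh(Sq).min())

print(ecart < lower)               # hypothesis of Theorem 5.1
print(new_lower >= lower - ecart)  # quantitative conclusion C = ||A^{-1}||^{-1} - ecart
```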
**Corollary 5.2.** _Let \(\omega:=\left(\omega_{n}\right)_{n\in\mathbb{N}^{*}}\in\mathcal{W}\left(\mathfrak{A}\right)\) and \(\left(\mathfrak{H}_{n}\right)_{n\in\mathbb{N}^{*}},\)\(\left(\mathfrak{K}_{n}\right)_{n\in\mathbb{N}^{*}}\in\mathfrak{Comp}\left(\mathfrak{H}\right)^{\mathbb{N}^{*}}.\)_ 1. _If_ \(\left(\left(\mathfrak{H}_{n},\omega_{n}\right)\right)_{n\in\mathbb{N}^{*}}\) _is a *-fusion frame of_ \(\mathfrak{H}\) _of lower bound_ \(A\in\mathfrak{A},\) _and the following condition holds_ \[\sum_{n=1}^{+\infty}\left\|\omega_{n}\right\|_{\mathfrak{A}}^{2}\widehat{\left(\mathfrak{H}_{n},\mathfrak{K}_{n}\right)}^{2}<\left\|A^{-1}\right\|_{\mathfrak{A}}^{-2}\] _then \(\left(\left(\mathfrak{K}_{n},\omega_{n}\right)\right)_{n\in\mathbb{N}^{*}}\) will be a *-fusion frame of \(\mathfrak{H}.\)_ 2. _Assume that \(\left(\left(\mathfrak{H}_{n},\omega_{n}\right)\right)_{n\in\mathbb{N}^{*}}\) is a *-fusion frame of \(\mathfrak{H}\) of lower bound \(A\in\mathfrak{A},\) and that the sequence \(\omega\) is bounded, with \(\left\|\omega\right\|_{\infty}:=\sup_{n\in\mathbb{N}^{*}}\left\|\omega_{n}\right\|_{\mathfrak{A}}.\) Then if the following condition holds_ \[\sum_{n=1}^{+\infty}\widehat{\left(\mathfrak{H}_{n},\mathfrak{K}_{n}\right)}^{2}<\left(\frac{\left\|A^{-1}\right\|_{\mathfrak{A}}^{-1}}{\left\|\omega\right\|_{\infty}}\right)^{2}\] _then \(\left(\left(\mathfrak{K}_{n},\omega_{n}\right)\right)_{n\in\mathbb{N}^{*}}\) will be a *-fusion frame of \(\mathfrak{H}.\)_ 3. _Assume that \(\left(\left(\mathfrak{H}_{n},\omega_{n}\right)\right)_{n\in\mathbb{N}^{*}}\) is a *-fusion frame of \(\mathfrak{H}\) of lower bound \(A\in\mathfrak{A},\) and that there is a real constant \(p\in]1;+\infty[\) such that the sequence \(\mathfrak{q}\left(\omega\right)\) belongs to \(l_{p}\left(\mathbb{C}\right)\). Then if the following condition holds_ \[\sum_{n=1}^{+\infty}\widehat{\left(\mathfrak{H}_{n},\mathfrak{K}_{n}\right)}^{\frac{2p}{p-1}}<\left(\frac{\left\|A^{-1}\right\|_{\mathfrak{A}}^{-2}}{\left\|\mathfrak{q}\left(\omega\right)\right\|_{p}}\right)^{\frac{p}{p-1}}\] _then \(\left(\left(\mathfrak{K}_{n},\omega_{n}\right)\right)_{n\in\mathbb{N}^{*}}\) will be a *-fusion frame of \(\mathfrak{H}.\)_ **Proof** Assume that \(\left(\left(\mathfrak{H}_{n},\omega_{n}\right)\right)_{n\in\mathbb{N}^{*}}\) is a *-fusion frame of \(\mathfrak{H}\) of lower bound \(A\in\mathfrak{A}.\) We have \[\mathfrak{d}_{\mathfrak{q}\left(\omega\right)}\left(\left(\mathfrak{H}_{n}\right)_{n\in\mathbb{N}^{*}},\left(\mathfrak{K}_{n}\right)_{n\in\mathbb{N}^{*}}\right)=\sqrt{\sum_{n=1}^{+\infty}\left\|\omega_{n}\right\|_{\mathfrak{A}}^{2}\mathfrak{d}\left(\mathfrak{H}_{n},\mathfrak{K}_{n}\right)^{2}}=\sqrt{\sum_{n=1}^{+\infty}\left\|\omega_{n}\right\|_{\mathfrak{A}}^{2}\sin^{2}\widehat{\left(\mathfrak{H}_{n},\mathfrak{K}_{n}\right)}}\leq\sqrt{\sum_{n=1}^{+\infty}\left\|\omega_{n}\right\|_{\mathfrak{A}}^{2}\widehat{\left(\mathfrak{H}_{n},\mathfrak{K}_{n}\right)}^{2}}\] 1. Assume that \[\sum_{n=1}^{+\infty}\left\|\omega_{n}\right\|_{\mathfrak{A}}^{2}\widehat{\left(\mathfrak{H}_{n},\mathfrak{K}_{n}\right)}^{2}<\left\|A^{-1}\right\|_{\mathfrak{A}}^{-2}\] It follows that \[\mathfrak{d}_{\mathfrak{q}\left(\omega\right)}\left(\left(\mathfrak{H}_{n}\right)_{n\in\mathbb{N}^{*}},\left(\mathfrak{K}_{n}\right)_{n\in\mathbb{N}^{*}}\right)<\left\|A^{-1}\right\|_{\mathfrak{A}}^{-1}\] Hence, by virtue of Theorem 5.1, \(\left(\left(\mathfrak{K}_{n},\omega_{n}\right)\right)_{n\in\mathbb{N}^{*}}\) is a *-fusion frame of \(\mathfrak{H}.\) 2. 
Assume that the sequence \(\omega\) is bounded and that \[\sum_{n=1}^{+\infty}\widehat{\left(\mathfrak{H}_{n},\mathfrak{K}_{n}\right)}^{2}<\frac{\left\|A^{-1}\right\|_{\mathfrak{A}}^{-2}}{\left\|\omega\right\|_{\infty}^{2}}\] Hence \[\mathfrak{d}_{\mathfrak{q}\left(\omega\right)}\left(\left(\mathfrak{H}_{n}\right)_{n\in\mathbb{N}^{*}},\left(\mathfrak{K}_{n}\right)_{n\in\mathbb{N}^{*}}\right)\leq\sqrt{\sum_{n=1}^{+\infty}\left\|\omega_{n}\right\|_{\mathfrak{A}}^{2}\widehat{\left(\mathfrak{H}_{n},\mathfrak{K}_{n}\right)}^{2}}\leq\left\|\omega\right\|_{\infty}\sqrt{\sum_{n=1}^{+\infty}\widehat{\left(\mathfrak{H}_{n},\mathfrak{K}_{n}\right)}^{2}}<\left\|A^{-1}\right\|_{\mathfrak{A}}^{-1}\] Consequently \(\left(\left(\mathfrak{K}_{n},\omega_{n}\right)\right)_{n\in\mathbb{N}^{*}}\) is a *-fusion frame of \(\mathfrak{H}\). 3. Assume that there is a real constant \(p\in]1;+\infty[\) such that the sequence \(\mathfrak{q}\left(\omega\right)\) belongs to \(l_{p}\left(\mathbb{C}\right)\) and that \[\sum_{n=1}^{+\infty}\widehat{\left(\mathfrak{H}_{n},\mathfrak{K}_{n}\right)}^{\frac{2p}{p-1}}<\left(\frac{\left\|A^{-1}\right\|_{\mathfrak{A}}^{-2}}{\left\|\mathfrak{q}\left(\omega\right)\right\|_{p}}\right)^{\frac{p}{p-1}}\] Hence, applying the well-known Hölder inequality, we obtain \[\mathfrak{d}_{\mathfrak{q}\left(\omega\right)}\left(\left(\mathfrak{H}_{n}\right)_{n\in\mathbb{N}^{*}},\left(\mathfrak{K}_{n}\right)_{n\in\mathbb{N}^{*}}\right)\leq\sqrt{\sum_{n=1}^{+\infty}\left\|\omega_{n}\right\|_{\mathfrak{A}}^{2}\widehat{\left(\mathfrak{H}_{n},\mathfrak{K}_{n}\right)}^{2}}\leq\sqrt{\left(\sum_{n=1}^{+\infty}\left\|\omega_{n}\right\|_{\mathfrak{A}}^{2p}\right)^{\frac{1}{p}}\left(\sum_{n=1}^{+\infty}\widehat{\left(\mathfrak{H}_{n},\mathfrak{K}_{n}\right)}^{\frac{2p}{p-1}}\right)^{\frac{p-1}{p}}}=\sqrt{\left\|\mathfrak{q}\left(\omega\right)\right\|_{p}\left(\sum_{n=1}^{+\infty}\widehat{\left(\mathfrak{H}_{n},\mathfrak{K}_{n}\right)}^{\frac{2p}{p-1}}\right)^{\frac{p-1}{p}}}<\left\|A^{-1}\right\|_{\mathfrak{A}}^{-1}\] Consequently \(\left(\left(\mathfrak{K}_{n},\omega_{n}\right)\right)_{n\in\mathbb{N}^{*}}\) is a *-fusion frame of \(\mathfrak{H}\). \(\blacksquare\)
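In the same spirit, here is a self-contained finite-dimensional sketch of the angle-based criterion of Corollary 5.2 (2), again in the scalar case \(\mathfrak{A}=\mathbb{C}\); the subspaces, weights, and perturbations below are illustrative choices of ours.

```python
import numpy as np

def proj(*cols):
    """Orthogonal projection onto the span of the given column vectors."""
    B = np.array(cols, dtype=float).T
    Q, _ = np.linalg.qr(B)
    return Q @ Q.T

omega = np.array([1.0, 0.8])
H = [proj([1, 0, 0, 0], [0, 1, 0, 0]),      # H_1 = span{e1, e2}
     proj([0, 0, 1, 0], [0, 0, 0, 1])]      # H_2 = span{e3, e4}
S = sum(w**2 * p for w, p in zip(omega, H))
lower = np.sqrt(np.linalg.eigvalsh(S).min())         # ||A^{-1}||^{-1}

# Slightly tilted subspaces K_n and the angles (H_n, K_n)
K = [proj([1, 0, 0.05, 0], [0, 1, 0, 0]),
     proj([0, 0, 1, 0], [0, 0.05, 0, 1])]
theta = [np.arcsin(min(1.0, np.linalg.norm(p - q, 2)))
         for p, q in zip(H, K)]

lhs = sum(t**2 for t in theta)
rhs = (lower / max(omega))**2       # (||A^{-1}||^{-1} / ||omega||_inf)^2
print(lhs < rhs)                    # hypothesis of Corollary 5.2 (2)
print(np.linalg.eigvalsh(sum(w**2 * q
                             for w, q in zip(omega, K))).min() > 0)
```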
2308.08658
A Data-Theoretic Approach to Identifying Violent Facial Expressions in Social Crime Contexts
Human facial expressions play an important role in identifying human actions or intentions. Facial expressions can represent specific actions of a person, and the pattern of violent behavior of a person strongly depends on the geographic region. Here we have designed an automated system using a Convolutional Neural Network which can detect whether a person has any intention to commit a crime or not. We propose a new method that can identify criminal intentions or violent behavior of a person before a crime is executed, more efficiently and using very little data on facial expressions captured before a crime or violent act. Instead of using hand-crafted image features, which is a time-consuming and error-prone method, we used a Convolutional Neural Network as an automated feature selector, which can capture the exact facial expressions for training and then predict the target facial expressions more accurately. Here we used only the facial data of a specific geographic region, which can represent the violent and pre-crime facial patterns of the people of the whole region.
Arindam Kumar Paul
2023-08-16T20:12:43Z
http://arxiv.org/abs/2308.08658v2
# A New Data-Driven Method to Identify Violent Facial Expression ###### Abstract Human facial expressions play an important role in identifying human actions or intentions. Facial expressions can represent specific actions of a person, and the pattern of violent behavior of a person strongly depends on the geographic region. Here we have designed an automated system using a Convolutional Neural Network which can detect whether a person has any intention to commit a crime or not. We propose a new method that can identify criminal intentions or violent behavior of a person before a crime is executed, more efficiently and using very little data on facial expressions captured before a crime or any violent task. Instead of using hand-crafted image features, which is a time-consuming and error-prone method, we used a CNN (Convolutional Neural Network) model as an automated feature selector, which can capture the exact facial expressions for training and then predict the target facial expressions more accurately. Here we used only the facial data of a specific geographic region, which can represent the violent and pre-crime facial patterns of the people of the whole region. Bioinformatics, Humanitarian Challenges, Humanitarian Apps & Opportunities ## I Introduction From the literature on violence detection, we can observe that no machine learning or other models are currently being used for this task by any industry, individuals, or institutions. However, many researchers have been trying to build an effective learning model for violence detection for a long time; it remains one of the most valuable and hardest tasks today. [1] According to the report of the US Ministry of Media Control and "The influence of the cinema on children and adolescents, an annotated international bibliography" by UNESCO, media actors, and in particular a small number of negative characters, have a strong influence on human psychology: media actors influence the psychology, attitudes, behaviors, and other patterns of children and the young population most of all. According to this study, heroic characters are in general portrayed by movies and media as gangsters, especially ones who are very efficient in committing violence. The fact is that movies and media portray all of their deeds and attitudes positively; that is, a negative, violent attitude or behavior of the heroic character is justified by the portrayal of the director in the media. [2] Media actors influence the psychology, attitudes, behaviors, and other patterns of children and the young population. The portrayal of minorities as positive characters influences the psychology, attitude, and behavior of children and youths most strongly. The pattern of violent activities in a certain geographical region can be represented by its movie actors. A large portion of the total population is negatively affected by the justified and positively portrayed violent activities of movie actors. Since children and young people are most affected by media and movies, they will reflect these types of patterns in their future lives. [3] ## II Proposed Methodology Since crime patterns, as well as facial expressions and facial image properties, vary very dynamically with changing situation, region, time, crime type, and more, analyzing all of these is necessary in order to prevent criminal activities. As we know, the behavior and psychology of human beings in most regions or cultures are heavily influenced by their media and surroundings. 
Especially, the people of any given region try to follow the patterns they see or observe in media and news. Almost the entire population of a region commonly tries to copy the most common strategies seen in media and news. Media reaches most of the population, and the strategies people see there influence their minds toward executing various types of non-professional crimes in the same manner. Sometimes even characters who are portrayed positively while being racists, criminals, or role models become real role models for non-professional criminals. Their behavior, attitude, facial expressions, movements, and strategies are all copied by most non-professional criminals in almost all criminal activities. For these reasons the patterns of crimes become similar within a certain geographic area, and this makes it easier to detect criminal activities [4]. We identified several such categories of characters, like media actors, and they are:

* Media actors of a certain geographic region.
* Negative characters on television of a certain geographic region.
* Very professional role-model criminals of a certain geographic region.
* Criminal but public-hero type characters of a certain geographic region.
* Characters who become news headlines for their tactful criminal activities.
* Social media activities of the people of a certain geographic region.

If we analyze the facial expressions of the above-mentioned characters before they execute a crime, some similar expressions should be found in the facial expressions of non-professional criminals before they execute crimes in the same geographical region. But to gain a very accurate result from these types of data, analyzing more properties like body movement, voice, and other characteristics should bring better results. [5] ## III Collecting Images As mentioned above, at first we collected pictures which contain all the properties of the faces of people of those categories who are attempting to commit a crime. These are our target pictures, which we want to identify as faces of people attempting a crime. All of our collected photos are faces of criminals just before committing a crime or while planning a crime. These images are used as samples of human faces before committing a crime. Next, we collected some general facial photos of normal human faces from the same geographical areas, of people who are not suspected as criminals and have no intention to commit any crime. These photos included smiley faces, talking or communicating people, and people working, playing, sitting, gaming, and in other normal working-time captures. All of these photos were collected from various websites and image search engines. Because it is quite hard to find an image database which contains the facial expressions of various people just before committing a crime, we selected the faces of various actors acting in crime scenes in various TV shows or movies. We selected the best-matching photos of faces that represent pre-crime expressions well in various situations. We collected the face images from various directions with different light intensity levels for our experiments. We included the faces of humans in the age range 20-70 for both positive and negative sampling. We covered almost all categories of images for our experiment. We made two sets of images labeled as suspect and general face. Figures 1 and 2 represent some of our training data of facial images from both positive and negative samples [6]. 
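Before the model is described in detail in the next section, here is a minimal sketch of the pipeline just outlined, assuming a Keras implementation; the directory name, layer counts, filter sizes, and epoch count are hypothetical placeholders, not the exact configuration used in the experiments.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Two classes ("suspect" vs. "general face"), images resized to 100x100,
# 70/30 train/validation split. The "faces/" directory layout is assumed.
train = tf.keras.utils.image_dataset_from_directory(
    "faces/", validation_split=0.3, subset="training", seed=1,
    image_size=(100, 100), batch_size=32, label_mode="binary")
val = tf.keras.utils.image_dataset_from_directory(
    "faces/", validation_split=0.3, subset="validation", seed=1,
    image_size=(100, 100), batch_size=32, label_mode="binary")

model = models.Sequential([
    layers.Rescaling(1.0 / 255, input_shape=(100, 100, 3)),
    layers.RandomZoom(0.2),                    # light zoom augmentation
    layers.Conv2D(32, 3), layers.LeakyReLU(),  # LeakyReLU in the first layer
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),     # probability of "violent"
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(train, validation_data=val, epochs=20)
```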
## IV Description of Model and Parameters We used a **Convolutional Neural Network** model for prediction purposes. We plotted the performance of convolutional neural network models with different loss functions. Because of the lack of computational power and high-quality data within a short time, we used 650 images as training samples and 100 images as validation samples for the model. We made predictions on known data and obtained very accurate predictions in most cases. We evaluated the performance of the model in terms of accuracy and loss. We tried to find the optimal model configuration, such as the optimizer, loss function, number of hidden layers, fully connected layers, activation functions, etc., that can model our small dataset well.

* A low accuracy and a huge loss mean huge errors on a lot of data.
* A low accuracy but a low loss means small errors on a lot of data.
* Current situation: a great accuracy but a huge loss means huge errors on a few data.
* Objective situation: a great accuracy with a low loss means small errors on a few data (best case).

We classified the images into two classes using a convolutional neural network, but we designed the prediction procedure in a different way. Instead of assigning a class like violent and non-violent, we calculated the probability of an individual being violent. This measurement gives us accuracy of another dimension with which we can predict images more accurately. We calculated all the outcomes and determined the outcome based on this probability. Firstly, we set a scale for the violent attitude. This can be considered a situational approach for solving this problem. Since our data is not clean and truly accurate, we implemented this technique to gain accurate results [8]. We resized images to 100\(\times\)100 pixels before training. We used ReLU and LeakyReLU as activation functions for the first layer, ReLU for the fully connected layers, and sigmoid for the output layer. We used Adam and RMSProp as optimizers and selected the final one by evaluating the performance. Binary cross-entropy was used as the loss function and the evaluation metric was accuracy. Since the dataset is small and the images can be considered not well balanced and noisy, we focused on a model structure capable of modeling our image dataset and classifying it well. All the images were resized to the dimension 100\(\times\)100 pixels. Then we prepared our training and test datasets by splitting the data in the ratio of 70% for training and 30% for validation. We have visualized our model structure in Figure 3 [7].

Figure 3: CNN Model Architecture

### _ReLU Activation Function_ If \(x\) is the input, then we can define the ReLU activation function as \[R(x)=\begin{cases}x&\text{if }x>0\\ 0&\text{if }x\leq 0\end{cases}\] It avoids and rectifies the vanishing gradient problem. ReLU is less computationally expensive than tanh and sigmoid because it involves simpler mathematical operations. ### _Adam Optimizer_ Adaptive Moment Estimation (Adam) combines ideas from both RMSProp and Momentum. 
The mathematical expressions are: \[v_{dW}=\beta_{1}v_{dW}+(1-\beta_{1})\frac{\partial\mathcal{J}}{\partial W},\qquad s_{dW}=\beta_{2}s_{dW}+(1-\beta_{2})\left(\frac{\partial\mathcal{J}}{\partial W}\right)^{2}\] \[v_{dW}^{corrected}=\frac{v_{dW}}{1-(\beta_{1})^{t}},\qquad s_{dW}^{corrected}=\frac{s_{dW}}{1-(\beta_{2})^{t}},\] \[W=W-\alpha\frac{v_{dW}^{corrected}}{\sqrt{s_{dW}^{corrected}}+\varepsilon}\] where \(v_{dW}=\) the exponentially weighted average of past gradients, \(s_{dW}=\) the exponentially weighted average of past squares of gradients, \(\beta_{1},\beta_{2}=\) hyperparameters to be tuned, \(\frac{\partial\mathcal{J}}{\partial W}=\) the cost gradient with respect to the current layer, \(W=\) the weight matrix (the parameter to be updated), \(\alpha=\) the learning rate, and \(\varepsilon=\) a very small value to avoid division by zero. ### _RMSProp Optimizer_ Root Mean Square Prop (RMSProp) works by keeping an exponentially weighted average of the squares of past gradients. RMSProp then divides the learning rate by this average to speed up convergence. \[s_{dW}=\beta s_{dW}+(1-\beta)\left(\frac{\partial\mathcal{J}}{\partial W}\right)^{2}\] \[W=W-\alpha\frac{\partial\mathcal{J}/\partial W}{\sqrt{s_{dW}}+\varepsilon}\] where the meaning of the symbols is the same as for Adam [9]. ## V Training and Testing the Model We collected images and then preprocessed them accordingly. After that we trained 4 CNN models with different configurations to achieve the best accuracy with minimum loss. We fitted the dataset to the CNNs and plotted the accuracy and loss values with respect to the epochs. By evaluating those figures, we selected the best model for our dataset and made predictions on new data. Figures 4-11 show the accuracy and loss values of the CNN models with different optimizers, activations, preprocessing, etc. [10]. We trained 4 CNN models with our dataset using different activation functions and optimizers. We used LeakyReLU and ReLU in the hidden layers and sigmoid for the output layer. We trained models both after doing some image preprocessing and without any processing. We used two different optimizers: RMSProp and Adam. In the corresponding figures we can see that the RMSProp optimizer performed well during training: the accuracy was 100% and the loss function converged well. But when we look at the validation curve, we can see that there is a difference between the training performance and the validation performance, and the validation performance is not as satisfactory as the training performance. It indicates that the error rate during training is very low and the model was able to understand the underlying pattern of the data, but the model failed in the case of predicting new data. The same goes for the combination of Adam without any image preprocessing: Adam was able to optimize the loss function well while training on the dataset, but it failed to perform accordingly in the case of new data.

Figure 7: Model loss with ADAM optimizer

Figure 8: Model Accuracy with ADAM and LeakyRelu

Figure 9: Model loss with ADAM and LeakyRelu

Figure 10: Model Accuracy with random zooming images with ADAM optimizer

Figure 11: Model loss with random zooming images with ADAM optimizer

When we used images with a little preprocessing, zooming the images and then training using the Adam optimizer, we can see from the figures that the training accuracy fluctuated and the loss function value also did not converge very well, but the performance of training and validation was almost the same in this case. In this case the accuracy is the highest and the error is also the lowest. Specifically, the difference between the training performance and the validation performance is almost nil. It indicates that this model can predict new data. ## VI Discussion From the above figures and tables, it is clear that our proposed method is an effective and working method. We predicted on new data with the trained model with an accuracy of 90%. So there is no doubt that, in the case of data unavailability, such methods can handle and produce the data successfully. For better performance, more data should be collected, and more preprocessing may provide better accuracy. We predicted new images using our trained CNN model and the outcome is shown in Figure 13. From Figure 13, it is clear that our model successfully captured the underlying pattern of the data and classified well. From Figure 12 we can tell that the facial expressions that should represent violence are predicted as violent and normal facial expressions are predicted as normal. We predicted a total of 48 images (21 violent, 27 normal); 21 violent images were predicted as violent and 24 normal images were predicted as normal. A total of 4 normal images were predicted as violent, and our model predicted 4 violent facial images as normal. From Figure 12, the confusion matrix tells us the CNN model predicts well in both cases: it predicts the violent facial expressions very accurately, and the normal facial expressions too. Some normal facial expressions were predicted as violent because the model counts all the features, and sometimes in the same region some features are the same for most humans. In Figure 13, we tested some images with some people acting violent and others acting normal. The model predicted all the violent acts and normal acts correctly. ## VII Conclusion Here, we tried to develop a new strategy for solving problems where data is minimal or unavailable. We connected the problem with a socio-behavioral theory to generate the dataset. We found our approach effective in practice. Because of the lack of computational power, we were unable to perform the experiment with sufficient data. More data should be gathered and processed wisely to make a real application from this method. We modified the model to make the prediction more accurate, but more data can solve the task easily. Image-feature-based approaches can give better outcomes as well as make the process interpretable. 
\begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline **Model** & **Training Accuracy** & **Validation Accuracy** & **Training Loss** & **Validation Loss** & **Extra properties** \\ \hline Model 1 & 100\% & 100\% & 0.0014 & 0.1058 & \\ \hline Model 2 & 99.26\% & 86\% & 0.0092 & 0.8600 & \\ \hline Model 3 & 100\% & 86\% & 0.000002 & 0.8253 & LeakyReLU in 1st layer \\ \hline Model 4 & 100\% & 100\% & 9.52\(\times\)10\({}^{-4}\) & 0.0095 & Preprocessed images with a 0.2\% zooming \\ \hline \end{tabular} \end{table} Table 1: Model analysis

Figure 12: Confusion Matrix

Figure 13: Prediction on new images
2304.07866
A New Flexible Modified Impedance Network Converter
Among the popular impedance-network converters are Y-source converters, which, along with essential characteristics such as reduced size of converter components, single-stage power transfer, fault tolerance, and wide voltage gain capabilities, also have some drawbacks, one of the most widespread being high leakage inductance, which affects performance negatively. This paper introduces a new configuration based on coupled inductors as a power electronic converter, examined in three case studies to verify the high reliability of the proposed converter using just a simple controller, which is substantial for recycling braking energy in the propulsion systems of Electric Vehicles and in grid-following inverters. This topology, with a straightforward structure and computation and a wide voltage gain, provides a proper connection between the components of its network, which yields appropriate paths for the leakage energy and likewise helps soft switching in some conditions. Additionally, the performance of previously related structures is compared with the suggested topology. Simulations based on MATLAB/SIMULINK have been carried out, and experimental results have also been presented to substantiate the theoretical outcomes.
Shirin Besati, Somasundaram Essakiappan, Madhav Manjrekar
2023-04-16T19:16:21Z
http://arxiv.org/abs/2304.07866v2
# A New Flexible Modified Impedance Network Converter ###### Abstract Among the popular impedance-network converters are Y-source converters, which, along with essential characteristics such as reduced size of converter components, single-stage power transfer, fault tolerance, and wide voltage gain capabilities, also have some drawbacks, one of the most widespread being high leakage inductance, which affects performance negatively. This paper introduces a new configuration based on coupled inductors as a power electronic converter, examined in three case studies to verify the high reliability of the proposed converter using just a simple controller, which is substantial for recycling braking energy in the propulsion systems of Electric Vehicles and in grid-following inverters. This topology, with a straightforward structure and computation and a wide voltage gain, provides a proper connection between the components of its network, which yields appropriate paths for the leakage energy and likewise helps soft switching in some conditions. Additionally, the performance of previously related structures is compared with the suggested topology. Simulations based on MATLAB/SIMULINK have been carried out, and experimental results have also been presented to substantiate the theoretical outcomes. impedance-network converters, coupled inductors, Y-source converters, Electric Vehicles, grid-following inverters ## I Introduction Considering the growth in the usage and application of power electronic converters in Electric Vehicles (EV), robotics, energy storage, smart grid technology, islanded systems, and many other applications in industry, we are interested in optimizing the performance of power electronic converters [1-5]. To the best of our knowledge, the classical impedance-network converter (ZSC) was first demonstrated by F. Z. Peng in 2003 to overcome the limitations of conventional inverters: the Voltage Source Converter (VSC) and the Current Source Converter (CSC) [6]. Nowadays, a huge number of highly capable power converter topologies for power quality improvement applications have been presented. There are unique features of impedance-network topologies that are not seen in regular converters. The impedance network, with its buck/boost capability, provides single-stage DC-AC, AC-DC, DC-DC, and AC-AC conversions. In a ZSI, both power switches of a leg can be turned on simultaneously; therefore, dead time is not a concern anymore. This significantly improves reliability and reduces output waveform distortion. Consequently, the operation period of these converters, unlike conventional structures, consists of two parts: the active (non-shoot-through) state (NST) and the shoot-through state (ST) [6, 7]. Along with their prominent specifications, classical ZSCs also have their weaknesses, such as: (1) a basic ZSC uses two capacitors in its network and causes inrush current at startup; (2) it regulates the gain factor only by adjusting the shoot-through duty ratio. Numerous different impedance-network converters have been purposefully proposed to solve these disadvantages [8-12]. In this paper, two previous ZSCs, the improved Y-source and modified Y-source inverters (YZSI), are briefly introduced, and then they are compared with the proposed topology [13, 14]. In sum, the first topology, shown in Fig. 1, has been proposed with greater flexibility in the number of turns of the coupled inductors entering the boost factor (BF), in order to improve the voltage gain (it is considered BF = 2).

Fig. 1: Improved Y-source inverter. 
So, the boost factor changed from equation (1), which pertains to the classical Y-source converter (classical YZSI), to equation (2). As an upgrade of this converter, the modified YZSI keeps the same boost factor but adds two capacitors, one diode, and another inductor to the impedance network, creating additional pathways that divert the leakage energy and therefore reduce power losses. This topology also eliminates the high voltage and current stresses at start-up, which enhances efficiency and provides high reliability for the switches and the components within the network [14]. However, computing its boost factor is involved, and the larger number of network components of this modified converter raises the cost. Fig. 2 shows the structure of the modified YZSI, and its boost factor is given in equation (3). The main contributions of this paper are: (i) we design a new ZSC converter with distinctive features and compare it with the previous ones, (ii) we analytically compute the output/input voltage ratio (the boost factor, BF), (iii) we simulate the proposed converter's performance in SIMULINK/MATLAB, and (iv) we obtain experimental results and compare them with the simulation results. Fig. 1: Improved Y-source inverter. ## II Proposed Topology ### _A. Circuit Introduction_ The proposed topology is shown in Fig. 3; the number of impedance-network components equals that of the first structure and is smaller than that of the second. Here, a Y-source coupled inductor and a simple inductor are used to build the impedance network. The foremost objective of the new YZSI is to expand the boost factor through a simple calculation and to improve efficiency, while retaining the continuous input current and the other properties of an impedance-network converter. As mentioned before, the unique feature of this topology compared with the two previous converters is the particular design of its coupled inductors: it keeps the component count of the improved Y-source converter while creating proper pathways for the leakage energy, thereby increasing efficiency. In this paper, the voltage gain is taken to be 5. Another distinctive property of the topology is the presence of an inductor in each of the two branches connected to the switching bridge, which contributes to soft switching under inductive loads. ### _B. Computation Principle_ Fig. 4 shows the equivalent circuit of the proposed topology in the NST operating mode. Applying the volt-second balance principle, equations (4) and (5), yields equations (6) and (7) for the voltages of L\({}_{e}\) and L\({}_{1}\) in terms of the capacitor voltages and the DC input voltage. The same calculation is then applied to the ST-mode equivalent circuit of the impedance network in Fig. 5 to obtain the inductor voltages in that interval. As the current paths in the ST equivalent circuit show, the network inductors are charged by the discharging capacitors together with the DC input voltage, while diode D\({}_{1}\) is reverse-biased.
\[\int_{0}^{T}V_{L_{e}}\,\mathrm{d}t=0\;\Rightarrow\;\int_{0}^{T_{\mathrm{ST}}}V_{L_{e}}\,\mathrm{d}t+\int_{T_{\mathrm{ST}}}^{T}V_{L_{e}}\,\mathrm{d}t=0 \tag{4}\]
\[\int_{0}^{T}V_{L_{1}}\,\mathrm{d}t=0\;\Rightarrow\;\int_{0}^{T_{\mathrm{ST}}}V_{L_{1}}\,\mathrm{d}t+\int_{T_{\mathrm{ST}}}^{T}V_{L_{1}}\,\mathrm{d}t=0 \tag{5}\]
With K and P denoting the winding factors set by the turn ratios of the Y-source coupled inductor, the NST-mode inductor voltages are
\[V_{L_{e}}^{\mathrm{NST}}=\frac{1}{1+K}V_{\mathrm{DC}}-\frac{1}{1+K}V_{C2} \tag{6}\]
\[V_{L_{1}}^{\mathrm{NST}}=\frac{P-K}{1+K}V_{\mathrm{DC}}-\frac{P-K}{1+K}V_{C2}-V_{C1} \tag{7}\]
The voltages across C\({}_{1}\) and C\({}_{2}\) are obtained as follows,
\[V_{C1}=\frac{d(1+K)}{(1-d)(1+P)-d(1+K)}V_{\mathrm{DC}} \tag{8}\]
\[V_{C2}=\frac{(1-d)(1+P)}{(1-d)(1+P)-d(1+K)}V_{\mathrm{DC}} \tag{9}\]
Finally, substituting the capacitor voltages into the NST-mode relation gives the DC-link voltage and the boost factor:
\[V_{\mathrm{pn}}=V_{C1}+\frac{1+P}{1+K}V_{C2}-\frac{P-K}{1+K}V_{\mathrm{DC}} \tag{10}\]
\[V_{\mathrm{pn}}=\frac{1+P}{(1-d)(1+P)-d(1+K)}V_{\mathrm{DC}} \tag{11}\]
\[\mathrm{B}=\frac{1+P}{(1-d)(1+P)-d(1+K)} \tag{12}\]
Fig. 3: Configuration of the proposed topology. Fig. 4: Equivalent circuit of the proposed topology in NST mode. Fig. 5: Equivalent circuit of the proposed converter in ST mode. Likewise, the AC output voltage with modulation index M for the proposed topology is given by equation (13). As the boost-factor expression (12) shows, the turn ratios of the three coupled inductors enter the voltage-gain equation. Concisely, Table 1 compares the proposed structure with the previous Y-source converters.
\[V_{\mathrm{AC}}=\frac{M}{2}\,\frac{1+P}{(1-d)(1+P)-d(1+K)}\,V_{\mathrm{DC}} \tag{13}\]
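To make the reconstructed gain relations concrete, a minimal Python sketch evaluating equations (12) and (13) is given below; the winding-factor values K and P used here are illustrative assumptions, not design data taken from the prototype:

```python
# A minimal numeric sketch of the reconstructed gain relations (12)-(13).
# The winding-factor values K and P below are illustrative assumptions,
# not design data taken from the prototype.

def boost_factor(d, K, P):
    """Boost factor of Eq. (12): B = (1+P) / ((1-d)(1+P) - d(1+K))."""
    return (1 + P) / ((1 - d) * (1 + P) - d * (1 + K))

def v_ac_peak(v_dc, d, K, P, M):
    """Peak AC output voltage of Eq. (13): V_AC = (M/2) * B * V_DC."""
    return 0.5 * M * boost_factor(d, K, P) * v_dc

if __name__ == "__main__":
    d, K, P = 0.40, 3.0, 3.0   # 40% shoot-through duty, as in Table III
    # for equal winding factors (K = P) the gain reduces to 1/(1 - 2d) = 5
    print(f"B = {boost_factor(d, K, P):.2f}")                  # B = 5.00
    print(f"V_AC = {v_ac_peak(20.0, d, K, P, M=0.8):.1f} V")   # V_AC = 40.0 V
```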
## III Performance Evaluation In this section, the proposed converter is simulated in SIMULINK/MATLAB for three different loads, an induction motor, an induction generator, and a resistive load, using a simple unipolar pulse-width-modulation (PWM) controller. The three case studies verify the high reliability of this converter under a simple controller and validate the theoretical results. Finally, the proposed topology, with a 20 kHz switching frequency, was built as a laboratory prototype for DC-DC conversion. The simulation characteristics for the three cases are provided in TABLE 2. ### _A. Case 1: Induction motor_ Fig. 6 presents the simulation results for the first case study with an induction motor as an inductive load; the input DC voltage is 80 V, as shown in Fig. 6(a). According to Figs. 6(b, c), the waveforms show a continuous input current and the diode current, respectively, with inrush currents at the start caused by the inductors and capacitors within the network. Fig. 6(d) shows the inverter link voltage with BF\(=5\), and the switch current in Fig. 6(e) confirms soft-switching operation. Moreover, Fig. 6(f) shows the motor outputs with a positive torque. ### _B. Case 2: Induction generator_ Case 2 operates in the opposite power-flow direction (the bidirectional path): an induction generator charges a battery, with quantities exactly equal to those of case 1; the simulation results in this case match those of the first one. This similarity confirms the appropriate behavior and stability of the topology under different conditions and for different loads with only a simple control system. This aspect is important for recycling brake energy in EVs and for grid-following inverters. The outcomes of this case study are presented in Fig. 7. TABLE 1: Comparison of the proposed structure with the improved and modified Y-source converters. ### _C. Case 3: Resistive Load and Laboratory Results (DC-DC Converter)_ Since impedance-network converters can be used in all power-conversion modes, the proposed impedance network was also designed as a DC-DC converter with a resistive load. TABLE III and Fig. 8 give the component values and the experimental results of case study 3. Fig. 8(a) shows that the input DC voltage is 20 V; the objective is a DC output of 100 V, which implies a boost factor of 5, as designed.
Fig. 8(b) shows the continuous input current, and the remaining component waveforms are presented in Fig. 8(c, d). As can be seen in Fig. 8(e), a smooth voltage waveform appears across C\({}_{2}\), because the currents of both branches connected to this capacitor first pass through the inductors before entering C\({}_{2}\). The converter's efficiency is 98% in both the simulations and the experiment. ## IV Conclusion Using Y-source converters is one way to produce a flexible voltage gain, reduce the size of converter components, and achieve better short-circuit fault tolerance. These features are critical to operational performance and safety in the propulsion systems of EVs and in the integration of distributed energy storage, especially as regards battery safety. Fig. 7: Simulation results for the induction generator. Fig. 8: Experimental results of the proposed converter for the resistive load. \begin{table} \begin{tabular}{c c} _Parameter_ & _Value_ \\ \hline _Input voltage_ & 20 V \\ \hline _Output voltage_ & 100 V \\ \hline _L\({}_{e}\)_ & 330 mH \\ \hline _Turn ratio of Y-source (N\({}_{1}\):N\({}_{2}\):N\({}_{3}\))_ & 1:2:2 \\ \hline _C\({}_{1}\)_ & 220 \(\mu\)F \\ \hline _C\({}_{2}\)_ & 680 \(\mu\)F \\ \hline _R\({}_{Load}\)_ & 245 \(\Omega\) \\ \hline _f\({}_{switching}\)_ & 20 kHz \\ \hline _Shoot-through duty ratio d_ & 40\% \\ \end{tabular} \end{table} TABLE III: Parameters of the experimental sample. A novel topology with a low component count, a simple structure, and a high voltage-boost ratio was presented in this paper. Analytical and experimental results validate the performance of this converter for various types of loads, demonstrating its flexibility and resiliency: continuous input current, soft switching under inductive loads, and reduced startup inrush current.
2308.10771
Beating one bit of communication with and without quantum pseudo-telepathy
According to Bell's theorem, certain entangled states cannot be simulated classically using local hidden variables (LHV). But if we can augment LHV with classical communication, how many bits are needed to simulate them? There is strong evidence that a single bit of communication is powerful enough to simulate projective measurements on any two-qubit entangled state. In this study, we present Bell-like scenarios where bipartite correlations resulting from projective measurements on higher dimensional states cannot be simulated with a single bit of communication. These include a three-input, a four-input, a seven-input, and a 63-input bipartite Bell-like inequality with 80089, 64, 16, and 2 outputs, respectively. Two copies of emblematic Bell expressions, such as the Magic square pseudo-telepathy game, prove to be particularly powerful, requiring a $16\times 16$ state to beat the one-bit classical bound, and look like a promising candidate for implementation on an optical platform.
István Márton, Erika Bene, Péter Diviánszky, Tamás Vértesi
2023-08-21T15:01:27Z
http://arxiv.org/abs/2308.10771v1
# Beating one bit of communication with and without quantum pseudo-telepathy ###### Abstract According to Bell's theorem, certain entangled states cannot be simulated classically using local hidden variables (LHV). But if we can augment LHV with classical communication, how many bits are needed to simulate them? There is strong evidence that a single bit of communication is powerful enough to simulate projective measurements on any two-qubit entangled state. In this study, we present Bell-like scenarios where bipartite correlations resulting from projective measurements on higher dimensional states cannot be simulated with a single bit of communication. These include a three-input, a four-input, a seven-input, and a 63-input bipartite Bell-like inequality with 80089, 64, 16, and 2 outputs, respectively. Two copies of emblematic Bell expressions, such as the Magic square pseudo-telepathy game, prove to be particularly powerful, requiring a \(16\times 16\) state to beat the one-bit classical bound, and look like a promising candidate for implementation on an optical platform. ## I Introduction Certain multipartite quantum correlations cannot be simulated by local hidden variables (LHV), also known as shared random variables. This forms the core of Bell's theorem [1; 2]. When a quantum correlation cannot be simulated by LHV models, it is referred to as Bell nonlocal [3]. One question that arises is: which resources are needed on top of LHV to simulate quantum correlations? The most natural resource is LHV augmented by classical communication [4; 5; 6; 7; 8]. In particular, the question can be made quantitative: at least how many bits of classical communication are required to reproduce the Bell nonlocal correlations arising from any number of measurements on \(d\times d\) quantum states? It is worth noting that Bell nonlocal correlations have also found a crucial role in applied physics; e.g., they can be used for the device-independent certification of the correct functioning of quantum key distribution [9], random number generators [10] and other devices (see e.g. Ref. [11] for a thorough review of the field). However, there exist less strict frameworks that include partially characterized devices, and the classical communication costs of these protocols have also been studied (see, for example, references for bipartite systems [12; 13; 14] and for single systems [6; 15]). First, the communication cost of simulating maximally entangled states was addressed. Following initial results [5; 8], it has been proven that projective measurements on a two-qubit maximally entangled state can be simulated with LHV augmented by one bit of classical communication (let us call it the one-bit classical model) [16]. What if the qubits are partially entangled? Research has shown that projective measurements on all partially entangled two-qubit states can be classically simulated with at most two bits of communication [16]. However, Gisin has posed the question of whether one bit is sufficient [17]. This problem can be approached systematically, as the one-bit classical resources are contained within a Bell-type polytope [18; 19; 15]. However, the size of the one-bit classical polytope grows rapidly with the number of inputs and outputs. Currently, the largest completely characterized one-bit classical polytope has three measurements for one party, two measurements for the other party, and binary outputs [19]. No quantum violation has been found, even for three inputs per party on both sides [20].
Recently, the problem was approached from a different viewpoint, aiming to simulate two-qubit states with an arbitrary number of projective measurements. In particular, Renner and Quintino [21] devised a one-bit classical protocol that perfectly simulates projective measurements on weakly entangled two-qubit states. Sidajaya et al.'s [22] recent numerical study using neural networks has gathered strong evidence that projective measurements on all two-qubit states can be simulated with a one-bit classical model. The strategy in this study is to identify Bell-like inequalities that are satisfied by all LHV models supplemented with one bit of communication (i.e., one-bit classical models), and then search for a quantum violation of these inequalities. As the two-qubit scenario has been widely studied without violation of the one-bit classical model, here we turn to higher-dimensional bipartite systems. What are the prospects of solving this problem? On one hand, complexity arguments show that one bit of communication is not sufficient to simulate all bipartite quantum correlations classically [5]. On the other hand, it is an open problem to identify such Bell-type scenarios with a modest number of inputs and outputs (see, e.g., Sidajaya et al. [22]). For instance, it is known that correlations resulting from two-output measurements on arbitrary high-dimensional maximally entangled states can be simulated classically by using a mere two bits of communication [23]. In fact, this bound is tight, since there exist \(4\times 4\) dimensional quantum states that cannot be simulated with a single bit of communication. However, the proof involves an infinite number of measurement inputs [24]. This paper presents several examples that beat quantumly the one-bit classical bound with a finite and typically modest number of measurement inputs and outputs. Our tools are based on four bipartite Bell inequalities: the CHSH inequality [2], the Magic square game [25; 26; 27], the family of CGLMP inequalities [28], and Platonic Bell inequalities [29; 30; 31]. We use them as building blocks for our Bell-type constructions. Table 1 collects the Bell-type constructions of this paper. We present the setup involving the input cardinality (\(m_{A}\) and \(m_{B}\)), output cardinality (\(o_{A}\) and \(o_{B}\)), and the \(d\times d\) quantum state. Additionally, we provide the one-bit classical bound (\(L1bit\)), the quantum value (\(Q\)) of the Bell-type inequalities, and \(D_{P}=m_{A}m_{B}o_{A}o_{B}\). We use the latter as a measure of complexity of the given Bell-type construction. Note that \(D_{P}>24\) is necessary to surpass the one-bit classical bound quantumly. This is due to the fact that the non-trivial probability distributions that have at most dimension \(D_{P}=24\) are given by the scenarios \((2,3,2,2)\), \((3,2,2,2)\), \((2,2,3,2)\), and \((2,2,2,3)\), and all of them can be simulated classically with one bit of bidirectional communication (i.e. communication either from Alice to Bob or vice versa). It should be noted that the \(L1bit\) results presented in Table 1 are the result of rigorous computation except for the case of CHSH\({}^{\otimes 4}\), for which the \(L1bit\) bound is obtained from heuristics. As we see, all \(D_{P}\) values are much greater than 24, demonstrating the power of a single bit of classical communication, or alternatively, our lack of success in finding constructions that exceed the one-bit bound with lower complexity.
We propose it as an open problem to shrink the gap between 24 and 12544. Table 1 reveals that the smallest \(D_{P}\) is given by the \([\text{Magic}^{\otimes 2}]_{s}\) inequality. The inequality features seven inputs and sixteen outputs. We propose it as a candidate to experimentally violate the one-bit classical bound. If we aim to violate quantumly the fixed-directional (e.g., from Alice to Bob) one-bit bound, then the most suitable candidate appears to be the \([\text{Magic}^{\otimes 2}]_{a}\) inequality. The inequality comprises seven inputs on Alice's side and three inputs on Bob's side, with sixteen outputs per measurement on each side. Furthermore, we provide Bell-like inequalities augmented by \(c\) bits of one-way classical communication. These examples are based on multiple copies of the CHSH expression (as discussed in section II.3), as well as a truncated version of multicopy CGLMP\({}_{d}\) inequalities (as discussed in section IV.4). We examine the scaling of the input and output cardinality and the Hilbert space dimension to beat the one-way \(c\)-bit classical bound of the aforementioned inequalities. Crucially, we find a Bell inequality with \(c\) bits of one-way classical communication that has \(2^{c}+1\) inputs and can be violated with high-dimensional quantum systems. However, we were unable to provide an explicit lower bound on the dimension required to exceed the \(c\)-bit bound. Note that this is a minimal scenario with respect to the number of inputs, otherwise the inequality cannot be violated. The paper is structured as follows. Section II provides notation, defines the Bell-like scenario augmented by one bit and \(c\) bits of communication, and demonstrates the usefulness of multiple copies of the quantum CHSH game in this problem. Section III considers the double Magic square game and its different truncated versions. It also provides the one-bit classical bound of the associated Bell-type inequalities. In particular, this section shows a quantum violation of the one-bit classical bound of a 7-setting and 16-outcome Bell-type inequality. Section IV examines the CGLMP\({}_{d}\) inequalities and their truncated versions, and we demonstrate quantum violation of a three-input \(283^{2}\)-output Bell-like inequality with one bit of communication. Section V investigates a different construction based on the so-called Platonic Bell inequalities that belong to a class of correlation-type Bell inequalities. We compute the one-bit classical bound for such Bell-type inequalities with 63 inputs numerically and provide an analytical upper bound as well. We then find genuine quantum violation of this bound with \(8\times 8\) dimensional quantum states. Section VI discusses the results obtained and future research directions. 
\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Design & Section & \((m_{A},m_{B},o_{A},o_{B})\) & Direction & \(L1\)bit & \(Q\) & \(d\) & \(D_{P}\) \\ \hline \hline CHSH\({}^{\otimes 4}\) & II B & \((16,16,16,16)\) & bi & \(132^{*}\) & \(135.8822\) & \(16\) & \(65536\) \\ Magic\({}^{\otimes 2}\) & III A & \((9,9,16,16)\) & bi & \(75\) & \(81\) & \(16\) & \(20736\) \\ \([\text{Magic}^{\otimes 2}]_{s}\) & III B & \((7,7,16,16)\) & bi & \(48\) & \(49\) & \(16\) & \(12544\) \\ CGLMP\({}^{\otimes 2}_{8}\) & IV A & \((4,4,64,64)\) & bi & \(12\) & \(12.1230\) & \(64\) & \(65536\) \\ \([\text{CGLMP}^{\otimes 2}_{283}]_{s}\) & IV B & \((3,3,283^{2},283^{2})\) & bi & \(8\) & \(8.0002059\) & \(80089\) & \(240267^{2}\) \\ \([\text{Magic}^{\otimes 2}]_{a}\) & III C & \((7,3,16,16)\) & fixed & \(20\) & \(21\) & \(16\) & \(5376\) \\ \([\text{CGLMP}^{\otimes 2}_{38}]_{a}\) & IV C & \((3,2,38^{2},38^{2})\) & fixed & \(5\) & \(5.0005456\) & \(1444\) & \(12510816\) \\ \(\text{Plato}_{E7}\) & V & \((63,63,2,2)\) & bi & \(<565\) & \(567\) & \(8\) & \(15876\) \\ \hline \end{tabular} \end{table} Table 1: The term “Design” refers to the construction, denoted together with a section number. The scenario is denoted by \((m_{A},m_{B},o_{A},o_{B})\). “Direction” refers to the direction of the one-bit communication and can either be bi-directional or a fixed one-directional. In addition, the table shows for each construction the one-bit classical bound (\(L1\)bit), the quantum value \(Q\), the dimension \(d\) of the component space and the dimension \(D_{P}\) of the full probability space. The entry marked by (\({}^{*}\)) indicates that the one-bit bound comes from an extensive, though not rigorous, computation. All other entries are exact. ## II The Power of Multiple Copies of the CHSH Expression ### Notation and the one-bit Bell-type scenario In this subsection we establish notation and introduce the concept of Bell-like inequalities that are valid for all correlations that can be simulated classically with a single bit of communication. Consider a scenario where two non-communicating parties, named Alice and Bob, produce outcomes (alternatively, outputs) \(a\in\{0,\ldots,o_{A}-1\}\) and \(b\in\{0,\ldots,o_{B}-1\}\) for settings (alternatively, inputs) \(x\in\{0,\ldots,m_{A}-1\}\) and \(y\in\{0,\ldots,m_{B}-1\}\). In such a scenario, a generic bipartite Bell inequality can be expressed as \[\mathcal{B}=\sum_{a,b,x,y}S_{abxy}P(ab|xy)\leq L, \tag{1}\] where \(P(ab|xy)\) represents the conditional probabilities and we assume that \(S_{abxy}\geq 0\). Written in this form, a Bell inequality can also be viewed as a bipartite Bell nonlocal game [32]. What is the local bound \(L\), which appears on the right-hand side of Eq. (1)? It is the maximal value of the Bell expression \(\mathcal{B}\) when the probabilities \(P(ab|xy)\) admit an LHV model. In this case, \(P(ab|xy)\) can be explained using a common past history and local operations by Alice and Bob, and can be written as follows: \[P(ab|xy)=\int q(\lambda)P_{A}(a|x\lambda)P_{B}(b|y\lambda). \tag{2}\] Here \(\lambda\) represents a local variable, \(q(\lambda)\) denotes a probability distribution, and \(P_{A}\) and \(P_{B}\) refer to Alice's and Bob's respective marginals. Let us now allow one bit of classical communication, say from Alice to Bob, in addition to LHV operations. In that case, the protocol follows these steps. First, Alice and Bob receive their inputs \(x\) and \(y\). Then, Alice is allowed to send one bit of classical communication, \(l=0,1\), to Bob.
Afterward, Alice and Bob produce the respective outputs \(a\) and \(b\). In this way, Alice and Bob can simulate all \(P(ab|xy)\) that satisfy: \[P(ab|xy)=\int q(\lambda)P_{A}(a|x\lambda)P_{B}(b|yl\lambda), \tag{3}\] where the marginal of Bob (\(P_{B}\)) also depends on the value of the classical bit \(l=l(x,\lambda)\), \(l\) being either \(0\) or \(1\). The maximum on \(\mathcal{B}\) in Eq. (1) achieved by these strategies is referred to as \(L1\)bit. When the probabilities are obtained from quantum mechanics instead, the maximum value of \(\mathcal{B}\) in Eq. (1) is called the Tsirelson bound [33]. In such a case, \[P(ab|xy)=\mathrm{tr}\big{(}\rho A_{a|x}\otimes B_{b|y}\big{)}, \tag{4}\] where \(\rho\) is a density matrix on the space \(\mathbb{C}^{d}\otimes\mathbb{C}^{d}\), and \(A\) and \(B\) are \(d\)-dimensional projective matrices. These matrices add up to the identity, \(\sum_{a}A_{a|x}=\sum_{b}B_{b|y}=\mathds{1}_{d}\). Let us write the CHSH inequality [2] in the following form (see e.g. [34; 28]): \[\text{CHSH}= P(a=b|00)+P(a=b|01)\] \[+P(a=b|10)+P(a\neq b|11)\leq 3, \tag{5}\] where \(x,y\) and \(a,b\) are assumed to have values of \(0\) and \(1\). On the right-hand side of Eq. (5), \(L=3\) is the local bound, which can be attained by suitable local deterministic strategies. An appropriate strategy is for Alice to output \(a=1\) for \(x=0,1\), while Bob outputs \(b=1\) for \(y=0,1\). Thus, the correlations within the local set (2) can be expressed as \(P_{L}(ab|xy)=\delta_{a,1}\delta_{b,1}\) for any \(x,y\). On the other hand, \(L1\)bit(CHSH) \(=4\), which can be achieved when Alice sends \(l=x\) to Bob, with Alice outputting \(a=1\) for \(x=0,1\) and Bob outputting \(b=1\) for \(y=0\) and \(b=1-l\) for \(y=1\). It is worth noting that \(4\) is also the algebraic bound that can be achieved with \(P(ab|xy)\), solely respecting positivity. In the quantum case, using a two-qubit maximally entangled state and mutually unbiased measurements, the following statistics can be obtained by (4): \[P_{Q}(a,b|x,y)=\frac{1+(\sqrt{2}/2)(-1)^{a\oplus b}(-1)^{xy}}{4}. \tag{6}\] By substituting these values into the CHSH inequality (5), one obtains \(Q(\text{CHSH})=2+\sqrt{2}\). When given \(n\) instances of a Bell nonlocal game \(\mathcal{B}\), a straightforward way is to play them in parallel. For example, when presented with two copies (\(n=2\)) and \(\mathcal{B}=\text{CHSH}\), the resulting double CHSH expression [34] (see also Refs. [35; 36]) is: \[\text{CHSH}^{\otimes 2}=\text{CHSH}_{A,B}\otimes\text{CHSH}_{A^{\prime},B^{ \prime}}, \tag{7}\] where \(\text{CHSH}_{A,B}\) acts on the first copy, while \(\text{CHSH}_{A^{\prime},B^{\prime}}\) acts on the second copy of the space of input-output variables. The formula for \(n\geq 2\) copies is as follows \[\text{CHSH}^{\otimes n}=\bigotimes_{i=1}^{n}\text{CHSH}_{A^{(i)},B^{(i)}}. \tag{8}\] While computing the Tsirelson bound of \(\mathcal{B}^{\otimes n}\) for a generic \(\mathcal{B}\) is difficult, we can often obtain a good enough lower bound by playing each instance of \(\mathcal{B}\) independently with the optimal quantum strategy for the single copy case. In the case of \(n\) copies, we then have \[P(ab|xy)=\prod_{i=1}^{n}P(a_{i}b_{i}|x_{i}y_{i}). \tag{9}\] In general, for the quantum maximum of the \(n\)-copy Bell functional \(\mathcal{B}\) we only obtain a lower bound: \(Q(\mathcal{B}^{\otimes n})\geq Q(\mathcal{B})^{n}\). 
However, in the particular case of \(n\)-copy CHSH, the Tsirelson bound saturates the lower bound [37], and we have \[Q(\text{CHSH}^{\otimes n})=(2+\sqrt{2})^{n}. \tag{10}\] What is the local bound of CHSH\({}^{\otimes n}\)? One obvious lower bound is \(L\geq 3^{n}\), which can be achieved with independent classical deterministic strategies between the copies. However, exploiting joint strategies enables better performance. In such a case, Alice's output \(a_{i}\) depends not only on input \(x_{i}\), but also on the inputs \(x_{i^{\prime}}\), where \(i^{\prime}\neq i\). For two and three copies, we respectively have the bounds \(L(\text{CHSH}^{\otimes 2})=10\) and \(L(\text{CHSH}^{\otimes 3})=31\), which were obtained independently by S. Aaronson and B. Toner. However, only empirical values can be found in the literature for \(n>3\)[38]. On the other hand, the following upper bound was found in 2014 by A. Ambainis [39], building upon Ref. [40]: \[L(\text{CHSH}^{\otimes n})\leq(1+\sqrt{5})^{n}, \tag{11}\] which holds for any \(n\geq 1\). ### One-bit classical bound for the multi-copy CHSH scenario We show analytically that a certain number of copies \(n\) exists for which quantum correlations exceed the one-bit classical bound, \(L1\text{bit}\). Two ingredients are required for the proof. The first one is the bound in Eq. (11). The second one is the relation \[L1\text{bit}(\mathcal{B})\leq 2L(\mathcal{B}), \tag{12}\] which holds true for any bipartite Bell expression \(\mathcal{B}\). By choosing \(\mathcal{B}=\text{CHSH}^{\otimes n}\) and combining the two relations (11) and (12), the following upper bound is reached: \[L1\text{bit}(\text{CHSH}^{\otimes n})\leq 2(1+\sqrt{5})^{n}. \tag{13}\] On the other hand, the Tsirelson bound of the \(n\)-copy CHSH expression is given by Eq. (10). Applying Eq. (13) results in the value \(n=13\) at which the quantum value \(Q\) exceeds the one-bit bound \(L1\text{bit}\). This calculation relied on analytical upper bounds. However, is the value \(n=13\) tight? We used heuristic search to compute \(L1\text{bit}(\text{CHSH}^{\otimes n})\) for small \(n\) and we found that \(n=4\) is the critical value at which \(L1\text{bit}(\text{CHSH}^{\otimes n})<Q(\text{CHSH}^{\otimes n})\). In this Bell-type scenario, each party has \(m=2^{n}=16\) measurement inputs, \(o=16\) measurement outputs, and \((16\times 16)\)-dimensional states. See Table 2. The entries without stars in the table are obtained through exact enumeration, while those marked with stars (\({}^{*}\)) are based on heuristics. The heuristic for the one-bit classical bound is a modified version of the see-saw procedure, where the iteration is also performed over the optimal strategies for the \(l(x)=0,1\) bit message, as used e.g. in Refs. [38; 41]. Note that, since the CHSH\({}^{\otimes n}\) expression is symmetric for party exchange, all the above findings apply to the bidirectional case.
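Before generalizing to \(c\) bits, the exact small-\(n\) entries of Table 2 can be reproduced by brute force. The following self-contained Python sketch (our own minimal illustration, not the authors' branch-and-bound or see-saw code) enumerates all deterministic local strategies of Eq. (2) and all deterministic one-bit protocols of Eq. (3) for \(n=1,2\):

```python
import itertools
import numpy as np

# Coefficients S[a, b, x, y] of the CHSH game, Eq. (5): one point when
# a = b, except for (x, y) = (1, 1), where a != b scores.
S1 = np.zeros((2, 2, 2, 2), dtype=int)
for a, b, x, y in itertools.product(range(2), repeat=4):
    S1[a, b, x, y] = int((a == b) != (x == 1 and y == 1))

# Two parallel copies: outputs, inputs and coefficients are tensored.
S2 = np.einsum('abxy,cdzw->acbdxzyw', S1, S1).reshape(4, 4, 4, 4)

def local_bound(S):
    oA, oB, mA, mB = S.shape
    # for each deterministic Alice strategy a(x), Bob best-responds per y
    return max(sum(max(sum(S[aa[x], b, x, y] for x in range(mA))
                       for b in range(oB)) for y in range(mB))
               for aa in itertools.product(range(oA), repeat=mA))

def one_bit_bound(S):
    oA, oB, mA, mB = S.shape
    best = 0
    for aa in itertools.product(range(oA), repeat=mA):      # outputs a(x)
        for ll in itertools.product(range(2), repeat=mA):   # message l(x)
            best = max(best, sum(max(sum(S[aa[x], b, x, y]
                                         for x in range(mA) if ll[x] == msg)
                                     for b in range(oB))
                                 for y in range(mB) for msg in range(2)))
    return best

print(local_bound(S1), one_bit_bound(S1))   # 3 4   (n = 1 row of Table 2)
print(local_bound(S2), one_bit_bound(S2))   # 10 16 (n = 2 row of Table 2)
```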
### One-way \(c\)-bit classical bound for the \(n\)-copy CHSH scenario We generalize the one-bit classical result from Sec. II.2 to the exchange of \(l=(2^{c})\)-level classical messages, where \(c\) is the number of bits. It should be noted that we are considering only a fixed amount of one-way classical communication between the two parties, but our findings can be extended to a fixed amount of two-way communication as well. The set of possible classical protocols that use at most \(c\) bits of communication is the subject of the field of communication complexity [42]. We inquire about the number \(n\) of copies of the CHSH expression required to surpass the one-way \(c\)-bit classical bound with the quantum value (10). In particular, we give an upper bound on the setup parameters, including the number of inputs \(m\), outputs \(o\), and the dimension \(d\) per party, needed to beat the \(c\)-bit classical bound. Let us extend the results obtained in section II.2 on one bit to \(c\) bits. Firstly, \[Lc\text{bit}(\mathcal{B})\leq 2^{c}L(\mathcal{B}) \tag{14}\] and, specifically for \(\mathcal{B}=\text{CHSH}^{\otimes n}\), applying the upper bound (11) for a given \(c\) and \(n\), we obtain \[Lc\text{bit}(\text{CHSH}^{\otimes n})\leq 2^{c}(1+\sqrt{5})^{n}. \tag{15}\] Furthermore, given our condition to exceed the \(Lc\)bit bound, \[2^{c}(1+\sqrt{5})^{n}<(2+\sqrt{2})^{n}, \tag{16}\] we arrive at the following upper bound for the critical value \(n_{\text{crit}}(c)\): \[n_{\text{crit}}(c)\leq 13\times 2^{c}. \tag{17}\] Since for the \(n\)-copy CHSH expression the number of inputs, outputs, and the dimensionality of the component space are the same (i.e., \(2^{n}\)), we obtain the following upper bounds: \[m_{\text{crit}}=o_{\text{crit}}=d_{\text{crit}}\leq 2^{13\times 2^{c}}. \tag{18}\] \begin{table} \begin{tabular}{|c|r|r|r|} \hline \(n\) & \(L(\text{CHSH}^{\otimes n})\) & \(L1\text{bit}(\text{CHSH}^{\otimes n})\) & \(Q(\text{CHSH}^{\otimes n})\) \\ \hline \hline 1 & 3 & 4 & 3.41421 \\ 2 & 10 & 16 & 11.65685 \\ 3 & 31 & 40 & 39.79898 \\ 4 & \(100^{*}\) & \(132^{*}\) & 135.88225 \\ 5 & \(310^{*}\) & \(408^{*}\) & 463.93102 \\ 6 & \(1000^{*}\) & \(1332^{*}\) & 1583.95959 \\ \hline \hline \end{tabular} \end{table} Table 2: The table displays the local bound \(L\), the one-bit bound \(L1\text{bit}\), and the quantum value \(Q\) of the \(n\)-copy CHSH inequality up to \(n=6\). The computation of the one-bit bounds is due to this work. Araújo et al. [38] computed the local bound for \(n>3\). All the entries marked with (\({}^{*}\)) are based on heuristics. The quantum value \(Q\) is given analytically by the formula \((2+\sqrt{2})^{n}\) shown in Eq. (10). Note that a lower bound of \(m_{\rm crit}(c)>2^{c}\) follows from Alice simply communicating her own input to Bob using an \(l=(2^{c})\)-level classical message. Regarding the one-bit classical bound, we ask about the possibility of finding more economical Bell-like inequalities augmented by one bit of communication that can be violated quantumly with fewer inputs, outputs, and dimensions. In the next sections, we will improve on the \(n\)-copy CHSH inequality in all the aforementioned bounds (18). Nonetheless, we cannot improve on all three parameters simultaneously. Still, is it possible to find tighter upper bounds on \(m_{\rm crit}(c)\), \(o_{\rm crit}(c)\) or \(d_{\rm crit}(c)\) for \(c>1\) by using other Bell-like inequalities? We shall present an optimal solution for \(m_{\rm crit}(c)\) based on a truncated version of \(2^{c}\) copies of the CGLMP\({}_{d}\) inequality. The lower bound \(m_{\rm crit}(c)>2^{c}\) turns out to be tight and can be attained by using a (possibly huge) \(D\times D\) quantum state, where \(D=d^{2^{c}}\) and \(d\) is large enough (see section IV.4 for the details).
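Condition (16) itself takes only a few lines to check numerically. The following sketch finds, for each \(c\), the smallest \(n\) at which the quantum value (10) exceeds the analytic upper bound (15); for \(c=1\) it recovers \(n=13\), and the values it returns are consistent with (in fact smaller than) the bound (17):

```python
import math

# Smallest n satisfying condition (16): (2 + sqrt(2))^n > 2^c (1 + sqrt(5))^n.
q, l = 2 + math.sqrt(2), 1 + math.sqrt(5)
for c in range(1, 5):
    n = 1
    while q ** n <= 2 ** c * l ** n:
        n += 1
    print(c, n)   # prints 13, 26, 39, 52 for c = 1..4, within the bound (17)
```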
## III Two copies of the magic square and pseudo-telepatthy games ### One-bit bound on the double Magic square game First let us give a brief description of the single-copy Magic square game [25; 26; 27]. This is a nonlocal game for which the Tsirelson bound achieves the algebraic maximum. This game is within the class of quantum pseudo-telepathy games [43]. Each party has three inputs and four outputs. The Bell functional "Magic" with the local bound of 8 is written as follows: \[{\rm Magic}=\sum_{x,y=0,1,2}P(a_{y}=b_{x}|xy)\leq 8, \tag{19}\] where the parties produce three bits each, which are represented as \(a=(a_{0}a_{1}a_{2})\) and \(b=(b_{0}b_{1}b_{2})\). Additionally, it is assumed that the following conditions hold true: \(a_{0}\oplus a_{1}\oplus a_{2}=0\,{\rm mod}\,2\) and \(b_{0}\oplus b_{1}\oplus b_{2}=1\,{\rm mod}\,2\). Due to this parity constraint, the third output becomes unnecessary and each party only requires four outputs: \(a=a_{0}a_{1}\in\{00,01,10,11\}\) and similarly \(b=b_{0}b_{1}\in\{00,01,10,11\}\). In the game terminology, the existence of a winning quantum strategy means that quantum physics violates this inequality up to the maximal algebraic value of 9. This violation can be obtained with a \(4\times 4\) dimensional maximally entangled state. It is known that \(L({\rm Magic})=8\), \(L1{\rm bit}({\rm Magic})=Q({\rm Magic})=9\) (see Ref. [44]). Therefore, there is no advantage of using quantum strategies over the optimal one-bit classical protocol. Table 3 summarizes the different bounds for the \(n\)-qubit Magic square game up to \(n=3\). As we can see, two copies are sufficient to beat the one-bit bound (\(L1{\rm bit}({\rm Magic}^{\otimes 2}=75)\)) quantumly (\(Q({\rm Magic}^{\otimes 2}=81)\)). The value of 75 has been verified using the branch-and-bound algorithm (see Ref. [41]) adapted to the one-bit problem. We implemented this algorithm, which gives the exact one-bit bound, using CPU parallel computation, and reproduced the value of 75 in 11 seconds on our workstation. However, this value can also be proved using analytical arguments. Below, we only prove that \(L1{\rm bit}({\rm Magic}^{\otimes 2})\leq 80\), which is strictly less than the algebraic bound of 81. Proof.: Denote the set of nine inputs on Alice's side as \(X(x,x^{\prime})=\{0,1,2\}^{2}\), and use the same set of inputs on Bob's side \(Y(y,y^{\prime})=\{0,1,2\}^{2}\). In order to prove that the one-bit classical bound for \({\rm Magic}^{\otimes 2}\) is less than the algebraic bound, we make use of the definition (3) for the one-bit classical set. We have to consider all possible bipartitions of \(X\) according to the classical message \(l=0,1\), add up the local bound for each partition, and choose the partition with the greatest sum to obtain the \(L1{\rm bit}\) bound. Since we are only concerned whether \(L1{\rm bit}\) can attain the algebraic maximum or not, it is enough to show that the local bound of one of the partitions cannot attain the algebraic maximum that corresponds to that particular partition. In this context, it is useful to introduce a coarse-graining of the joint probability distribution \(P(aa^{\prime}bb^{\prime}|xx^{\prime}yy^{\prime})\) for a given \(x^{\prime}\in\{0,1,2\}\) on Alice's side and \(y^{\prime}\in\{0,1,2\}\) on Bob's side: \[P(ab|xy)=\sum_{a^{\prime},b^{\prime}}P(aa^{\prime}bb^{\prime}|xx^{\prime}yy^{ \prime}), \tag{20}\] where summation is over all \(a^{\prime}=\{0,1\}^{2}\) and \(b^{\prime}=\{0,1\}^{2}\) outputs. 
Let us observe that a probability distribution \(P(ab|xy)\) in Eq. (20) that corresponds to the algebraic maximum of a single-copy pseudo-telepathy game does not allow the original two-copy distribution \(P(aa^{\prime}bb^{\prime}|xx^{\prime}yy^{\prime})\) to be achieved via a local strategy (2). Otherwise, it would be possible to obtain a nonlocal distribution from local operations, which would contradict (2). \begin{table} \begin{tabular}{|l|l|l|l|} \hline \(n\) & \(L({\rm Magic}^{\otimes n})\) & \(L1{\rm bit}({\rm Magic}^{\otimes n})\) & \(Q({\rm Magic}^{\otimes n})\) \\ \hline \hline 1 & 8 [27] & 9 [44] & 9 [27] \\ 2 & 66 [38] & 75 & 81 \\ 3 & 528\({}^{*}\)[38] & 621\({}^{*}\) & 729 \\ \hline \hline \end{tabular} \end{table} Table 3: The table lists the local bound \(L\), the one-bit bound \(L1{\rm bit}\), and the quantum value \(Q\) for the \(n\)-copy Magic square Bell-type inequality for values up to \(n=3\). The computation of the local bound is noted in the references, whereas the computation of the one-bit bound for \(n=2,3\) is due to the present work. Entries for \(L\) and \(L1{\rm bit}\) without stars are calculated by exact enumeration, whereas entries with a star (\({}^{*}\)) are based on heuristics. The quantum value \(Q=9^{n}\) defines a lower bound to the Tsirelson bound for any value of \(n\). Let us divide the set \(X\) into two arbitrary subsets named \(X_{1}\) and \(X_{2}\), and let us use the following notation: \[X_{R} =\{(x_{0},0),(x_{1},1),(x_{2},2)\},\] \[X_{L} =\{(0,x_{0}^{\prime}),(1,x_{1}^{\prime}),(2,x_{2}^{\prime})\}, \tag{21}\] where \(x_{i}\) and \(x_{i}^{\prime}\) can take values in \(\{0,1,2\}\). For any such grouping of \(X\) into \(X_{1}\) and \(X_{2}\), one of them will contain either \(X_{L}\) or \(X_{R}\), or both (a general proof of this pigeonhole-type fact is given in Sec. III.4). It is worth noting that Bob's nine-input set \(Y(y,y^{\prime})=\{0,1,2\}^{2}\) has not been partitioned. Consequently, if either subset \(X_{1}\) or \(X_{2}\) contains \(X_{L}\) (\(X_{R}\)), the coarse-grained distribution \(P(ab|xy)\) on the first copy (\(P(a^{\prime}b^{\prime}|x^{\prime}y^{\prime})\) on the second copy) will correspond to the algebraic maximum of the Magic square game. Since the probability distribution corresponding to \(Q(\text{Magic})=9\) cannot be obtained with local strategies (as \(L(\text{Magic})=8\)), an upper bound of \(L1\text{bit}(\text{Magic}^{\otimes 2})<81\) is established. Note that the one-bit classical bound can only take integer values, which improves the upper bound to 80. \(\blacksquare\) Using more detailed analytical arguments, it is possible to show that \(L1\text{bit}(\text{Magic}^{\otimes 2})\leq 75\). As this bound can be attained with a specific one-bit strategy, it is tight, meaning \(L1\text{bit}(\text{Magic}^{\otimes 2})=75\).
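Before moving on to truncated versions, the pseudo-telepathy property itself is easy to verify numerically. The following numpy sketch checks one standard sign convention of the Mermin-Peres operator square (our own illustrative choice; the references cited above use equivalent variants): the three observables in each row commute and multiply to \(+I\), while those in each column commute and multiply to \(-I\). Measuring rows therefore gives Alice even-parity outputs and measuring columns gives Bob odd-parity outputs, with guaranteed agreement at the intersection when two maximally entangled qubit pairs are shared:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
kron = np.kron

# One sign convention of the Mermin-Peres magic square (assumed here):
# every row multiplies to +I (Alice's even parity), every column to -I
# (Bob's odd parity).
M = [[-kron(Z, I2), -kron(I2, Z), kron(Z, Z)],
     [ kron(I2, X),  kron(X, I2), kron(X, X)],
     [ kron(Z, X),   kron(X, Z),  kron(Y, Y)]]

Id4 = np.eye(4)
for r in range(3):                                    # row products -> +I
    assert np.allclose(M[r][0] @ M[r][1] @ M[r][2], Id4)
for c in range(3):                                    # column products -> -I
    assert np.allclose(M[0][c] @ M[1][c] @ M[2][c], -Id4)
# each row and each column consists of mutually commuting observables,
# so Alice (rows) and Bob (columns) can measure them jointly
for line in M + [list(col) for col in zip(*M)]:
    for i in range(3):
        for j in range(i + 1, 3):
            assert np.allclose(line[i] @ line[j], line[j] @ line[i])
print("Mermin-Peres square verified: Q(Magic) = 9 is attainable")
```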
### One-bit classical bound for the truncated double Magic square game The Magic\({}^{\otimes 2}\) inequality consists of 9 inputs and 16 outputs. Let us now remove two inputs from the input set \(\{0,1,2\}^{2}\). Specifically, we remove \(\{21,22\}\) on both sides. By doing so, we obtain a Bell inequality with 7 inputs and 16 outputs, which we call \([\text{Magic}^{\otimes 2}]_{s}\). This game still remains a pseudo-telepathy game, since the algebraic bound matches the quantum bound, \(Q([\text{Magic}^{\otimes 2}]_{s})=49\), while the local bound is \(L([\text{Magic}^{\otimes 2}]_{s})=44\), obtained by an exhaustive search over all deterministic strategies. However, \(L1\text{bit}([\text{Magic}^{\otimes 2}]_{s})=48\). The proof is analogous to that of the double Magic square game in Sec. III.1. We observe that for any partitioning of the 7-element set \(X(x,x^{\prime})=\{00,01,02,10,11,12,20\}\) into two disjoint subsets \(X_{1}\) and \(X_{2}\), one of the subsets will necessarily contain either \(X_{L}\) or \(X_{R}\). The remaining part of the proof is similar to that of the double Magic square game. Notice that since the Bell functional \([\text{Magic}^{\otimes 2}]_{s}\) is symmetric with respect to party exchange, the quantum value of 49 exceeds the bidirectional one-bit classical bound. ### Fixed directional one-bit classical bound for an asymmetrically truncated double Magic square game Let us now consider the case when the communication direction is fixed and Alice is allowed to communicate one bit to Bob. Once again, we begin with the double Magic square game, but this time the inputs on Alice's and Bob's respective sides are as follows \[X(x,x^{\prime}) =\{00,01,02,10,11,12,20\},\] \[Y(y,y^{\prime}) =\{00,11,22\}. \tag{22}\] This provides us with a Bell functional having seven inputs on Alice's side and three inputs on Bob's side, with 16 outputs per measurement on both sides. The Bell functional is denoted as \([\text{Magic}^{\otimes 2}]_{a}\). In this case, the proof follows the same line of reasoning as that for the one-bit classical bound of \([\text{Magic}^{\otimes 2}]_{s}\) in Sec. III.2. Here, we give the different bounds for the expression \([\text{Magic}^{\otimes 2}]_{a}\), namely, \(L1\text{bit}=20\) (where communication is fixed directional), \(Q=21\) and \(L=18\). Note that a lower bound to the input cardinality in the fixed-directional one-bit scenario involves three inputs on Alice's side and two inputs on Bob's side. The question that arises is whether such a Bell-like inequality with quantum violation exists. The answer is affirmative. For this purpose, in Section IV, we will recap another family of bipartite Bell inequalities known as the CGLMP\({}_{d}\) family [28]. ### One-bit classical bound for two copies of generic pseudo-telepathy games In section III.1, we observed that by taking a pseudo-telepathy game, specifically the Magic square game, and using two copies of it, we can beat the one-bit classical bound with quantum resources. We ask whether this is a generic property of bipartite pseudo-telepathy games. This turns out to be the case. This is because for any bipartition \(X_{1}\) and \(X_{2}\) of Alice's set, \(X(x,x^{\prime})=\{0,...,m_{A}-1\}^{2}\), one of the parts will contain \(X_{L}=\{(0,x_{0}^{\prime}),...,(m_{A}-1,x_{m_{A}-1}^{\prime})\}\) or \(X_{R}=\{(x_{0},0),...,(x_{m_{A}-1},m_{A}-1)\}\), or even both, which can subsequently be coarse-grained to the original single-copy pseudo-telepathy game. This property can be shown as follows. Proof.: Let us consider the part \(X_{1}\). There are two options: either it includes a set of the form \(X_{L}\) or it does not. If it does, we have completed the proof. Assume then that \(X_{1}\) does not include any \(X_{L}\). In this case, there is a value \(e\in\{0,...,m_{A}-1\}\) that does not appear as a first-copy input \(x\) in \(X_{1}\). Therefore, \(X_{2}\) must contain all inputs with first-copy value \(e\); hence, all the pairs \(\{(e,0),(e,1),...,(e,m_{A}-1)\}\) are included in \(X_{2}\). Consequently, \(X_{2}\) contains an \(X_{R}\) with \(x_{i}=e\) for all \(i\). \(\blacksquare\) What is known about two-party pseudo-telepathy games? A number of them have been discussed in the literature [32; 43].
According to the above proof, the one-bit classical bound can also be violated quantumly with any two-copy pseudo-telepathy game. Let us examine the Impossible coloring game [45; 46] (for a modern formulation, see Ref. [32]). In this game, Alice has three outputs (\(o_{A}=3\)) and Bob has two outputs (\(o_{B}=2\)). The shared state is a \(3\times 3\) maximally entangled state. If two copies of the Impossible coloring game are played, the output cardinalities are squared, resulting in \(o_{A}=9\), \(o_{B}=4\), and a state space of \(9\times 9\). Note that this inequality is not symmetric for party exchange. Since pseudo-telepathy is a property which is symmetric for party exchange, the one-bit classical bound for the double Impossible coloring game is violated quantumly in both directions. Although the dimensionality of the problem (i.e., \(d=9\)) is less than that of the double Magic square game (which is \(d=16\)), it requires many more inputs. It is worth noting that this is the smallest output cardinality and dimensionality of the state space that a two-copy bipartite pseudo-telepathy game can have [32]. It should also be noted that there is a correlation Bell inequality with only one bit of communication which is more economical in terms of dimensionality. In Sec. V, we will present the construction for \(d=8\) with 63 inputs and two outputs per party. There is also a construction in the literature, based on correlation Bell inequalities for \(4\times 4\) systems, which uses an infinite number of inputs [24]. ## IV Multiple copies of the CGLMP inequality In the following subsections, we aim to violate quantumly the one-bit and \(c\)-bit bounds with minimum input cardinality using two or more copies of the family of CGLMP inequalities. ### One-bit bound for the double CGLMP inequalities The family of CGLMP inequalities, introduced in Ref. [28], forms a one-parameter family of bipartite Bell inequalities. Each party has two inputs, labelled 0 and 1, and \(d\geq 2\) outputs per party, labelled from 0 to \(d-1\). Just like the CHSH and Magic square games, we will use them as basic building blocks of multi-copy inequalities. For \(d=2\), they reduce to the CHSH inequality (in an equivalent form), and they are tight for all \(d\)[47]. For \(d=3\), the original three-output CGLMP inequality is recovered [48]. We use the form of the inequalities presented in Ref. [49]. With a slight modification to the notation of Ref. [49], we have \[\text{CGLMP}_{d}= P(A_{0}\geq B_{0})+P(A_{0}\leq B_{1})\] \[+P(A_{1}<B_{0})+P(A_{1}\geq B_{1})\leq 3, \tag{23}\] where, e.g., \(P(A_{x}\leq B_{y})=\sum_{a\leq b}P(ab|xy)\) (and analogously for the other order relations), and \(L(\text{CGLMP}_{d})=3\) for any \(d\geq 2\). Since all coefficients in the inequality (23) are positive, this form of CGLMP\({}_{d}\) can be interpreted as a Bell nonlocal game. The quantum value \(Q(\text{CGLMP}_{d})\) up to \(d=10^{6}\) has been computed by Zohren and Gill [50]; it is believed to be the maximum quantum value, that is, the Tsirelson bound of the inequality, for any \(d\). The conjectured optimal bipartite quantum state for CGLMP\({}_{d}\) has dimensions \(d\times d\). In the second column of Table 4, we reproduce the quantum values up to \(d=10\). For any \(d\geq 2\), the algebraic maximum of the single-copy inequality is 4 and the local bound is 3. It has been proven in Ref. [50] that \(Q(\text{CGLMP}_{d})\) tends to 4 as \(d\rightarrow\infty\).
As a result, in the limit of \(d\rightarrow\infty\), it provides us with a pseudo-telepathy game. However, as a two-input Bell inequality, its one-bit bound always saturates the algebraic bound, meaning \(L1\text{bit}(\text{CGLMP}_{d})=4\): Alice can simply communicate her input to Bob. This value is greater than \(Q(\text{CGLMP}_{d})\) for any finite \(d\geq 2\). Let us now consider playing two or more instances of the CGLMP\({}_{d}\) game in parallel. Table 4 (third column) presents the quantum values for the double CGLMP inequalities, where the lower bound \(Q(\text{CGLMP}_{d}^{\otimes 2})=Q(\text{CGLMP}_{d})^{2}\) is used. When \(d=2\), this formula gives the exact Tsirelson bound, since CGLMP\({}_{2}\) is equivalent to CHSH, which is an XOR game [37]. The one-bit bound \(L1\text{bit}(\text{CGLMP}_{d}^{\otimes 2})=12\) in the last column is verified by the branch-and-bound algorithm up to \(d=10\). We conjecture that this is the exact bound for any \(d\geq 2\). We also computed the local bound up to \(d=10\) and obtained \(L(\text{CGLMP}_{d}^{\otimes 2})=10\). According to the results in Table 4, \(Q(\text{CGLMP}_{8}^{\otimes 2})>L1\text{bit}(\text{CGLMP}_{8}^{\otimes 2})\); therefore, we have an example of a four-input, 64-output Bell-like inequality with one bit of communication that can be violated with a \(64\times 64\) quantum state. \begin{table} \begin{tabular}{|c|r|r|r|} \hline \(d\) & \(Q(\text{CGLMP}_{d})\) & \(Q(\text{CGLMP}_{d}^{\otimes 2})\) & \(L1\text{bit}(\text{CGLMP}_{d}^{\otimes 2})\) \\ \hline \hline 2 & 3.2071 & 10.2855 & 12 \\ 3 & 3.3050 & 10.9227 & 12 \\ 4 & 3.3648 & 11.3216 & 12 \\ 5 & 3.4063 & 11.6028 & 12 \\ 6 & 3.4374 & 11.8155 & 12 \\ 7 & 3.4618 & 11.9844 & 12 \\ 8 & 3.4818 & 12.1230 & 12 \\ 9 & 3.4985 & 12.2397 & 12 \\ 10 & 3.5128 & 12.3399 & 12 \\ \(\infty\) & 4 & 16 & \(12^{*}\) \\ \hline \hline \end{tabular} \end{table} Table 4: The table shows the parameter \(d\) of the CGLMP\({}_{d}\) inequality, as well as the quantum value \(Q\) of the one-copy and two-copy CGLMP\({}_{d}\) inequalities, alongside the one-bit bound \(L1\text{bit}\) of the two-copy CGLMP\({}_{d}\), from \(d=2\) to \(d=10\). The quantum value for \(d\rightarrow\infty\) is also shown, which is due to Ref. [50]. The entry for \(L1\text{bit}\) for \(d\rightarrow\infty\) is a conjectured value. To achieve \(Q(\text{CGLMP}_{d})\) in Table 4, non-maximally entangled states of two \(d\)-dimensional quantum systems are required, except for \(d=2\)[48]. Let us denote by \(\tilde{Q}(\mathrm{CGLMP}_{d})\) the quantum value that can be attained with the conjectured optimal measurements and \(d\times d\) maximally entangled states [28; 49]. The obtained values are \(\tilde{Q}(\mathrm{CGLMP}_{31})=3.4646\) and \(\tilde{Q}(\mathrm{CGLMP}_{31}^{\otimes 2})=\tilde{Q}(\mathrm{CGLMP}_{31})^{2}=12.0031\). Thus, a \((961\times 961)\)-dimensional maximally entangled state allows us to exceed the conjectured value of \(L1\mathrm{bit}(\mathrm{CGLMP}_{31}^{\otimes 2})=12\). At this point, an interesting question arises: what is the minimum number of inputs required to exceed the one-bit bound of a Bell-like inequality with quantum systems? The previous example uses four inputs. Is there any three-input Bell inequality with one bit of communication violated by quantum systems? We provide such a construction in the next subsection.
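First, as a consistency check on the single-copy claims above, the following Python sketch (our own illustration) confirms \(L(\text{CGLMP}_{d})=3\) and \(L1\text{bit}(\text{CGLMP}_{d})=4\) by exhaustive enumeration for small \(d\):

```python
import itertools

def coeff(a, b, x, y):
    """Indicator coefficients of the four terms of Eq. (23)."""
    if (x, y) == (0, 0): return int(a >= b)
    if (x, y) == (0, 1): return int(a <= b)
    if (x, y) == (1, 0): return int(a < b)
    return int(a >= b)                        # (x, y) == (1, 1)

def bounds(d):
    L = L1 = 0
    for aa in itertools.product(range(d), repeat=2):       # Alice's a(x)
        # local bound: Bob best-responds to each input y separately
        L = max(L, sum(max(sum(coeff(aa[x], b, x, y) for x in range(2))
                           for b in range(d)) for y in range(2)))
        # one-bit bound: Alice also sends l(x); Bob responds to (y, l)
        for ll in itertools.product(range(2), repeat=2):
            L1 = max(L1, sum(max(sum(coeff(aa[x], b, x, y)
                                     for x in range(2) if ll[x] == m)
                                 for b in range(d))
                             for y in range(2) for m in range(2)))
    return L, L1

for d in range(2, 7):
    print(d, bounds(d))    # prints (3, 4) for every d
```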
### One-bit classical bound for truncated double CGLMP inequalities Let us consider the double CGLMP\({}_{d}\) inequality of the previous subsection, where we label the four inputs by \(X(x,x^{\prime})=\{0,1\}^{2}\) and \(Y(y,y^{\prime})=\{0,1\}^{2}\) on the respective sides of Alice and Bob. To obtain the truncated settings, let us remove the setting \((1,0)\) from both \(X\) and \(Y\): \[X(x,x^{\prime}) =\{00,01,11\}\] \[Y(y,y^{\prime}) =\{00,01,11\}. \tag{24}\] Let us denote this three-input inequality by \([\mathrm{CGLMP}_{d}^{\otimes 2}]_{s}\). Note that the algebraic maximum is 9 for any \(d\geq 2\). This value can be attained quantumly in the limiting case of \(d\to\infty\). On the other hand, it can be proven that \(L1\mathrm{bit}([\mathrm{CGLMP}_{d}^{\otimes 2}]_{s})=8\) for any \(d\geq 2\). The proof follows the same line of reasoning as the proof of \(L1\mathrm{bit}([\mathrm{Magic}^{\otimes 2}]_{s})=48\) in Sec. III.2. In particular, we show that for any bipartition of the three-element set \(X(x,x^{\prime})=\{00,01,11\}\), the two-setting CGLMP\({}_{d}\) will appear on one of the partitions. This cannot be played perfectly using local strategies, hence the bound has to be smaller than the algebraic maximum of 9. As all deterministic one-bit strategies produce an integer value, an upper bound for \(L1\mathrm{bit}([\mathrm{CGLMP}_{d}^{\otimes 2}]_{s})\) is 8, which is tight since it can be attained with a specific one-bit classical strategy. On the other hand, the conjectured local bound is \(L([\mathrm{CGLMP}_{d}^{\otimes 2}]_{s})=7\), which we verified up to \(d=20\). We expect to exceed the bound of \(L1\mathrm{bit}([\mathrm{CGLMP}_{d}^{\otimes 2}]_{s})=8\) with a potentially large but finite value of \(d\). Why is that? This is because \([\mathrm{CGLMP}_{d}^{\otimes 2}]_{s}\) can be played perfectly when \(d\) is infinite, and its quantum value tends to 9 as \(d\) becomes large. For this reason, there must be a threshold value for which \(Q([\mathrm{CGLMP}_{d}^{\otimes 2}]_{s})\) exceeds the one-bit bound of 8. Indeed, using the specific settings stated in Ref. [50] and the same quantum states, we obtain \(Q([\mathrm{CGLMP}_{d}^{\otimes 2}]_{s})=8.0002059\) for \(d=283\), exceeding the one-bit bound of 8. Hence, we can conclude that a \((283^{2}\times 283^{2})\)-dimensional quantum state with well-chosen measurements violates this three-input, \(283^{2}\)-output Bell inequality with one bit of communication. ### Fixed directional one-bit classical bound for truncated double CGLMP inequalities We begin with the four-setting \(X=Y=\{0,1\}^{2}\) double CGLMP inequalities, keeping the settings \(X(x,x^{\prime})=\{00,01,11\}\) and \(Y(y,y^{\prime})=\{00,11\}\). We denote this expression by \([CGLMP_{d}^{\otimes 2}]_{a}\). We allow a single bit of communication from Alice to Bob. Here we find that the one-bit classical bound is 5, and we know that for large \(d\) the quantum value converges to 6. The value \(L1\mathrm{bit}=5\) is argued similarly to the proof presented in Section IV.2, while the conjectured local bound is \(L=4\), which we verified up to \(d=20\). What is the threshold parameter \(d\) at which the quantum value exceeds the one-bit classical bound? It turns out that \(Q([CGLMP_{d}^{\otimes 2}]_{a})=5.0005455617\) for \(d=38\).
Therefore, if we consider one bit of classical communication in a fixed direction from Alice to Bob, there is a Bell-like inequality augmented by one bit of communication with three inputs on Alice's side and two inputs on Bob's side that can be violated by a \((38^{2}\times 38^{2})\)-dimensional quantum state. ### The one-way \(c\)-bit bound for truncated multicopy CGLMP inequalities Here we generalize the construction described in Sec. IV.2 from one bit of communication to \(c\) bits of communication. We begin with \(l=2^{c}\) copies of the CGLMP\({}_{d}\) expression, and keep the following \((l+1)\) inputs on the respective sides of Alice and Bob: \[X=Y=\{0\ldots 00,0\ldots 01,0\ldots 11,\ldots,1\ldots 11\}. \tag{25}\] Let us allow \(c\) bits of classical communication from Alice to Bob. With this message, we can make any \(l=2^{c}\)-partition of the \((l+1)\)-element input set on Alice's side. This forces at least one part of the partition to contain two inputs. As all \(l+1\) strings in (25) are different, there will be at least one index, say \(i\), where these two strings differ. Let us select the same two strings on Bob's side as well. Let us coarse-grain on all the indices, except for index \(i\), on both Alice's and Bob's side. This way, we obtain a CGLMP\({}_{d}\) inequality that cannot be played perfectly with only local resources. Therefore, the \(c\)-bit bound of the truncated \(l\)-copy CGLMP\({}_{d}\) inequality with \(c\) bits of communication from Alice to Bob is at most \(Lc\mathrm{bit}=(l+1)^{2}-1\), given that the algebraic maximum is \((l+1)^{2}\). Since the quantum bound of this truncated \(l\)-copy CGLMP\({}_{d}\) inequality approaches the algebraic maximum when \(d\to\infty\), there must be a critical \(d\) for any \(l=2^{c}\), where the quantum bound exceeds \(Lc\mathrm{bit}\). Note, however, that this critical \(d\) can be quite large even for moderate \(c\). As we already found in Sec. IV.2, for \(c=1\) the critical \(d\) is \(283\). It is worth noting that since the inequality is symmetric for party exchange, our findings regarding exceeding the \(c\)-bit bound quantumly are still valid when Bob communicates \(c\) classical bits to Alice. For a given number \(c\) of bits, the number of inputs per party is \(2^{c}+1\), defining a minimal scenario, and both parties have \(o=d^{2^{c}}\) outputs per measurement. Similar results regarding input cardinality have been obtained by Maxwell and Chitambar [19] concerning the one-way communication cost of simulating no-signalling distributions. In contrast to our case, however, binary outputs could be chosen in that scenario. It remains an open question how to decrease the dimension and number of outputs by considering alternative Bell inequalities with \(c\) bits of communication, where the number of settings is the minimal \(2^{c}+1\). ## V Platonic correlation-type Bell inequalities with one bit of communication How do we define a Platonic Bell inequality? Consider an \(m\)-vertex solid in Euclidean dimension \(n\), where the unit vectors pointing towards the vertices of this solid are denoted by \(\{\vec{V}_{i}\}_{i}\) with \(i=1,...,m\). We suppose that the columns of the matrix with elements \(V_{ij}=(\vec{V}_{i})_{j}\) are orthogonal to each other and have equal norm. This property applies to all Platonic and Archimedean solids [31].
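As a quick sanity check of this semi-orthogonality property, one can verify it numerically for a toy example (ours, not one of the solids of Ref. [31]): the \(m\) unit vectors of a regular \(m\)-gon in dimension \(n=2\) also have orthogonal, equal-norm matrix columns, with squared column norm \(m/n\):

```python
import numpy as np

m, n = 5, 2                              # toy example: regular pentagon
theta = 2 * np.pi * np.arange(m) / m
V = np.stack([np.cos(theta), np.sin(theta)], axis=1)  # rows V_i: unit vectors

G = V.T @ V                              # Gram matrix of the n columns
# Semi-orthogonality: columns orthogonal with equal squared norm m/n,
# i.e. G = (m/n) * identity.
print(np.allclose(G, (m / n) * np.eye(n)))  # True
```

For vector sets with this property, the Tsirelson bound takes the closed form \(Q=m^{2}/n\) used below [30; 31].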
We shall define the coefficients of the \(m\)-input two-output correlation Bell inequality \[\text{Plato}=\sum_{x=1}^{m}\sum_{y=1}^{m}M_{xy}E_{xy}\leq L, \tag{26}\] by \(M_{xy}=\vec{V}_{x}\cdot\vec{V}_{y}\), where \(L\) denotes the local bound, and \(E_{xy}=P(00|xy)+P(11|xy)-P(01|xy)-P(10|xy)\) is the two-party correlation between inputs \(x\) and \(y\). The maximum quantum value of the Bell inequality (26) is \(Q(\text{Plato})=m^{2}/n\), which defines the Tsirelson bound of the Bell inequality [30; 31]. All such Bell inequalities are referred to as Platonic Bell inequalities. One such Platonic construction follows from halving the \(126\) minimal vectors in the \(E_{7}\) lattice (see the database [51]). It results in \(63\) unit vectors \(\{\vec{V}_{i}\}_{i}\) in dimension \(7\), which obey the aforementioned semi-orthogonality property. Hence, the maximum quantum value of this Bell expression is given by \(Q(\text{Plato}_{E7})=m^{2}/n=567\). On the other hand, the local bound \(L(\text{Plato}_{E7})=399\) is obtained through a branch-and-bound search over all local deterministic strategies (see Ref. [41]); the computation gives the exact local bound. A recent GPU implementation can be found in Ref. [52]. The ratio \(Q/L=567/399=1.421052\) exceeds the maximum quantum-to-classical ratio of the CHSH inequality, which is \(\sqrt{2}\). In contrast, calculating the \(L1\text{bit}\) bound is more demanding. For this purpose, we used a heuristic see-saw type iterative algorithm similar to the algorithms described in Ref. [38; 41], which iteratively searches within the set of deterministic one-bit strategies. In this way, the best lower bound we find on \(L1\text{bit}(\text{Plato}_{E7})\) is \(563\), which is smaller than \(Q(\text{Plato}_{E7})=567\). We can actually establish an upper bound of \(565\) on \(L1\text{bit}(\text{Plato}_{E7})\), which conclusively proves that the quantum value exceeds the one-bit classical bound. Due to Tsirelson's work [53] (see also [54]), it is possible to construct \(63\) projective measurements in dimension \(d=2^{\lfloor n/2\rfloor}=8\) for \(n=7\), along with an \(8\times 8\) maximally entangled state. The analytical bound of \(565\) can be shown by using the following observation. **Observation 1**.: _The relation_ \[L1\text{bit}(\text{Plato})\leq\sqrt{2}L(\text{Plato}) \tag{27}\] _is valid for any Platonic Bell inequality._ The local and one-bit bounds entering Eq. (27) are given by the respective formulas \[L(\text{Plato})=\left(\max_{\vec{u}\in S^{n-1}}\sum_{i=1}^{m}\Big{|}\vec{u}\cdot\vec{V}_{i}\Big{|}\right)^{2} \tag{28}\] and \[L1\text{bit}(\text{Plato})=\sqrt{L(\text{Plato})}\times\max_{\vec{u}_{1},\vec{u}_{2}\in S^{n-1}}\sum_{i=1}^{m}\max\left(\Big{|}\vec{u}_{1}\cdot\vec{V}_{i}\Big{|},\Big{|}\vec{u}_{2}\cdot\vec{V}_{i}\Big{|}\right), \tag{29}\] where \(\vec{V}_{i}\in\mathbb{R}^{n}\), \(i=1,\ldots,m\) are the construction vectors that define the Platonic Bell coefficients as \(M_{xy}=\vec{V}_{x}\cdot\vec{V}_{y}\). One can show the validity of Obs. 1 by using geometrical arguments based on formulas (28) and (29). Let us now apply Obs. 1 to the \(\text{Plato}_{E7}\) Bell expression to obtain the upper bound \(L1\text{bit}(\text{Plato}_{E7})\leq\sqrt{2}\times 399<565\). As \(\text{Plato}_{E7}\) is constructed to be symmetric for party exchange, the one-bit classical bound is the same for both communication directions.
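Formulas (28) and (29) can be estimated numerically for any set of construction vectors by sampling unit vectors; the sketch below (ours) does this for the toy \(m\)-gon above, since the \(63\) \(E_{7}\)-based vectors are not listed here. Sampling only yields lower estimates of the two maxima, so the output is illustrative rather than a rigorous bound:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_units(k, n):
    u = rng.normal(size=(k, n))
    return u / np.linalg.norm(u, axis=1, keepdims=True)

def local_bound(V, k=100_000):
    # Eq. (28): L = (max_u sum_i |u.V_i|)^2, the max estimated by sampling.
    s = np.abs(sample_units(k, V.shape[1]) @ V.T).sum(axis=1)
    return float(s.max()) ** 2

def one_bit_bound(V, k=400):
    # Eq. (29): L1bit = sqrt(L) * max_{u1,u2} sum_i max(|u1.V_i|, |u2.V_i|).
    dots = np.abs(sample_units(k, V.shape[1]) @ V.T)       # shape (k, m)
    best = max(np.maximum(dots[i], dots[j]).sum()
               for i in range(k) for j in range(i, k))
    return float(np.sqrt(local_bound(V)) * best)

m = 5
theta = 2 * np.pi * np.arange(m) / m
V = np.stack([np.cos(theta), np.sin(theta)], axis=1)
L, L1 = local_bound(V), one_bit_bound(V)
print(L, L1, L1 <= np.sqrt(2) * L + 1e-6)   # consistent with Observation 1
```

With the actual \(E_{7}\) vectors and a proper optimizer in place of random sampling, the same routines could be used to cross-check the \(L=399\) and \(\sqrt{2}\times 399<565\) values quoted above.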
Therefore, we conclude that the maximum quantum value of the Platonic Bell inequality \(\text{Plato}_{E7}\) above cannot be achieved with a bidirectional one-bit classical communication model. In this case, the dimension of the full probability space is \(D_{P}=63^{2}\times 2^{2}=15876\), which is larger than \(D_{P}=7^{2}\times 16^{2}=12544\) corresponding to the truncated double Magic square game discussed in Sec. III.2. ## VI Discussion We used diverse techniques to prove that a classical model with one bit of classical communication cannot simulate measurements performed on higher dimensional bipartite quantum systems. Table 1 highlights our main findings for the different constructions. We defined a measure of hardness for one-bit classical simulation by the dimension of the full probability space of the bipartite correlations, \(D_{P}\). This is the dimension that is required by quantum correlations to refute classical models with one bit of communication. We placed the upper bound for the value of \(D_{P}\) at \(12544\). However, this number is quite far from the best lower bound of \(D_{P}>24\). This suggests that there is still a lot of room for further improvement. We leave it as an open problem to reduce the gap described above. However, it is possible that our attempts to find a smaller upper bound for \(D_{P}\) failed because its true value might be closer to our upper bound of \(12544\). If that is the case, we can argue that local hidden variable models supplemented with a single bit of classical communication are surprisingly powerful when the goal is to simulate bipartite quantum correlations. From an experimental point of view, it is also crucial to find violations of the one-bit bound using the smallest possible dimensional bipartite states. In this regard, it is known that the double pseudo-telepathy games require at least \(9\times 9\)-dimensional states. Also, our best construction with a finite number of inputs in terms of dimensionality is based on a Platonic Bell inequality and involves \(8\times 8\)-dimensional states. On the other hand, there is strong evidence that \(2\times 2\) quantum states can be simulated classically with a single bit of communication. The question arises whether one can rule out one-bit classical simulation with a component space dimension less than \(8\) (and possibly a modest number of inputs) by considering other Bell-like constructions. ## Acknowledgements T.V. thanks Antonio Acin and Jonatan Bohr Brask for inspiring conversations. We acknowledge the support of the EU (QuantERA eDICT) and the National Research, Development and Innovation Office NKFIH (No. 2019-2.1.7-ERA-NET-2020-00003).
2305.01680
The Sommerfeld enhancement at NLO and the dark matter unitarity bound
We reexamine the consequences of perturbative unitarity on dark matter freeze-out when both Sommerfeld enhancement and bound state formation affect dark matter annihilations. At leading order (LO) the annihilation cross-section is infrared dominated and the connection between the unitarity bound and the upper bound on the dark matter mass depends only on how the different partial waves are populated. We compute how this picture is modified at next-to-leading order (NLO) with the goal of assigning a reliable theory uncertainty to the freeze-out predictions. We explicitly compute NLO corrections in a simple model with abelian gauge interactions and provide an estimate of the theoretical uncertainty for the thermal masses of heavy electroweak $n$-plets. Along the way, we clarify the regularization and matching procedure necessary to deal with singular potentials in quantum mechanics with a calculable, relativistic UV completion.
Salvatore Bottaro, Diego Redigolo
2023-05-02T18:00:02Z
http://arxiv.org/abs/2305.01680v2
# The dark matter unitarity bound at NLO ###### Abstract We reexamine the consequences of perturbative unitarity on dark matter freeze-out when both Sommerfeld enhancement and bound state formation affect dark matter annihilations. At leading order (LO) the annihilation cross-section is infrared dominated and the connection between the unitarity bound and the upper bound on the dark matter mass depends only on how the different partial waves are populated. We compute how this picture is modified at next-to-leading order (NLO) with the goal of assigning a reliable theory uncertainty to the freeze-out predictions. We explicitly compute NLO corrections in a simple model with abelian gauge interactions and provide an estimate of the theoretical uncertainty for the thermal masses of heavy electroweak \(n\)-plets. Along the way, we clarify the regularization and matching procedure necessary to deal with singular potentials in quantum mechanics with a calculable relativistic UV completion. ## I Introduction Among the plethora of dark matter (DM) production mechanisms, a minimal and predictive setup is DM thermal freeze-out where the DM is in thermal contact with the Standard Model (SM) bath in the early Universe, and its abundance today is set by 2-2 annihilations into SM or dark sector states. This simple framework makes it possible to derive an upper bound on the DM mass from the perturbative unitarity of the annihilation cross-section as was first done in Ref. [1]. The unitarity of the S-matrix bounds from above every single partial wave contributing to the annihilation cross-section. At a given order in the perturbative expansion, this bound can be recast into a maximal value of the gauge coupling. The latter can then be translated into an upper bound on the DM mass through the requirement that the total annihilation cross-section should deplete the DM abundance to match the measured relic density today. When long-range interactions are at work, non-relativistic (NR) quantum mechanical effects significantly alter the DM annihilation cross-section inducing an overall enhancement which is dubbed Sommerfeld enhancement (SE) in the literature [2; 3; 4; 5; 6; 7] typically accompanied by the bound state formation (BSF) during the annihilation process [8; 9; 10; 11]. Since these effects dominate the annihilation cross-section it is crucial to understand their behavior once we approach the perturbative unitarity bound (PUB). This question is the main focus of this paper. Approaching the PUB, NLO corrections should become important and their relative size compared to the LO contributions gives a reliable estimate of the expected theory uncertainty. We then study the behavior of NLO corrections to the non-relativistic potential making use of the general results from Ref. [12; 13; 14]. We systematically include corrections generated both by the infrared (IR) and ultraviolet (UV) dynamics. Our analysis allows us to reliably assign a theoretical uncertainty to the freeze-out predictions. Even though our results are general, we use a simple dark QED model to illustrate the impact of NLO corrections. We will comment on how this analysis allows us to assign a more reliable theory error to the electroweak WIMPs thermal masses derived in Ref. [15; 16]. Our paper is structured as follows. In Sec. II we summarize the relevant ingredients for the LO freeze-out computation and discuss the PUB at LO. In Sec. III we develop the tools to account for both IR (Sec. III.1) and UV NLO corrections (Sec. III.2). In Sec. 
IV we then illustrate the importance of our corrections in a simple dark QED model. In Sec. V we conclude. In Appendix A we illustrate the general structure of UV NLO potentials while in Appendix B we illustrate our regularization and matching procedure in the simple case of the Coulomb potential. In Appendix C we collect useful formulas about bound state formation. ## II The unitarity bound at LO We first discuss DM annihilation at LO. We illustrate SE in Sec. II.1 and BSF in Sec. II.2. The DM PUB at LO and its consequences are considered in Sec. II.3. We decompose the annihilation channels in eigenvalues of the total angular momentum \(\vec{J}=\vec{L}+\vec{S}\), where \(\vec{L}\) is the angular momentum and \(\vec{S}\) is the internal spin.1 Footnote 1: The Casimir operators are defined in the standard way: \(\vec{J}^{2}=j(j+1)\mathds{1}\), \(L^{2}=l(l+1)\mathds{1}\), \(S^{2}=s(s+1)\mathds{1}\). Since the freeze-out happens at NR velocities the annihilation channels with \(\vec{L}\neq 0\) are velocity suppressed. We can then focus on s-wave annihilation and set \(\vec{L}=0\) and \(\vec{J}=\vec{S}\). We consider the DM to be a scalar or a fermion, where in the latter case both \(j=1\) and \(j=0\) channels contribute to the s-wave annihilation. For simplicity, we take the DM to be in thermal contact with the SM in the early Universe so that the freeze-out dynamics is controlled by a thermal bath with a number of light degrees of freedom at the freeze-out temperature \(g_{*}^{\rm f.o.}\equiv g_{*}(T_{\rm f.o.})\) possibly different from the SM one. ### Hard cross-section and Sommerfeld enhancement In the NR limit the dynamics of annihilation is captured by the Schroedinger equation for the system of the pair of annihilating particles \(\phi_{j,i}(r)\). Here \(j=s\) selects a given total angular momentum channel at \(l=0\) while the index \(i\) stands for other possible internal degrees of freedom. The Schroedinger equation reads \[-\frac{\nabla^{2}\phi_{j,i}}{M_{\chi}}+\left[V_{i}^{\rm LO}-i\frac{\sigma_{j,i}^{h}v}{2}\delta^{3}(\vec{r})\right]\phi_{j,i}=\frac{M_{\chi}v^{2}}{4}\phi_{j,i}\,, \tag{1}\] where the imaginary part of the potential in the squared brackets is related through the optical theorem to the "hard" annihilation cross-sections \(\sigma_{j,i}^{h}\) which describe the processes whose decay products carry most of the DM energy as shown in Fig. 1. Upon projecting into radial wave functions, the kinetic term will also generate the usual centrifugal barrier \(1/r^{2}\). At small velocities, \(v\ll\alpha\), the dynamics of DM is affected by the long-range interactions encoded in the non-relativistic potentials \(V_{i}^{\rm LO}\). At LO in \(\alpha\) and \(v\), the potential takes the standard Coulomb form \[V_{i}^{\rm LO}(r)=\lambda_{i}\frac{\alpha}{r}\, \tag{2}\] where \(\lambda_{i}\) is a channel-dependent number (which we assume to be spin independent) that can be negative (positive) for attractive (repulsive) potentials.2 Footnote 2: We refer to Ref. [17] for a complete classification of the possible Coulomb interaction from the effective field theory perspective. We also assumed the mass \(M_{\chi}\) to be the same for every internal degree of freedom \(i\). This is easily generalized in the case where mass-splittings among the different internal degrees of freedom play an important role (see for instance Ref. [15]). The full, non-perturbative annihilation cross-section can be computed by solving Eq. (1).
The contribution of the modified wave function to the annihilation cross-section can be read from the divergence of the probability current \(\widetilde{j}_{j,i}(\vec{r})=\frac{2}{M_{\chi}}\Im[\phi_{j,i}^{\dagger}\vec{\nabla}\phi_{j,i}]\) \[\sigma_{j,i}v=-\int\mathrm{d}^{3}r\vec{\nabla}\cdot\widetilde{j}_{j,i}(\vec{r})\simeq\sigma_{j,i}^{h}v|\phi_{i}(0)|^{2}\, \tag{3}\] where \(S_{E}^{i}\equiv|\phi_{i}(0)|^{2}\) is the so-called Sommerfeld enhancement (SE) and \(\phi_{i}(x)\) is the spin-independent solution of the Schroedinger equation Eq. (1) neglecting the contribution from the hard process.3 The boundary conditions for \(\phi_{i}\) require regularity at the origin and the recovery of the asymptotic scattering waves away from the potential. Footnote 3: Going beyond the approximation in Eq. (3) including the effect of the hard process on the wave function introduces corrections to the SE of order \(\mathcal{O}(\alpha^{3})\) which are negligible for DM freeze-out (see Ref. [18] for a discussion of the importance of these corrections in the context of indirect detection). A simple way to compute the SE is to rewrite Eq. (1) as a first-order differential equation for \(h_{j,i}(r)=\phi_{j,i}^{\prime}(r)/\phi_{j,i}(r)\) \[\frac{h_{j,i}^{\prime}}{M_{\chi}}+h_{j,i}^{2}-\left[V_{i}^{\rm LO}-i\frac{\sigma_{j,i}^{h}v}{2}\delta^{3}(\vec{r})\right]=\frac{M_{\chi}v^{2}}{4}\,, \tag{4}\] with boundary condition \(h(r\to\infty)=iM_{\chi}v/2\). In terms of the new variable the SE in Eq. (3) can be written as \(S_{E}^{i}=2\Im[h_{i}(0)]/(M_{\chi}v)\) under the same approximation of Eq. (3). We will use this formulation in Sec. III.1. Another nice way of writing the SE is to introduce the reduced radial wave function for \(l=0\), \(\chi_{j,i}(r)=r\phi_{j,i}(r)\), and define the dimensionless variable \(x=rM_{\chi}\) and the rescaled potential \(V_{i}^{\rm LO}=M_{\chi}\mathcal{V}_{i}^{\rm LO}\) so that Eq. (1) becomes \[-\chi_{j,i}^{\prime\prime}+\left[\mathcal{V}_{i}^{\rm LO}-i\frac{\sigma_{j,i}^{h}v}{2M_{\chi}}\delta^{3}(\vec{x})\right]\chi_{j,i}=\frac{v^{2}}{4}\chi_{j,i}. \tag{5}\] By carefully choosing the boundary conditions we can get equivalent expressions of the SE in terms of the reduced wave function. For instance, taking \(\chi(x\to 0)\propto x\) and \(\chi_{i}(\infty)=\sin(xv/2+\delta_{0}^{i})\) we get \(S_{E}^{i}=|2\chi_{i}^{\prime}(0)/v|^{2}\). Alternatively, imposing \(\chi(x\to 0)=xv/2\) we get \(\chi_{i}(\infty)=\sin(xv/2+\delta_{0}^{i})/\sqrt{S_{E}}\). This latter formulation will be useful in Sec. III.2. For a Coulomb potential, the \(S_{E}\) takes the simple analytic form \[S_{E}^{i}|_{\rm LO}=\frac{2\pi\lambda_{i}\alpha}{v}\frac{1}{1-e^{-\frac{2\pi\lambda_{i}\alpha}{v}}}\, \tag{6}\] which strongly enhances (or suppresses) the \(l=0\) annihilation cross-section when \(v<\lambda_{i}\alpha\). Figure 1: Sketch of the annihilation rate process. The **red** diagram shows the hard annihilation cross-section where the particles in the final state carry most of the DM energy. The **blue** diagram includes soft exchanges with momentum \(p=M_{\chi}v\) which get resummed by the non-relativistic potential. The annihilation cross-section for a single \(j\) wave can then be parametrically written at LO as \[\sigma_{\rm ann}^{j}v=\sum_{i}\sigma_{j,i}v=\frac{2\pi^{2}\alpha^{3}}{vM_{\chi}^{2}}c_{\rm ann}^{j}|_{\rm LO}\, \tag{7}\]
where we defined \(c^{j}_{\rm ann}|_{\rm LO}=\sum_{i}c^{h}_{j,i}\lambda_{i}\) and \(c^{h}_{j,i}\) are \(\mathcal{O}(1)\) coefficients encoding the LO contribution of the hard cross-section to the different channels and \(\lambda_{i}\) are defined in Eq. (2). ### Bound State Formation In the NR regime typical of freeze-out, long-range interactions lead to a significant rate of BSF. Since these BS are not stable, BSF helps in depleting the DM abundance and thus affects the prediction of the DM mass. At leading order in \(\alpha\) and \(v\), BSF occurs via emission of a single gauge boson through the process \(\chi_{1,i}+\chi_{2,i^{\prime}}\to\text{BS}_{ff^{\prime}}+V^{a}\) described by a generalized dipole Hamiltonian derived in Ref. [10; 19; 20]. We can write the final BSF cross-section as \[\sigma^{i_{s},i_{b}}_{\rm BS}v=S^{i_{s},i_{b}}_{\rm BS}\left(S^{i_{s}}_{E}\frac{\pi\alpha^{2}}{M_{\chi}^{2}}\right)\,, \tag{8}\] where \(S^{i_{s}}_{E}\simeq 2\pi\lambda_{i_{s}}\alpha/v\) is the SE corresponding to the incoming scattering state \(i_{s}\), while \(S^{i_{s},i_{b}}_{\rm BS}\) encodes the non-trivial coupling and velocity dependence coming from the overlap integral between the incoming scattering state \(i_{s}\) and the outgoing bound state \(i_{b}\). We leave a detailed discussion of the overlap integrals to Appendix C. ### Unitarity bound at LO As a direct consequence of the unitarity of the \(S\)-matrix, the 2-2 annihilation cross-section in a given partial wave with total angular momentum \(j\) is bounded from above by \(\sigma_{j}\leq\pi(2j+1)/p^{2}\), where \(p\) is the initial momentum of the annihilating particles. In the non-relativistic limit, \(p^{2}=M_{\chi}^{2}v^{2}/4\) and the PUB can be written as \[\sigma_{j}v\leq\frac{4\pi(2j+1)}{M_{\chi}^{2}v}\, \tag{9}\] where this inequality does not require the existence of free asymptotic states and it is also satisfied for scattering processes in a Coulomb potential [21]. The annihilation cross-section at LO and in the limit \(v\ll\alpha\) can be written as \[\sigma_{j}v|_{\rm LO}\simeq\frac{2\pi^{2}\alpha^{3}}{vM_{\chi}^{2}}\left[c^{j}_{\rm ann}|_{\rm LO}+c^{j}_{\rm BSF}|_{\rm LO}\right]\, \tag{10}\] where the coefficients \(c^{j}_{\rm ann}|_{\rm LO}\) and \(c^{j}_{\rm BSF}|_{\rm LO}\) encode model dependent \(\mathcal{O}(1)\) numbers controlling the annihilation cross-section and the BSF respectively. In particular, \(c^{j}_{\rm BSF}\) can be obtained from Eq. (8) by projecting onto a state with total angular momentum \(j\). The maximal coupling allowed by the PUB at LO is \[\alpha_{\rm LO}^{\rm max}=\text{Min}_{j}\left[\frac{2(2j+1)}{(c^{j}_{\rm ann}|_{\rm LO}+c^{j}_{\rm BSF}|_{\rm LO})\pi}\right]^{1/3}\, \tag{11}\] where it is interesting to notice that the presence of the SE in the non-relativistic limit reduces the maximally allowed coupling by roughly one order of magnitude compared to the usual bound from the relativistic power counting. To a good approximation, the solution of the DM Boltzmann equation is equivalent to requiring the freeze-out yield to be \(Y^{\rm LO}_{\chi}\simeq\frac{H}{s\sum_{j}(\sigma_{j}v)}\), where \(s\) is the entropy density.
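For concreteness, the algebra behind Eq. (11) amounts to saturating the unitarity bound: equating Eq. (10) with Eq. (9) for a given \(j\)-wave gives \[\frac{2\pi^{2}\alpha^{3}}{vM_{\chi}^{2}}\left[c^{j}_{\rm ann}|_{\rm LO}+c^{j}_{\rm BSF}|_{\rm LO}\right]=\frac{4\pi(2j+1)}{M_{\chi}^{2}v}\quad\Longrightarrow\quad\alpha^{3}=\frac{2(2j+1)}{\pi\left(c^{j}_{\rm ann}|_{\rm LO}+c^{j}_{\rm BSF}|_{\rm LO}\right)}\,,\] and the most constraining (smallest) value over the populated \(j\)-waves fixes \(\alpha_{\rm LO}^{\rm max}\); in the example of Sec. IV this comes from the \(j=0\) wave.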
Substituting \(\alpha_{\rm LO}^{\rm max}\) in the freeze-out condition and requiring \(\chi\) to account for the DM relic density today we get the maximal DM mass allowed by the PUB \[M_{\rm max}\simeq 131.7\ \text{TeV}\left(\alpha_{\rm LO}^{\rm max}\right)^{3/2}\left(\sum_{j}(c^{j}_{\rm ann}|_{\rm LO}+c^{j}_{\rm BSF}|_{\rm LO})\right)^{1/2}\left(\frac{g_{*}^{\rm f.o.}}{x_{\rm f.o.}}\right)^{1/4}\, \tag{12}\] where we substituted the high temperature value of the SM degrees of freedom \(g_{*}^{\rm SM}\simeq 106.75\) and \(x_{\rm f.o}\equiv M_{\chi}/T_{\rm f.o.}\simeq 29\) (see Ref. [22] for details about the dependence of \(x_{\rm f.o}\) on the model parameters). Figure 2: Sketch of the bound state formation process. The **red** diagram shows the fundamental emission process, where a vector with energy given approximately by the binding energy of the BS is emitted from the DM line (dipole) or from the internal vector line (non-abelian). The **blue (green)** ladder diagram includes soft photon exchanges with momentum \(p=M_{\chi}v\) (\(p=M_{\chi}\alpha\)) which get resummed by the non-relativistic potential at LO into the scattering (bound state) wave function. If the annihilation is dominated by a single \(j\)-wave then Eq. (12) simplifies and the mass upper bound becomes independent of the model as derived in Ref. [1]. For example if the \(j=0\) wave dominates we get \(M_{\rm max}^{j=0}=137\) TeV. In practice, the dominance of a single \(j\)-wave is an unrealistic assumption in most of the freeze-out scenarios. The main reason is that BSF tends to equally distribute the cross-section in the lower \(j\) channels. As a consequence, accounting for BSF generically makes \(M_{\rm max}\) larger with respect to the naive estimate because the contribution from BSF in Eq. (12) overcomes the tightening of the PUB on \(\alpha\) in Eq. (11).4 Footnote 4: In previous works on the subject [23; 24] the upper bound on the DM mass is typically the one obtained assuming the dominance of the \(j=0\) partial wave (see however the discussion in Ref. [25]). Even for this simplified setup in Ref. [23] no decomposition in partial waves of the _total_ angular momentum was made, and the total annihilation cross-section, including BSF, was compared to the PUB on the \(j=0\) wave, thus significantly underestimating the upper bound on the coupling. This error was later amended in Ref. [25] which agrees with the number derived here. ## III NLO corrections We study the NLO corrections to the non-relativistic potential. These are expected to be the dominant NLO corrections approaching the unitarity bound at the non-relativistic velocities \(v\ll\alpha\) typical of freeze-out.5 Footnote 5: In Sec. II we focused on \(s\)-wave annihilation processes (\(\vec{L}=0\)) claiming that higher \(L\) waves would be velocity suppressed. While this is certainly true for the hard cross-section, including SE for a general wave gives \(S_{E}^{j}\times\sigma_{j}^{h}v\sim\alpha^{2l}\times\sigma_{0}^{h}v\) [7]. As a consequence \(p\)-wave contributions to the hard annihilation process are only \(\alpha^{2}\) suppressed with respect to the \(j=0\) channel and should be included approaching the unitarity bound. NLO corrections will also affect the hard annihilation cross-section but they are expected to be subleading with respect to the corrections to the non-relativistic potentials for \(v\ll\alpha\). NLO corrections can arise from both IR and UV dynamics.
The former originate from loops where only light degrees of freedom are involved and modify both the long-range and the short-range behavior of the Coulomb potential. On the contrary, UV contributions modify the short-range behavior of the Coulomb potential and are induced by the diagrams where also the heavy DM field \(\chi\) flows in the loops. We schematically show both contributions in Fig. 3. The dominant IR corrections can be approximated by including the running coupling \(\alpha(r)\) in the LO Coulomb potential [13; 14; 26] \[V_{\rm NLO}^{\rm IR}=-\frac{\alpha(r)}{r}\,\quad\alpha(r)=\frac{\alpha}{1+\frac{\alpha}{2\pi}b^{\rm IR}\log(M_{\chi}r)}\, \tag{13}\] where \(b^{\rm IR}\) is the beta-function coefficient due to the states lighter than the DM mass running in the loops and we defined \(\alpha\) to be the fine structure constant at the DM threshold \(r=1/M_{\chi}\). The UV corrections can be systematically encoded in the one-loop matching to the DM potential in Non-Relativistic QFT [12]. These thresholds can be organized as a series in powers of \(1/M_{\chi}\) whose general form in Fourier space was derived in Ref. [12] up to order \(\mathcal{O}(1/M_{\chi}^{2})\). In position space, the NLO potential reads \[\Delta V_{\rm NLO}^{\rm UV}=-\frac{\alpha}{r}\left[\frac{V_{2}}{M_{\chi}r}+\frac{V_{3}}{M_{\chi}^{2}r^{2}}\right]+\frac{V_{D}\alpha}{M_{\chi}^{2}}\delta^{3}(\vec{r})\,, \tag{14}\] where \(V_{2}\), \(V_{3}\), and \(V_{D}\) are dimensionless coefficients that might depend on the annihilation channel (including the spin). \(V_{3}\) and \(V_{D}\) contain in general spin-dependent terms like spin-spin and spin-orbit interactions. These terms would induce transitions between states with different \(l\) and \(s\). In general, computing SE and BSF would then require solving a system of coupled Schroedinger equations. In the following, we consider the simple case of scalar DM, where these terms are zero. We provide more details on the general structure of the NLO potential in Appendix A. Taking \(r\sim 1/M_{\chi}v\) the size of the NLO corrections can be estimated as \(\Delta V_{\rm NLO}\sim\left[V_{2}v+(V_{3}+V_{D})v^{2}\right]V_{\rm LO}\) with \(V_{\rm LO}\sim\alpha M_{\chi}v\) being the scaling of the Coulomb potential in Eq. (2). This estimate indicates that the inverse square potential dominates over the other NLO corrections as long as \(v<V_{2}^{i}/(V_{3}^{i}+V_{D}^{i})\). The leading relativistic correction to the kinetic energy \(M_{\chi}^{3}\Delta E_{\rm kin}=p^{4}/8\) scales as \(\Delta E_{\rm kin}\sim M_{\chi}v^{4}\) and is negligible with respect to the corrections to the potential as long as \(v<(V_{2}\alpha)^{1/2}\) and \(v<(V_{3}+V_{D})\alpha\), respectively. This shows that at low enough velocities, the leading NLO corrections can be encoded solely in the modifications of the potential in the Schroedinger equation. We now present a systematic procedure to account for both IR and UV NLO corrections in the SE and in BSF. ### IR contributions at NLO The IR behavior of the NLO potential crucially depends on the sign of the IR beta-function coefficient \(b_{\rm IR}\). Here we focus on IR free theories with \(b_{\rm IR}>0\) whose NLO potential features a Landau pole at a position \(r_{*}M_{\chi}=\exp(-\frac{2\pi}{b_{\rm IR}\alpha})\). The presence of a Landau pole prevents fixing boundary conditions at distances arbitrarily close to the origin.
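To get a feel for this scale, one can evaluate the pole position with the values that will appear in Sec. IV (\(b_{\rm IR}=4/3\) for a unit-charge light fermion and the LO bound \(\alpha=\alpha_{\rm LO}^{\rm max}=0.69\); the numbers are meant only as orientation): \[r_{*}M_{\chi}=\exp\left(-\frac{2\pi}{b_{\rm IR}\alpha}\right)=\exp\left(-\frac{2\pi}{(4/3)\times 0.69}\right)\simeq e^{-6.8}\simeq 1.1\times 10^{-3}\,,\] i.e. the Landau pole sits roughly three orders of magnitude below the DM Compton radius \(1/M_{\chi}\).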
The boundary condition is then set at a distance \(r_{p}\) defined as \(2\Im[h(r_{p})]/M_{\chi}v=(1-p)S_{E}\), where \(p\) is an arbitrary parameter that controls the theoretical accuracy of our approximation. By inspecting the Schroedinger equation we can also estimate the value of \(r_{p}\) which corresponds to the onset of the plateau of the SE: \(p/r_{p}\approx 2(1+\pi)M_{\chi}\alpha\). The value of \(r_{p}\) is velocity independent, as can be understood by noticing that at small distances the potential energy dominates over the kinetic energy. The larger is \(p\) the further away we are from the saturation of the SE and the larger is our theoretical error. As long as \(r_{p}>r_{*}\), we can solve the Schroedinger equation with the resummed potential in Eq. (13), and compute the SE. For \(\alpha_{\rm LO}^{\rm max}\) defined in Eq. (11), \(r_{*}\) is largest: \(r_{*}(\alpha_{\rm LO}^{\rm max})=r_{*}|_{\rm max}\). We can then fix \(p\) to minimize the theoretical error at the PUB by defining \(r_{\bar{p}}=r_{*}|_{\rm max}\), which fixes the value of \(p=\bar{p}\). This choice ensures the calculability of the SE up to the PUB. We can then estimate the theoretical error due to IR NLO effects by taking the difference of the SE between \(r_{\bar{p}}\) and \(2r_{\bar{p}}\) for any coupling: \(\Delta S_{E}^{\rm IR}=S_{E}(r_{\bar{p}})-S_{E}(2r_{\bar{p}})\). In Sec. IV we will apply this recipe to a simple toy model with abelian gauge interactions. We now briefly comment on the impact of IR NLO corrections on BSF. The BS wave functions and binding energies can be computed at NLO using as an unperturbed basis the Coulombian BS and diagonalizing the matrix elements of the Hamiltonian \(\mathcal{H}_{\rm NLO}=\frac{p^{2}}{M_{\chi}}+V_{\rm NLO}^{\rm IR}(r)\), where \(\mathcal{H}_{ij}^{l}\equiv\langle\Psi_{il}^{C}|\mathcal{H}|\Psi_{jl}^{C}\rangle\), with \(|\Psi_{il}^{C}\rangle\) the Coulombian BS with principal quantum number \(i\) and angular momentum \(l\). The BS wave function and its binding energy are mostly sensitive to scales larger than the Bohr radius, \(r\gtrsim\frac{1}{M_{\chi}\alpha}\). For this reason, the BS dynamics is insensitive to NLO corrections in theories with \(b_{\rm IR}>0\), where the deviations from the Coulomb behavior are larger at short scales. Conversely, we expect large NLO corrections to the BS dynamics in UV-free theories with \(b_{\rm IR}<0\). We leave a detailed study of this case for the future. For \(b_{\rm IR}>0\), the BSF cross-section is mostly affected by NLO modifications of the scattering wave function. These corrections mostly affect the formation of \(p\)-wave BS from an \(s\)-wave initial scattering state because scattering waves with angular momentum \(l>0\) are screened from short-scale NLO corrections by the centrifugal potential. More details are given in Appendix C. ### UV contributions at NLO The inclusion of the UV NLO corrections in Eq. (14) makes the Hamiltonian no longer bounded from below. Equivalently, the Schroedinger equation with \(V_{\rm NLO}^{\rm UV}=V_{\rm LO}+\Delta V_{\rm NLO}^{\rm UV}\) cannot be solved with normalizable solutions with boundary conditions at the origin. This poses the challenge of defining the SE in the presence of a UV singular potential. Following the approach of Ref. [17] (see also Ref. [27; 28] for similar techniques applied to nuclear physics), we regularize the full potential close to the origin with a well potential at distances \(r<r_{\rm cut}\equiv x_{\rm cut}/M_{\chi}\).
In terms of the dimensionless variable \(x=rM_{\chi}\) we can define a dimensionless regularized potential \[\mathcal{V}_{\rm reg}(x)=\begin{cases}-\mathcal{V}_{\rm cut}(v,x_{\rm cut}),&x<x_{\rm cut}\\ \mathcal{V}_{\rm NLO}^{\rm UV}(x),&x>x_{\rm cut}\end{cases}\, \tag{15}\] where \(V_{\rm NLO}^{\rm UV}=M_{\chi}\mathcal{V}_{\rm NLO}^{\rm UV}\). The Schroedinger equation in Eq. (5) can be written by replacing the potential with the regularized one (\(\mathcal{V}_{\rm LO}\to\mathcal{V}_{\rm reg}\)) and the kinetic term with an arbitrary wave function renormalization (\(\chi^{\prime\prime}\to Z_{\chi}\chi^{\prime\prime}\)). In general, the depth of the potential well \(-\mathcal{V}_{\rm cut}\) and the wave function renormalization \(Z_{\chi}\) are functions of \(x_{\rm cut}\) and of \(v\) and they should be fixed with appropriate renormalization conditions in order for the SE to be well defined. Since the relativistic UV theory is calculable, the scattering phase can be explicitly computed and matched to the non-relativistic EFT as an input (see Ref. [29; 30] for a similar discussion in the context of scattering processes). At distances \(x<x_{\rm cut}\), the \(s\)-wave scattering phase can be computed for a generic central potential in the Born approximation \[\sin\bar{\delta}_{0}^{\rm NLO}=-v\int_{0}^{x_{\rm cut}}{\rm d}x\,x^{2}j_{0}^{2}(vx)\mathcal{V}_{\rm NLO}(x)\, \tag{16}\] where \(j_{0}(x)\) is the regular spherical Bessel function and \(\mathcal{V}_{\rm NLO}\) is the full NLO potential. Figure 3: Schematic representation of the IR contributions (**top**) and UV contributions (**bottom**) to the non-relativistic potentials at NLO. **Red** lines correspond to IR degrees of freedom: fermions (**solid**) or gauge bosons (**wiggly**). **Blue** lines are instead heavy DM lines which we take to be scalar in this paper. The UV scattering wave function can then be written as \(\sin(vx/2+\bar{\delta}_{0}^{\rm NLO})\) and it is then matched at \(x=x_{\rm cut}\) to the solution of the Schroedinger equation for \(x<x_{\rm cut}\) inside the potential well. Matching the logarithmic derivatives we get \[\tan\left(\frac{vx_{\rm cut}}{2}+\bar{\delta}_{0}^{\rm NLO}\right)=\frac{v}{2\Phi_{v}}\tan\left(x_{\rm cut}\Phi_{v}\right)\, \tag{17}\] where we defined \(\Phi_{v}\equiv\left(\frac{4\mathcal{V}_{\rm cut}+v^{2}}{4Z_{\chi}}\right)^{1/2}\). At fixed \(x_{\rm cut}=1\) and \(v\), this matching condition fixes \(\Phi_{v}\) in terms of \(\bar{\delta}_{0}^{\rm NLO}\) so that the solution inside the potential well is \[\chi_{v}^{\rm in}(x)=\frac{v}{2\Phi_{v}}\sin(\Phi_{v}x)\quad\text{for}\quad x<x_{\rm cut} \tag{18}\] and can be matched at \(x=x_{\rm cut}\) to \(\chi_{v}^{\rm out}(x)\) which solves the Schroedinger equation for \(x>x_{\rm cut}\) with the potential in Eq. (15). This boundary condition alone is not sufficient to disentangle the SE from the asymptotic behavior of \(\chi_{v}^{\rm out}(x)\) which also depends on the wave function renormalization \(Z_{\chi}\): \[\lim_{x\to\infty}\chi_{v}^{\rm out}(x)=\frac{\sin(xv/2+\delta)}{\sqrt{S_{E}(v)Z_{\chi}}}\, \tag{19}\] where the phase \(\delta\) depends on the long-range dynamics and has to be distinguished from the short-distance one in Eq. (16). An independent relation is then obtained by requiring the SE to approach 1 at large velocities as suggested in Ref. [17]. In the \(v\gg 1\) limit \(\Phi_{v}\simeq 1/\sqrt{Z_{\chi}}\) so that the solution in Eq. (18) depends solely on the wave function renormalization.
The latter can be fixed by requiring \(S_{E}(v\gg 1)=1\) in Eq. (19): \[\lim_{x\to\infty}\chi_{v}^{\rm out}(x)\underset{v\gg 1}{\simeq}\frac{\sin(vx/2+\delta)}{\sqrt{Z_{\chi}}}. \tag{20}\] The matching condition in Eq. (17) admits different branches of solutions for \(\Phi_{v}^{i}\) leaving the l.h.s of Eq. (17) unchanged [31]. However, only the solution in the first branch, defined by \(0<\Phi_{v}<\pi/2x_{\rm cut}\), leads to a well-defined SE given the boundary conditions in Eq. (17) and Eq. (20). This can be seen by considering the case of a very short-range potential with depth \(\mathcal{V}_{\rm cut}\), which vanishes immediately outside the potential well \(x>x_{\rm cut}\). This is a limiting case that perfectly captures the physics in the extremely weakly coupled case. Since the wave function outside the well is just a plane wave one can follow our UV matching procedure to get the SE \[S_{E}=\frac{1}{Z_{\chi}\cos^{2}(\Phi_{v}x_{\rm cut})\left[1+\frac{v^{2}}{4}\frac{\tan^{2}(\Phi_{v}x_{\rm cut})}{\Phi_{v}^{2}}\right]}\,. \tag{21}\] Requiring \(S_{E}(v\gg 1)=1\) as in Eq. (20) fixes \(Z_{\chi}=1\). Figure 4: Behavior of the UV NLO corrections described in Sec. III.2 with \(x_{\rm cut}=1\). The red lines exhibit the power counting on the relativistic corrections discussed below Eq. (14). On the right of the **red** dashed line, the NLO potential cannot be approximated with a \(1/r^{2}\) correction to the Coulomb potential while in the **red** shaded region the corrections to the kinetic energy become important and the non-relativistic description breaks down. The **gray dashed** lines show the NLO contribution to the UV phase in the Born approximation as defined in Eq. (25), normalized with respect to the LO one. In the **gray** shaded area the NLO contribution becomes larger than the LO one and the calculability of the phase breaks down. **Left:** The **blue** contours show the behavior of the mean value of the NLO SE normalized with respect to the LO SE. **Right:** The **blue** contours show the behavior of the theory error on the NLO SE due to the uncertainty on the UV phase determination as defined in Eq. (26). Eq. (21) shows explicitly how the SE depends on the branch through the \(1/\cos^{2}(\Phi_{v}x_{\rm cut})\) factor. Defining the \(i\)-th branch as the one with \((2i-1)\pi/2x_{\rm cut}<\Phi_{v}<(2i+1)\pi/2x_{\rm cut}\), with \(i\) integer, the SE increases by moving to higher branches because the \(\cos^{2}(\Phi_{v}x_{\rm cut})\) factor decreases. This behavior is inconsistent with the weak coupling limit (\(\mathcal{V}_{\rm cut}\ll v^{2}/4\) in this simple setup) where we expect to recover \(S_{E}\to 1\). This limit is realized in the first branch which we then select as the physical one.6 Footnote 6: One might wonder if this argument depends on the simplicity of the wave function outside the potential well. However, the same argument can be constructed starting from the Coulomb potential \(V(r)=\alpha/r\) and using both the regular and the irregular hypergeometric function to perform the UV matching [32]. We checked that in the limit \(\alpha\ll v\ll 1\) this exercise leads to an expression very similar to Eq. (21) where the \(1\) in the numerator is replaced by the SE at LO in Eq. (6). The existence of a solution for Eq. (17) in the first branch is intimately related to the calculability of the UV phase. Indeed, in order to have a solution the l.h.s of the equation should be larger than the minimum of the r.h.s which is obtained for \(\Phi_{v}\to 0\).
This is possible only if \(\bar{\delta}_{0}^{\rm NLO}>0\), which is equivalent to requiring the LO Coulomb potential (which contributes positively to the scattering phase) to dominate over the NLO corrections (which contribute negatively) in Eq. (16). In the gray region in Fig. 4 a solution for the matching equation can only be found by pushing \(x_{\rm cut}>1\), hence enhancing the relative weight of the Coulomb potential with respect to the NLO corrections. This is done to extract the theory prediction on the SE and its error in this region, as we show in Fig. 5. Before leaving this section we comment on the UV NLO contributions to the BSF cross-section, which are encoded in the deformation of the wave function of the initial scattering state and the one of the BS. While computing the latter is straightforward, to compute the former we generalize the variable phase method introduced in Ref. [33] (see also Ref. [8; 34; 35]) to discontinuous potentials. This is described in Appendix C.4. ## IV An Abelian example We consider a dark \(U(1)\) gauge theory where a heavy scalar DM \(\chi\) with mass \(M_{\chi}\) and charge \(q_{\chi}\) annihilates into a pair of dark photons with LO cross-section \(\sigma_{h}^{0}v=2\pi\alpha_{\chi}^{2}/M_{\chi}^{2}\), where \(\alpha_{\chi}=g^{2}q_{\chi}^{2}/4\pi\) is the gauge coupling of the dark abelian gauge group evaluated at the DM mass. We also allow the presence of a light fermion \(e\) of mass \(m_{e}\ll M_{\chi}\) and charge \(q_{e}\). The DM annihilates into dark sector states, whose dynamics after DM freeze-out we ignore. In general one should provide a mechanism for a quick and harmless decay of these states (see Ref. [36; 37] for dedicated studies on secluded DM scenarios). The scalar DM mainly annihilates into a pair of dark photons in \(s\)-wave, while the annihilation into light fermions is velocity suppressed. Hence, at LO, the hard cross-section is dominated by the \(j=0\) partial wave.7 Accounting for the LO SE, which is just \(S_{E}\simeq 2\pi\alpha_{\chi}/v\) for \(v\ll\alpha\), we find \(c_{\rm ann}^{0}=2\) as defined in Eq. (10). BSF is dominated by the BS with principal quantum numbers \(n\leq 2\), since for large \(n\) the BSF cross-section is suppressed by \(\sim n^{-5/2}\) (see Appendix C). The dominant BSF channels are the formation of \(1s\) and \(2s\) BS's from a \(p\)-wave scattering state and of the \(2p\) BS from both \(s\)-wave and \(d\)-wave scattering states.8 The PUB is dominated by the \(j=0\) wave. From Eq. (11) we find \(\alpha_{\rm LO}^{\rm max}=0.69\) which using Eq. (12) implies \(M_{\rm LO}^{\rm max}=240\) TeV as PUB on the scalar DM mass at LO. Footnote 7: Focusing on scalar DM simplifies both the spin structure of the hard cross-section and the one of the NLO potential, setting to zero all the spin-dependent contributions. For fermionic DM the computation of the SE is technically more complicated but the main message of this paper on how to assign a reliable theory uncertainty to the freeze-out masses is unchanged. Footnote 8: In the notation of Eq. (10) we can write the BSF contributions as \(c_{\rm BS}^{1}=512(e^{-4}+8e^{-8})/3\simeq 3.6\), \(c_{\rm BS}^{0}=1024e^{-8}/9\simeq 0.04\) and \(c_{\rm BS}^{2}=32768e^{-8}/9\simeq 1.2\). We now want to estimate the accuracy of the theoretical prediction on the DM freeze-out mass approaching the PUB. The expectation is that the theory uncertainty should increase at larger coupling strength.
Moreover, we expect the leading corrections to be the ones affecting the SE and BSF, since these are the largest contributions to the annihilation cross-section in the non-relativistic limit. The origin of the theory uncertainty is not immediately apparent from the LO computation, because the LO Coulomb potential allows solutions that are regular everywhere and hence insensitive to the UV behavior of the theory. As discussed in Sec. III, introducing NLO threshold corrections induced by heavy DM loops, the SE at NLO becomes UV sensitive through the boundary conditions set by the UV scattering phase. The calculability of the scattering phase is challenged by the UV Landau pole of the dark \(U(1)\), and its uncertainty dominates the theory error on the SE. For this simple theory, the NLO potential reads \[V_{\rm NLO}(r)=V_{\rm NLO}^{\rm IR}(r)+\Delta V_{\rm NLO}^{\rm UV}(r)\, \tag{22}\] where the NLO contributions are \[V_{\rm IR}^{\rm NLO}(r)=-\frac{\alpha(r)}{r},\quad\text{with} \tag{23}\] \[\alpha(r)=\frac{\alpha_{\chi}}{1+\frac{\alpha_{\chi}}{2\pi}b^{\rm IR}\left[\log(m_{e}r)\theta(1-m_{e}r)+\log\left(\frac{M_{\chi}}{m_{e}}\right)\right]},\] and \[\Delta V_{\rm UV}^{\rm NLO}(r)=\frac{\alpha_{\chi}^{2}}{4M_{\chi}r^{2}}-\frac{7\alpha_{\chi}^{2}}{6\pi M_{\chi}^{2}}\text{reg}\frac{1}{r^{3}}. \tag{24}\] The contribution of the light fermion to the \(U(1)\) running is encoded in \(b^{\rm IR}=4q_{e}^{2}/3\). The main results of our analysis are shown in Fig. 4. The gray contours show the shift of the UV scattering phase due to the NLO correction, \(\Delta\bar{\delta}_{0}^{\rm NLO}\), which we define as \[\bar{\delta}_{0}^{\rm NLO\pm}\equiv\bar{\delta}_{0}^{\rm LO}+\Delta\bar{\delta}_{0}^{\rm NLO}\pm(\Delta\bar{\delta}_{0}^{\rm NLO})^{2}/\bar{\delta}_{0}^{\rm LO}\, \tag{25}\] where \(\bar{\delta}_{0}^{\rm LO}\) is the UV scattering phase due to the LO Coulomb potential and \(\Delta\bar{\delta}_{0}^{\rm NLO}\) is the mean value of the NLO shift. Both phases are computed in the Born approximation plugging in Eq. (16) the full NLO potential in Eq. (22), with the only difference that the short distance integral is performed down to \(r_{*}|_{\rm max}\) as defined in Sec. III.1 in order to avoid the UV Landau pole of the dark \(U(1)\). From Eq. (25), we defined the theory uncertainty interval in the determination of the UV scattering phase as \(\bar{\delta}_{0}^{\rm NLO+}-\bar{\delta}_{0}^{\rm NLO-}=2(\Delta\bar{\delta}_{0}^{\rm NLO})^{2}/\bar{\delta}_{0}^{\rm LO}\). As long as \(\Delta\bar{\delta}_{0}^{\rm NLO}/\bar{\delta}_{0}^{\rm LO}<1\) the NLO prediction on the SE corresponds to the central value of the NLO UV phase in Eq. (25). This is shown by the blue contours in Fig. 4 left. The theory error on the scattering phase induces an uncertainty in the determination of the SE \[\Delta S_{E}^{\rm NLO}\equiv S_{E}|_{\bar{\delta}_{0}^{\rm NLO+}}-S_{E}|_{\bar{\delta}_{0}^{\rm NLO-}}\,, \tag{26}\] which is shown by the blue contours in Fig. 4 right. Of course when \(\Delta\bar{\delta}_{0}^{\rm NLO}/\bar{\delta}_{0}^{\rm LO}>1\) the phase becomes incalculable as well as the associated NLO SE. This region is shaded in gray in Fig. 4 and its onset signals the breakdown of calculability, which starts well before the LO PUB at \(\alpha_{\rm LO}^{\rm max}=0.69\) is reached. In Fig. 5 we show the LO and NLO freeze-out predictions for this simple model, setting \(q_{\chi}=q_{e}=1\).
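As a rough numerical illustration of Eqs. (22)-(24) (ours; the coupling, charge, and mass ratio below are illustrative inputs rather than fitted values, the contact term is dropped, and the \(\text{reg}\,1/r^{3}\) piece is evaluated naively at \(r>0\), i.e. without the regularization of Sec. III.2):

```python
import numpy as np

# Illustrative inputs, in units where M_chi = 1.
alpha_chi, q_e, m_e = 0.3, 1.0, 1e-4
b_IR = 4 * q_e**2 / 3            # light-fermion contribution to the running

def alpha_run(r):
    # Eq. (23): one-loop running, frozen below the light-fermion threshold.
    log_term = (np.log(m_e * r) if m_e * r < 1 else 0.0) + np.log(1.0 / m_e)
    return alpha_chi / (1 + alpha_chi / (2 * np.pi) * b_IR * log_term)

def V_NLO(r):
    V_IR = -alpha_run(r) / r                             # Eq. (23)
    V_UV = alpha_chi**2 / (4 * r**2) \
           - 7 * alpha_chi**2 / (6 * np.pi * r**3)       # Eq. (24), naive reg
    return V_IR + V_UV

for r in (0.5, 1.0, 5.0, 50.0):  # distances in units of 1/M_chi
    print(f"r={r:5.1f}  V_NLO={V_NLO(r):+.4f}  V_LO={-alpha_chi / r:+.4f}")
```

The printout makes the pattern described next explicit: at short distances the IR running strengthens the attraction while the UV terms weaken it, consistent with the discussion that follows.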
As expected, both NLO corrections modify mostly the short-distance behavior of the Coulomb potential: the IR NLO corrections tend to increase the SE by making the effective coupling larger at short distances, while the UV NLO corrections reduce the strength of the Coulomb potential, hence reducing the SE. Combining the two effects makes the mean of the NLO prediction accidentally close to the LO one. The theory error on the freeze-out mass reflects the uncertainty in the determination of the UV phase. A further intrinsic error arises from approximating the short distance potential as a single well matching solely the \(s\)-wave UV scattering phase \(\bar{\delta}_{0}\). In principle, the procedure of Sec. III.2 can be easily generalized by introducing an arbitrary number of potential wells at short distances with depths fixed by matching the UV theory scattering phases in different \(l\)-waves. We give an example of this generalized procedure for the LO Coulomb potential in App. B. In the Coulomb case, we find that the SE computed using a single potential well, \(S_{E}^{\rm 1w}\), underestimates the full LO SE by a factor \(\epsilon_{\rm 1w}\equiv S_{E}^{\rm 1w}/S_{E}^{\rm full}\), which is explicitly shown in the blue line of Fig. 6. Given that we are perturbatively expanding around the Coulomb case, we account for this extra "systematic" uncertainty by rescaling the upper limit of the theory error on the NLO SE defined in Eq. (26): \(S_{E}|_{\bar{\delta}_{0}^{\rm NLO+}}\to S_{E}|_{\bar{\delta}_{0}^{\rm NLO+}}/\epsilon_{\rm 1w}\). This procedure should conservatively account for all the theory uncertainties in our freeze-out prediction. Figure 5: **Left:** DM thermal masses for LO and NLO corrections to the Coulomb potential in the dark QED model introduced in Sec. IV with \(q_{\chi}=q_{e}=1\). The **black** line shows the LO prediction of the relic abundance. The **red**, **blue**, and **green** lines show the freeze-out prediction after the inclusion of IR, UV, and IR+UV corrections respectively with their theoretical uncertainty shown as shaded regions of the same color. The **gray dashed** line indicates the value of the coupling above which the UV phase becomes incalculable with \(x_{\rm cut}=1\). In the **bottom** panel we show the theoretical uncertainty on the freeze-out prediction as a function of the coupling \(\alpha\). The uncertainty is dominated by the UV NLO corrections (**blue**) while the IR corrections (**red**) are negligible in an IR free theory. The IR+UV error (**green**) is smaller because of a partial cancellation in the UV phase between the two contributions. We separately show the error due to the uncertainty in the UV phase defined in Eq. (26) (**dotted** lines) and the total error which includes the systematic uncertainty of our regularization procedure with a single potential well, estimated in App. B (**solid** lines). **Right:** Estimated theory uncertainties on the EW \(n\)-plets thermal masses as computed in Ref. [15; 16]. The bottom panel shows that the relative uncertainty on the freeze-out mass is dominated by simplifications in the treatment of BS cosmology for \(n<7\), while for \(n\geq 7\) the theory uncertainty is dominated by the NLO corrections to the annihilation cross-section discussed here. ### Theory uncertainty on the electroweak WIMPs As an application of the previous computation, we go back to the freeze-out predictions for the electroweak WIMPs derived in Ref. [15; 16].
In principle, the computation of the previous section should be repeated for the \(SU(2)\) \(n\)-plets, with the book-keeping challenge of including the NLO corrections in all the isospin channels for the SE and the BSF. Fortunately, enlarging the representation \(n\) of the multiplet, the non-relativistic potential becomes dominated by its abelian part. Therefore, we can approximately estimate the theory uncertainty for the EW WIMPs by taking the curve in the bottom plot of Fig. 5 left and rescaling the coupling constant \(\alpha_{\chi}\to\alpha_{\text{eff}}\simeq(2n^{2}-1)\alpha_{2}/8\), focusing on the zero isospin channel. We plot this in Fig. 5 right, where we see that for EW multiplets with \(n>7\) the dominant theory uncertainty comes from the NLO corrections, while for lighter multiplets the theory error is dominated by the approximations on the BS cosmology.9 Of course, this rough rescaling is far from being a full NLO computation for the EW WIMPs, but it already fixes the unphysical behavior of the theory uncertainty for large \(n\)-plets estimated in Ref. [15; 16]. Footnote 9: The error on small multiplets is dominated by our approximate treatment of BSF in the presence of EW interactions. In particular, the uncertainty comes from neglecting the masses of the EW gauge bosons and the details of BS decoupling (see Ref. [16]). For \(n=6\) and for all \(n>7\) we further neglected the BS ionization from the thermal plasma. This introduces an additional error in the DM mass which was estimated to be at most 5 TeV in Ref. [16]. For \(n=6\), this turns out to be the dominant source of error explaining the offset of the theory error for this multiplet. ## V Conclusions In this paper, we studied NLO corrections to the freeze-out annihilation in the non-relativistic limit. These are important for heavy DM candidates, where the coupling strength approaches the PUB as originally defined in Ref. [1]. We discussed mostly IR-free gauge theories, where the inclusion of NLO corrections makes it apparent that, approaching the PUB, the theory uncertainty due to the matching of the UV data onto the non-relativistic NLO potential blows up. This allows us to estimate a trustworthy theory error on the heavy EW WIMP masses computed in Ref. [15; 16]. We defined a systematic procedure to perform the matching from the UV relativistic theory to the non-relativistic potential. Even though our work builds upon previous works on the subject, we believe that many subtle issues were clarified here. We hope that this can serve as a basis for further studies in this direction. For instance, we leave for the future a systematic treatment of NLO corrections in UV free gauge theories and a careful assessment of the impact of NLO corrections in exclusive channels like the ones considered in indirect detection [38; 39; 40]. ###### Acknowledgements. We thank Roberto Franceschini for asking how to reliably assign a theory error to freeze-out predictions. We are grateful to Brando Bellazzini for many enlightening discussions about Ref. [17]. We also thank Prateek Agrawal and Aditya Parikh for discussions about Ref. [29]. We thank Neot Smadar, CERN, and the Galileo Galilei Institute for hospitality during the completion of this work. We thank Marco Costa and Nick Rodd for a careful read and useful feedback on the draft. SB is supported by the Israel Academy of Sciences and Humanities & Council for Higher Education Excellence Fellowship Program for International Postdoctoral Researchers.
2301.03560
Solo: Data Discovery Using Natural Language Questions Via A Self-Supervised Approach
Most deployed data discovery systems, such as Google Datasets, and open data portals only support keyword search. Keyword search is geared towards general audiences but limits the types of queries the systems can answer. We propose a new system that lets users write natural language questions directly. A major barrier to using this learned data discovery system is it needs expensive-to-collect training data, thus limiting its utility. In this paper, we introduce a self-supervised approach to assemble training datasets and train learned discovery systems without human intervention. It requires addressing several challenges, including the design of self-supervised strategies for data discovery, table representation strategies to feed to the models, and relevance models that work well with the synthetically generated questions. We combine all the above contributions into a system, Solo, that solves the problem end to end. The evaluation results demonstrate the new techniques outperform state-of-the-art approaches on well-known benchmarks. All in all, the technique is a stepping stone towards building learned discovery systems. The code is open-sourced at https://github.com/TheDataStation/solo
Qiming Wang, Raul Castro Fernandez
2023-01-09T18:20:55Z
http://arxiv.org/abs/2301.03560v2
# Data Discovery using Natural Language Questions via a Self-Supervised Approach ###### Abstract. Data discovery systems help users identify relevant data among large table collections. Users express their discovery needs with a program or a set of keywords. Users may express complex queries using programs, but this requires expertise. Keyword search is accessible to a larger audience but limits the types of queries supported. An interesting approach is _learned discovery systems_, which find tables given natural language questions. Unfortunately, these systems require a training dataset for each table collection. And because collecting training data is expensive, this limits their adoption. In this paper, we introduce a self-supervised approach to assemble training datasets and train learned discovery systems without human intervention. It requires addressing several challenges, including the design of self-supervised strategies for data discovery, table representation strategies to feed to the models, and relevance models that work well with the synthetically generated questions. We combine all the above contributions into a system, S2LD, which solves the problem end to end. The evaluation results demonstrate the new techniques outperform state-of-the-art approaches on well-known benchmarks. All in all, the technique is a stepping stone towards building learned discovery systems. The code is open-sourced at [https://github.com/TheDataStation/open_table_discovery](https://github.com/TheDataStation/open_table_discovery). dataset discovery, natural language questions, self-supervised
* **Collecting Training Data.** The system must automatically assemble a training dataset that represents the data from the underlying repository well. Without carefully controlling for dataset size, large volumes of data would translate into training datasets so large that they consume too many resources. And limiting the size of the dataset in an unprincipled fashion means the training dataset may not represent the underlying data well, thus leading to an underperforming system.
* **Learned Table Representation.** The learned system ingests tables in vector format. This calls for a table representation strategy. Many strategies in the literature represent tables as vectors, but we found that they negatively impact system performance. The challenge is to identify a good table format.
* **Assemble end-to-end system.** Building a complete system requires solving several interrelated problems, each of which benefits from different machine learning model architectures. We follow a two-stage approach that cleanly splits the problem around the ML model architectures that best fit each subproblem. During _first-stage retrieval_, the system must identify a subset of potentially relevant tables. During _second-stage ranking_, the system must identify the table that contains the answer to the input question among those returned by first-stage retrieval.

In this paper, we introduce novel techniques to address the above challenges and show how to incorporate those techniques into a system, S2LD, which we release as open-source so the community can improve it and test it in different scenarios. The main contribution of the paper is a **self-supervised method** that leverages SQL as an intermediate proxy between questions and text, facilitating the synthesis of training data. The approach uses Bayesian neural networks to automatically determine the training data size efficiently.
Second, to represent tables in vector format and feed them to the system, we introduce a simple-to-implement graph-based table representation, called **row-wise complete graph**, that is insensitive to the order of columns or rows, which do not bear any meaning in the relational model. Third, we introduce a new **relevance model** that works in tandem with the self-supervised training data collection technique to yield high retrieval performance. Together, these contributions lead to the first _self-supervised learned table discovery_ system that automatically assembles training datasets from repositories and lets humans pose their information needs as natural language questions.

We evaluate the ability of S2LD to identify relevant tables given natural questions as input using existing state-of-the-art benchmarks, comparing the results with state-of-the-art approaches: OpenDTR (Dong et al., 2019) and GTR (Srivastava et al., 2017). We show that S2LD outperforms other approaches on previously unseen data repositories, and that it matches and sometimes outperforms them even when expensive-to-collect training data is provided to the other baselines, demonstrating the quality of the synthesized training dataset. We also evaluate the impact of the new row-wise complete graph table representation, the new relevance model, and the use of Bayesian networks for efficient training data generation. Finally, we analyze system-oriented aspects of S2LD, including its runtime performance and reliance on different types of retrieval indexes, to give a full account of the system characteristics.

**Related Work.** The data management community has made much progress in NL2SQL interfaces (Srivastava et al., 2017; Wang et al., 2018): how to translate a natural question into a SQL query that can be evaluated against a database. In contrast, we concentrate on the data discovery problem: we do not need to translate natural questions to SQL; instead, we generate SQL from tables and then text from SQL to automatically generate training data.

The rest of the paper is structured as follows. We present preliminaries in Section 2, technical background in Section 3, the approach in Section 4, evaluation in Section 5, and conclusions in Section 6.

## 2. Preliminaries

In this section, we present the type of natural language questions supported (Section 2.1), the problem statement (Section 2.2), related work (Section 2.3), and the challenges (Section 2.4).

### Format of Natural Language Question

Natural language questions can be exceedingly complicated. In this work, we target factual questions, which are sufficiently expressive to let users discover interesting tables over large repositories. When defined over a collection of tables T, a factual question is one whose answer is contained in a single table and does not require complex processing or reasoning. For example, we do not target questions that require recovering information from multiple tables and then organizing it into a response. Factual questions are the type search engines can answer by matching against a knowledge base (Srivastava et al., 2017). In the context of our work, we support factual questions whose answer is a cell in a table among T. More precisely, given a table \(T_{i}\) that represents an entity \(E_{i}\) with attributes \(E_{i}.A_{1}\),..., \(E_{i}.A_{m_{i}}\), the answer to a factual question exists in some \(E_{i}.A_{j}\), whether raw or via an aggregation operator such as \((Max,Min,Avg,Count)\).
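As an illustration, here is a hypothetical factual question and the simple single-table SQL query it corresponds to (the table and column names are our own assumptions, not from any benchmark):

```python
# Hypothetical table: libraries(Name, City, "Hours of Operation")
question = "When is the Albany Park library in Chicago open?"

# Equivalent SQL: a selected attribute plus simple equality predicates,
# no joins and no multi-table reasoning.
sql = """
SELECT "Hours of Operation"
FROM libraries
WHERE Name = 'Albany Park' AND City = 'Chicago'
"""
```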
The factual question may optionally incorporate operators that further qualify the scope of the answer, such as \((=,>,<)\) (operators apply only to compatible data types). As a result, the SQL query equivalent to a factual question tends to be simple, with attributes and (maybe) aggregation functions and predicates that use the operators above. These questions are helpful for table discovery, as evidenced by the format of popular benchmarks such as NQ-Tables, which consists of real queries from Google users and which we use in the evaluation section.

### Problem Statement

A table (relation), \(T_{i}\), has a schema \(\mathcal{R}\) with \(k\) columns \(C_{1},C_{2},...,C_{k}\) and rows, \(r\in T_{i}\). The table may have a title (caption), and column names may be missing. A table collection is defined as a set of tables, \(\text{T}=\{T_{1},T_{2},...T_{N}\}\). A cell is indexed by a row \(r\) and a column \(c\), and the function \(cell_{i}(r,c)\) returns the content of the cell in row \(r\) and column \(c\) of table \(T_{i}\). Given a factual question \(q\), the problem we solve is to find a table \(T_{i}^{*}\) among a large table collection T, such that \(T_{i}^{*}\) contains a cell \(cell_{i}(r,c)\) that answers \(q\).

### Learned vs Non-Learned Discovery

We separate data discovery solutions into two categories: _non-learned_ and _learned_.

**Non-learned discovery systems** (Kang et al., 2016; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018) rely on indices to perform efficient search over large table collections. They often expose an API (Wang et al., 2018) to perform sophisticated discovery tasks such as identifying similar, unionable, and joinable tables (Bordes and McIntran, 2014; McIntran and McIntran, 2015; McIntran and McIntran, 2016) or a keyword search interface (Bordes and McIntran, 2014; McIntran and McIntran, 2016) to identify matching tables. None of them offers a natural language question interface.

**Learned discovery systems** support natural language question retrieval (Krishnan et al., 2015; McIntran and McIntran, 2016; McIntran and McIntran, 2016) but do not offer the more sophisticated functionality of _non-learned_ systems. They consist of two stages: _first-stage retrieval_ and _second-stage ranking_ (McIntran and McIntran, 2016). First-stage retrieval is implemented by an index designed to identify a narrow set of relevant tables in a scalable way. The index can be sparse, such as those based on BM25 (Ross and Snoek, 2015), or dense, modeling tables as distributed representations (vectors). For example, OpenDTR (Krishnan et al., 2015) uses dense indexes. It learns a vector representation for each table and for the question using a large training corpus of \(\langle q, T_{i}^{*}\rangle\) pairs. Then, offline, it uses a _Table Encoder_ component to obtain and index the vector representation of the tables. Online, it uses a _Question Encoder_ to obtain the vector representation of the question and then finds the \(K\) closest tables to the question using the inner product. Finally, during second-stage ranking, a table is chosen among the \(K\) using a _semantic relevance model_ (McIntran and McIntran, 2016; McIntran and McIntran, 2016) that integrates the question and table representations.

### Challenges of Learned Discovery Systems

Existing learned discovery systems suffer from several shortcomings that delineate the challenges we tackle in this paper.
**C1. Collecting training data.** Learned discovery systems trained on one table collection do not perform well on new table collections, so they have to be freshly trained for each table collection. Collecting training data is expensive and a major limitation of learned discovery systems. For example, OpenDTR uses 12K \(\langle\)question, table-with-answer\(\rangle\) pairs. The challenge is to avoid the manually intensive task of collecting and labeling training data and to decide how large the training dataset should be to become representative and thus lead to a system that performs well on the target table collection.

**C2. Table Representation.** Learned discovery systems represent tables as vectors. OpenDTR uses TAPAS (Krishnan et al., 2015) to represent a table with a single vector. As we show in the evaluation, the sparse and simpler BM25 model sometimes outperforms TAPAS. More recently, GTR (McIntran and McIntran, 2016) represents tables as graphs, similar to other efforts in data management such as EmBDI (Bordes and McIntran, 2014) and Leva (Leva, 2017). However, the graph construction depends on the order of rows and columns, which has no meaning in relations. Table representation has a big impact on system performance. Thus, the challenge is to find a dense table representation that outperforms sparse models and that works well on relations (Challenge 3).

**C3. Relevance Model.** The model computes a question representation and measures the relevance of different tables with respect to the question. The model must be designed in tandem with table representation (Challenge 2). The challenge is to find a relevance model appropriate for relations that performs well empirically when trained with synthetically generated questions (Challenge 1).

## 3. Technical Background

In this section, we introduce technical background that we use as part of the new techniques.

### Pretraining and fine-tuning

We use existing, pre-trained machine learning models to assemble the end-to-end system. Models trained on one dataset can be adapted to work well on a different dataset through transfer learning, which leverages the training effort invested in a previously trained model in a new model. In practice, it is often implemented by _fine-tuning_ a pre-trained model (Zhu et al., 2017). During fine-tuning, the pretrained model (sometimes modified by adding an extra layer) is retrained on the new task, with the parameters of the model changing. Transfer learning is heavily relied upon in NLP, where large language models such as BERT (Krishnan et al., 2015) are trained on large text corpora to learn to represent text and then fine-tuned for downstream NLP tasks such as semantic role labeling and question answering (Bordes and McIntran, 2016). We leverage this technique.

### Self-supervised learning

Supervised machine learning relies on the existence of a training dataset with queries \(x\) and labels \(y\) to be predicted. Self-supervised learning refers to any technique (oftentimes application dependent) that automatically creates a collection of \((x,y)\). The idea behind self-supervised approaches is to derive labels, \(y\), directly from the data, thus automatically building training datasets \((x,y)\). For example, BERT (Krishnan et al., 2015) learns language representations by masking words from a sentence and trying to predict them from the remaining words. The application of self-supervised learning is application and context dependent.
The recent interest in this paradigm is a consequence of the cost of collecting training data. We introduce new self-supervised techniques for collecting training data from large collections of tables, thus assembling training datasets for learned discovery systems.

### Bayesian neural networks

Bayesian neural networks are a powerful class of models that attach a probability distribution to the model's parameters, thus paving the way to avoiding overfitting and providing a measure of uncertainty with the answer. We leverage Bayesian neural networks for a different purpose: to learn when a model's performance has reached a good quality and thus avoid feeding it more expensive-to-generate training data. In other words, they are the workhorse behind the technique we use to implement incremental training. A general approach to learn the parameters \(\theta\) of a neural network from a dataset \(D\) is to maximize the likelihood, \(\hat{\theta}=\operatorname{argmax}_{\theta}P(D|\theta)\). This point estimation of the parameters often leads to overfitting and overconfidence in predictions (Krishnan et al., 2015). To address this issue, Bayesian neural networks (Bordes and McIntran, 2014; Krishnan et al., 2015) learn a posterior distribution over \(\theta\) using the Bayes rule,

\[P(\theta|D)=\frac{P(D|\theta)P(\theta)}{P(D)}=\frac{P(D|\theta)P(\theta)}{\int_{\theta}P(D|\theta)P(\theta)d\theta}.\]

At inference time, the prediction is given by taking the expectation over \(P(\theta|D)\),

\[P(y|x)=\mathbb{E}_{P(\theta|D)}\left[P(y|x,\theta)\right].\]

In practice, samples \(\theta_{1},\ldots,\theta_{m}\) are drawn from \(P(\theta|D)\) and the prediction uses Bayesian model averaging (Srivastava et al., 2015),

\[P(y|x)=\frac{1}{m}\sum_{i=1}^{m}P(y|x,\theta_{i}).\]

While it is easy to define a network for \(P(D|\theta)\), such as a multilayer perceptron (MLP), and to specify a prior for \(\theta\), such as an isotropic Gaussian, the posterior distribution \(P(\theta|D)\) is computationally intractable because of the integration over \(\theta\). To address the intractability, variational learning (Bahdan et al., 2015) uses another distribution \(q(\theta|\varphi)\), parameterized by \(\varphi\), to approximate \(P(\theta|D)\). Variational learning optimizes \(\varphi\) to minimize the KL divergence between \(q(\theta|\varphi)\) and \(P(\theta|D)\). We use a Bayesian neural network to train the relevance model given a sequence of datasets \(D_{1},...,D_{m}\).

## 4. Self-supervised Data Discovery

In this section, we present the new self-supervised learned discovery system. First, we present the new self-supervised training method to automatically create synthetic labeled questions without manual intervention (Section 4.1). Next, we introduce a new table representation that boosts the performance of first-stage retrieval (Section 4.2). Then, we present the relevance model that implements the second-stage ranking and uses the automatically created data to complete our contribution (Section 4.3). In Section 4.4, we present Bayesian incremental training to automatically choose the appropriate data size and train the relevance model. We conclude by presenting S2LD in Section 4.5.

### Synthetic Question Generation

A training dataset consists of positive and negative samples. Here, we explain the method to generate synthetic questions; we detail how to derive a training dataset from these in Section 4.3 and Section 4.4, thus completing the solution to **Challenge 1**.
The goal is to obtain a collection of question and table pairs, \(\langle q, T_{i}^{*}\rangle\), where \(q\) can be answered with \(T_{i}^{*}\). To automatically generate natural questions from tables, the idea is to use SQL as an intermediate proxy between question and text. We first generate SQL queries that can be answered with a table, and then we transform those SQL queries into natural language questions.

#### 4.1.1. SQL Structure

We generate SQL queries that match the structure of the factual questions presented in Section 2.1:

* Randomly include an aggregation operator, MAX, MIN, COUNT, SUM, AVG, on numerical columns.
* Include one or more columns with predicates using the operators \((=,>,<)\).
* Do not include joins, as we seek a single table that answers \(q\).

In generating questions we strive to include sufficient _context_ and to make the questions sufficiently _diverse_ to resemble questions posed by real users. In more detail:

**Context-rich queries.** Whenever available, we include the table title with a certain probability (as detailed below) as a special attribute (column) "_About_" to generate questions. This is because the table title often incorporates useful context. Consider the question "Where was he on February 19, 2009?". This cannot be answered because we do not know what "where" refers to and who "he" is. The question, however, makes more sense if a table titled "List of International Presidential trips made by Barack Obama" is available as context. The resulting question does not directly copy the title and instead incorporates tokens that correspond to named entities. We do not always include the title because doing so causes the learning procedure to overemphasize its importance and ignore the table schema. Instead, we introduce the title with a certain probability \(\alpha\). If a SQL query contains \(m\) attributes in the predicates, we include the title with probability \(\alpha=1/(m+1)\): more attributes means more context is already available, so the title is less likely to help.

**Diverse queries.** We want synthetic questions to resemble those posed by users. One way to achieve that is to ensure the questions represent well the attributes contained in the tables. To achieve that, we control how many predicates (referring to different attributes) to include in each question by sampling at most \(m\) attributes. In addition, if the "About" attribute from above is included, a question may refer to as many as \(m+1\) attributes.

#### 4.1.2. Generation Algorithm

The generation algorithm must: i) deal with dirty tables; and ii) sample to avoid generating humongous training datasets when the number of tables in the input is large. We first explain the strategy to deal with these two challenges and then present the algorithm.

**Dealing with dirty data.** Dirty tables contain missing column names and cell values with too long (outlier) text, which may result from incorrect parsing (such as when tables are automatically crawled from the web, as in webtables (Bahdan et al., 2015) and NQ-Tables (Krishnan et al., 2016)). A cell with too long text is a problem when translating from SQL to questions because the translation model we use accepts at most 200-300 words as input. The algorithm identifies too long text using the inter-quartile range (IQR) of the cell length, discarding any cells longer than \(Q3+1.5\,(Q3-Q1)\), where \(Q1\) and \(Q3\) are the first and third quartiles, respectively. IQR is a simple yet robust way of detecting outliers (Krishnan et al., 2016).
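For illustration, a minimal sketch of this filter (the helper name and the use of numpy are our own assumptions):

```python
import numpy as np

def non_outlier_cells(cell_texts):
    """Keep cells whose text length is at most Q3 + 1.5 * (Q3 - Q1)."""
    lengths = np.array([len(t) for t in cell_texts])
    q1, q3 = np.percentile(lengths, [25, 75])
    upper = q3 + 1.5 * (q3 - q1)
    return [t for t in cell_texts if len(t) <= upper]
```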
When generating SQL queries, the algorithm also ignores columns without header names because the translation model requires a relationship (predicate) as input, and the column name is the only option we have.

**Sampling strategy.** The total number of SQL queries one can extract from a table collection is large even for small collections. For example, consider a table collection with 100 tables, where each table has 10 columns and only 12 rows; we sample at most 4 columns per table and use at most one aggregation operator. In this case, the number of unique SQL queries is at least \(100\cdot 12\cdot 10\cdot\left(\sum_{k=0}^{3}\binom{10}{k}\right)=2{,}112{,}000\). This number grows fast with more tables and rows. Training on so many questions requires more hardware resources and time and quickly becomes prohibitively expensive. Instead, the generation algorithm uses a sampling procedure that is driven by the training process, which asks for a collection of \(batch\_size\) SQL queries to be generated. The sampling procedure samples a table, and then further samples columns and a row to generate a SQL query, until \(batch\_size\) is reached; it then returns the batch of SQL queries for question generation.

**The Generation Algorithm.** The algorithm combines 3 independent sampling procedures (Algorithm 1). The first samples tables. The second samples columns given a table, and includes one of the logical operators in the predicates. Then the \(\alpha\) probability is computed (line 6) to decide whether to include the table's title. The third procedure samples rows, given columns of a table. When columns are numerical, the generation algorithm samples one of the aggregation operators. Finally, the algorithm makes an initial pass over the data to: i) determine column types; and ii) compute the IQR of text length, which is used to filter out outliers. The result of the algorithm is a list of SQL queries that are then split into training and validation sets.

```
SQLs = []
while len(SQLs) < batch_size do
    sample a table T_i from the collection T
    sample a query column sel_col from T_i
    sample condition columns cond_cols from T_i
    alpha = 1 / (len(cond_cols) + 1)
    use_title = bernoulli.rvs(alpha, 1)
    if use_title then
        cond_cols <- cond_cols + TITLE
    sample an aggregation operator agg_op if sel_col is numeric
    determine the good rows good_rows from T_i that have
        non-outlier values for cond_cols
    if good_rows is not empty then
        sample a row r from good_rows
        construct a sql using r, sel_col, cond_cols and agg_op
        if the sql not in sql_dict then
            add sql to SQLs and sql_dict
return SQLs
```

**Algorithm 1** \(generate\_sqls(T,batch\_size,cell\_upper\_size,sql\_dict)\)

#### 4.1.3. Translation from SQL to question

To translate SQL queries into natural language questions, we use the T5 [(30)] sequence-to-sequence model. T5 is pretrained on the "Colossal Clean Crawled Corpus" (C4). We fine-tune T5 using WikiSQL [(45)], a dataset with pairs of SQL queries and their corresponding natural language representation that has previously been used for tasks such as natural language question interfaces to relational databases [(45)]. During fine-tuning, we update the weights of T5 without adding new layers. When encoding SQL statements, and to escape SQL keywords, we include these in brackets following a format that signals to the model that they must be escaped, e.g., "where" becomes "[W-H-E-R-E]". A complete list is shown in Table 1.

| **SQL keyword** | **Custom T5 token** | **SQL keyword** | **Custom T5 token** |
|---|---|---|---|
| select | [S-E-L-E-C-T] | where | [W-H-E-R-E] |
| avg | [A-V-G] | = | [E-Q] |
| max | [M-A-X] | > | [G-T] |
| min | [M-I-N] | < | [L-T] |
| count | [C-O-U-N-T] | and | [A-N-D] |
| sum | [S-U-M] | | |

Table 1. SQL keywords for T5 model input
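As an illustration of the escaping step, a minimal sketch that rewrites SQL keywords into the custom T5 tokens of Table 1 (the helper name is our own assumption):

```python
SQL_TO_T5 = {
    "select": "[S-E-L-E-C-T]", "where": "[W-H-E-R-E]",
    "avg": "[A-V-G]", "max": "[M-A-X]", "min": "[M-I-N]",
    "count": "[C-O-U-N-T]", "sum": "[S-U-M]",
    "=": "[E-Q]", ">": "[G-T]", "<": "[L-T]", "and": "[A-N-D]",
}

def escape_sql_for_t5(sql: str) -> str:
    """Replace SQL keywords and operators with bracketed tokens
    before feeding the statement to the fine-tuned T5 model."""
    return " ".join(SQL_TO_T5.get(token.lower(), token) for token in sql.split())

# escape_sql_for_t5("select Name where City = 'Chicago'")
# -> "[S-E-L-E-C-T] Name [W-H-E-R-E] City [E-Q] 'Chicago'"
```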
#### 4.1.4. Assigning \(T_{i}^{*}\)

Large table collections contain table duplicates and near-duplicates. Consequently, a single question may be answered by more than one table. The approach considers _any_ table that can answer a question as a valid solution, and thus it creates multiple question-table pairs. To detect duplicates, the current approach finds tables with the same schema. It is easy to include other duplicate detection techniques, such as those presented in Aurum [(13)].

### Table Representation

The goal is to find a table representation that facilitates matching with questions, thus addressing **Challenge 2 (C2)**. We represent each table as a graph, where the nodes are the subject and object of every (subject, predicate, object) triple in the table. The intuition is that a natural question that can be answered with a table can be answered with a collection of triples from that table. Hence, representing the table via its triples facilitates matching it with relevant questions during first-stage retrieval. We present an example, then the **row-wise complete graph** method, and finally the encoding method.

**Example.** Consider the question "_When is the Albany Park library at Chicago open?_", and the table that answers this question, \(T_{i}^{*}\), shown in Fig. 1. The question contains two triples: _(Albany Park library, at, Chicago)_ and _(Albany Park library, Open, ?)_, where the placeholder "?" corresponds to the answer we seek. At the same time, the table contains three triples:

* _(Chicago Public Libraries, Name, Albany Park)_ denoting the relationship between the table name and the _Name_ column, i.e., Albany Park is one of Chicago's public libraries, which is close to _(Albany Park library, at, Chicago)_ in the question.
* _(Albany Park, Name - Hours of Operation, Mon. & Wed., 10-6; ...)_ denoting the relationship between the _Name_ and _Hours of Operation_ columns, with the two column names connected by a hyphen as the predicate. This relationship is close to _(Albany Park library, Open, ?)_ in the question.
* _(Albany Park, City, Chicago)_ denoting the relation between the Name and City columns, which is close to _(Albany Park library, at, Chicago)_.

Matching triples is the mechanism our approach uses to identify direct matches between question and table.

**Row-Wise Complete Graph (RCG).** One challenge of decomposing a table into its constituent triples is to correctly identify the entity the table represents. With an entity-relationship diagram available, this is straightforward, but we do not have access to those in discovery scenarios with large collections of tables. Instead, we represent the complete graph of subjects and objects in the table, as shown in Fig. 1. Furthermore, the table title is included as a special column of each table, so it is also represented in triples for every row. In such a complete graph, some edges will be spurious, while others will match the question, as shown in the example. We are not concerned with the spurious relationships because first-stage retrieval will filter those out if they do not match any question. We show empirically in the evaluation the benefits of such a representation.
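A minimal sketch of this decomposition, assuming tables are given as lists of dicts (the function and naming are our own):

```python
from itertools import permutations

def rcg_triples(title, columns, rows):
    """Decompose a table into row-wise complete-graph triples: one triple
    per ordered column pair in each row, plus title-to-cell triples
    (the title acts as a special column)."""
    triples = []
    for row in rows:  # row: dict mapping column name -> cell value
        for c_s, c_o in permutations(columns, 2):
            # the predicate joins the two column names with a hyphen
            triples.append((row[c_s], f"{c_s} - {c_o}", row[c_o]))
        for c in columns:
            triples.append((title, c, row[c]))
    return triples

# Example from Fig. 1:
# rcg_triples("Chicago Public Libraries", ["Name", "City"],
#             [{"Name": "Albany Park", "City": "Chicago"}])
```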
Next, we explain how to encode the graph for first-stage retrieval.

**Encoding.** In this last step, the goal is to encode each triple from the graph produced by RCG. To do so, we use dense representations because we found they outperform sparse representations based on TF-IDF and BM25, as we show in the evaluation section. We use vector representations provided by a pre-trained open question-answering model over text ((18)). The model contains a question encoder (which we use to encode questions) and a passage encoder to encode the triples. Before encoding, we represent each triple in a textual format amenable to encoding. Specifically, the textual format is the table title concatenated with the column name and the value for each cell in the triple. The resulting dense vectors are then indexed for first-stage retrieval.

### Relevance Model

We design a relevance model to solve the second-stage ranking problem, thus addressing **Challenge 3 (C3)**. Given a question, \(q\), the first-stage retrieval returns \(K_{u}\) triples arising from \(K_{t}\) tables (there may be multiple triples from the same table). The goal of second-stage ranking is to choose one table \(T_{i}\) among the \(K_{t}\). The general idea is to match \(q\) to triples from \(T_{i}\). To do so, we use a pretrained OpenQA model (18) to match \(q\) to each triple and then construct a matching representation for (\(q\), \(T_{i}\)). The OpenQA model takes a question and a text representation of a triple and outputs a matching representation. We employ two techniques to boost the matching: **triple annotation** and **representation augmentation**.

**Triple annotation.** We use the pretrained OpenQA model (18) to provide input features for each (question, triple) pair. To use the OpenQA model, we need to convert a triple to text, but this transformation is different from the one performed during first-stage retrieval. Here, we seek to annotate the triple to improve its context, which, in turn, helps with ranking. To improve the context, we use special tags to indicate the role of each string element in the triple and in the table. For example, consider the aggregation question: "_Which library in Chicago has the longest hours of operation?_". A triple in Fig. 1 that refers to the column _"Hours of Operation"_ is more likely to answer the question than a triple from another table where the text _"Hours of Operation"_ appears in a cell. We incorporate the context as part of the input fed to the OpenQA model: [T] denotes the table title; [SC] the subject column name \(c_{x}\); [S] the subject \(cell(r,c_{x})\); [OC] the object column name \(c_{y}\); and [O] the object \(cell(r,c_{y})\). When a triple involves two columns, we still include the table title as context. For example, the triple (_Albany Park, Hours of Operation, Mon. & Wed., 10-6..._) is annotated as "[T] Chicago Public Libraries [SC] Name [S] Albany Park [OC] Hours of Operation [O] Mon. & Wed., 10-6". In contrast, the triple (_Chicago Public Libraries, Name, Albany Park_) that only refers to one column is annotated as: "[T] Chicago Public Libraries [SC] [S] [OC] Name [O] Albany Park". That is, "Chicago Public Libraries" works as both title and subject.
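A minimal sketch of this serialization (the helper name is our own assumption):

```python
def annotate_triple(title, subj_col, subj, obj_col, obj):
    """Serialize a triple with role tags before feeding it to the
    OpenQA model; for title-to-cell triples, pass subj_col = subj = ""
    so the title plays both the title and subject roles."""
    return f"[T] {title} [SC] {subj_col} [S] {subj} [OC] {obj_col} [O] {obj}"

# annotate_triple("Chicago Public Libraries", "Name", "Albany Park",
#                 "Hours of Operation", "Mon. & Wed., 10-6")
# -> "[T] Chicago Public Libraries [SC] Name [S] Albany Park
#     [OC] Hours of Operation [O] Mon. & Wed., 10-6"
```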
**Representation augmentation.** We augment the representation by adding redundancy (34), which is shown to reduce distributional shift and improve the matching process. A table is represented by the triples retrieved during first-stage retrieval: \((q,\{p_{1},\cdots,p_{m}\})\). To augment that representation, we additionally represent the pair once per triple, i.e., we add \((q,p_{i})\) for all \(i\in\{1,\ldots,m\}\) to \((q,\{p_{1},\cdots,p_{m}\})\). This redundancy introduces diversity among examples that share the same label.

**Model.** Given a question \(q\) and \(K\) annotated triples \(p_{1},p_{2},\ldots,p_{K}\), we first feed all of them to the pretrained OpenQA model to get a feature vector \(X_{i}\) for each \((q,p_{i})\) pair, as shown in Fig. 2. We do not change the parameters of OpenQA, so \(X_{i}\) is fixed. We then project \(X_{i}\) to a vector space w.r.t. the table to get a vector \(REP_{t}(q,p_{i})\), and apply max-pooling to all \(REP_{t}(q,p_{i})\) vectors in the same table to get the \((q,T_{j})\) representation vector \(REP(q,T_{j})\), where \(T_{j}\) is the table \(p_{i}\) belongs to. To construct multiple relevance representations, each \(X_{i}\) is also projected to another vector space w.r.t. the triple to get a vector \(REP_{u}(q,p_{i})\), which is concatenated with \(REP(q,T_{j})\) to get the relevance representation \(REP(q,p_{i},T_{j})\). Then a linear layer is added to compute the relevance score, and a logistic layer is added to compute the relevance probability and loss. During prediction, tables are ranked by the highest score achieved by \(REP(q,p_{i},T_{j})\) in each table. To see why we use max-pooling, note that each dimension of \(REP(q,T_{j})\) selects the triple \(p_{i}\) most responsive to the question, so \(REP(q,T_{j})\) aggregates the best-matched triples for the question. To make max-pooling more effective, the learned table-perspective representations \(REP_{t}(q,p_{i})\) within a table must be as different from each other as possible. This makes sense because they represent different cells of the table. To this end, we add a Diversity Regularizer to the logistic loss, which minimizes the mean of the pairwise dot products of the \(REP_{t}(q,p_{i})\) vectors within a table.

Figure 1. Example of Row-Wise Complete Graph (RCG)

Figure 2. Model of relevance between a question \(q\) and triples from the same table returned by first-stage retrieval. Triples \(p_{1},\cdots,p_{K}\) come from two tables \(T_{j}\) and \(T_{k}\). The function qa_enc is the encoder in the OpenQA model.

**Collect training triples.** We explained earlier how to generate synthetic questions automatically from a table collection. Here, we describe how we produce a training dataset, which consists of a list of \((q,\{p_{i}\}^{+},\{p_{j}\}^{-})\) tuples, where \(\{p_{i}\}^{+}\) are positive triples from correct tables for \(q\), while \(\{p_{j}\}^{-}\) are negative triples from incorrect tables. First, given a synthetic question \(q\), the system calls the first-stage retrieval component to get a small set of triples. If a triple, \(p\), comes from one of the ground-truth tables of \(q\), \((q,p)\) is labeled as a positive example, and as a negative example otherwise. If all triples are positive or all are negative, the system ignores this question. Collecting training triples this way emulates the test scenario, where triples are always obtained through first-stage retrieval, and makes the training distribution closer to the test distribution.
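For illustration, a minimal sketch of this labeling step (function and argument names are our own assumptions):

```python
def label_training_triples(retrieved, ground_truth_tables):
    """retrieved: list of (triple, source_table) pairs returned by
    first-stage retrieval for a synthetic question.
    Returns (positives, negatives), or None to skip the question."""
    positives = [p for p, t in retrieved if t in ground_truth_tables]
    negatives = [p for p, t in retrieved if t not in ground_truth_tables]
    if not positives or not negatives:  # all positive or all negative: skip
        return None
    return positives, negatives
```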
### Bayesian Incremental Training

Instead of guessing the right training dataset size, we employ an incremental training process that stops when the model performs well, using an early-stop strategy. A simple way of solving the problem is to successively generate training datasets of increasing size, retrain the model, and stop whenever the quality is satisfactory. The shortcoming of this approach is that each training run is independent of the previous one and takes larger and larger inputs, making it more and more time consuming. Instead, we apply a new Bayesian incremental training process (Hendle et al., 2017). The goal is to recursively learn a posterior distribution \(P(\theta|D_{1},...,D_{t})\) over the neural network parameters \(\theta\) given a prior distribution \(P(\theta|D_{1},...,D_{t-1})\) and only one dataset \(D_{t}\), instead of an accumulation of datasets as in the simple baseline explained above. The prior distribution \(P(\theta|D_{1},...,D_{t-1})\) contains the knowledge the relevance model learned from previous datasets, so the model does not have to start training from scratch and uses \(D_{t}\) only. This is the key to the efficiency gain. As discussed in Section 3.3, the posterior distribution \(P(\theta|D_{1},...,D_{t})\) is intractable to compute. Instead, we use another distribution, \(q(\theta|\varphi)\), parameterized by \(\varphi\), to approximate the posterior and then optimize \(\varphi\) using the backpropagation algorithm (Bengio et al., 2017),

\[\hat{\varphi}=\operatorname{argmin}_{\varphi}\sum_{i=1}^{m}\log q\left(\theta^{i}|\varphi\right)-\log P\left(\theta^{i}\right)-\log P\left(D|\theta^{i}\right),\]

where \(\theta^{i}\) is a sample from \(q(\theta|\varphi)\) and \(P\left(D|\theta^{i}\right)\) is determined by the relevance model (see Section 4.3). Next, we discuss the choice of \(q(\theta|\varphi)\) and \(P(\theta)\), and also some techniques to speed up the training.

**The approximate distribution, \(q(\theta|\varphi)\).** In the relevance model, \(\theta=\{\langle\textbf{{W}}_{i},\textbf{{b}}_{i}\rangle\}\), where \(\textbf{{W}}_{i}\) and \(\textbf{{b}}_{i}\) are the weight matrix and the bias vector for layer \(i\). To simplify and speed up computation, we follow the approach in (Bengio et al., 2017) and assume that each weight/bias is an independent Gaussian variable with mean \(\mu\) and standard deviation \(\sigma\), so that \(q(\theta|\varphi)\) can be fully factorized. Concretely, a random noise \(\epsilon\) is sampled from the unit Gaussian \(\mathcal{N}(0,1)\), \(\sigma\) is derived by \(\sigma=\log(1+\exp(\rho))\), and a sampled weight \(w\) in \(\textbf{{W}}_{i}\) is given by \(w=\mu+\sigma\cdot\epsilon=\mu+\log(1+\exp(\rho))\cdot\epsilon\). So the parameter \(\varphi\) is the set of \((\mu_{j},\rho_{j})\). To initialize \(\mu_{j}\), we sample uniformly from \((-1,1)\). To initialize \(\rho_{j}\), we sample uniformly from \((-3,0)\), which makes the initial \(\sigma_{j}\) sit in the range \((0.05,0.7)\). Smaller initial \(\sigma_{j}\) would make training much harder to converge.
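As an illustration, a minimal sketch of such a mean-field Gaussian layer with the reparameterization above (a hypothetical PyTorch implementation, not the authors' code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BayesianLinear(nn.Module):
    """Linear layer whose weights/biases are independent Gaussians,
    parameterized by (mu, rho) with sigma = log(1 + exp(rho))."""
    def __init__(self, d_in, d_out):
        super().__init__()
        # Initialization as described in the text: mu ~ U(-1, 1), rho ~ U(-3, 0)
        self.w_mu = nn.Parameter(torch.empty(d_out, d_in).uniform_(-1, 1))
        self.w_rho = nn.Parameter(torch.empty(d_out, d_in).uniform_(-3, 0))
        self.b_mu = nn.Parameter(torch.empty(d_out).uniform_(-1, 1))
        self.b_rho = nn.Parameter(torch.empty(d_out).uniform_(-3, 0))

    def forward(self, x):
        # softplus(rho) = log(1 + exp(rho)); one noise sample per forward pass
        w = self.w_mu + F.softplus(self.w_rho) * torch.randn_like(self.w_mu)
        b = self.b_mu + F.softplus(self.b_rho) * torch.randn_like(self.b_mu)
        return F.linear(x, w, b)
```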
**The prior \(P(\theta)\).** We only have to specify a prior for the first dataset \(D_{1}\). After training is done with \(D_{1}\), the posterior \(P(\theta|D_{1})\) becomes the prior for the next training round, and so on; both the prior and the posterior are then Gaussian distributions. In particular, the initial prior is a Gaussian with mean 0 and standard deviation 1 for each weight/bias, which is consistent with the initialization of the parameters \((\mu_{j},\rho_{j})\). In addition, in our scenario, an inaccurate initial prior is not an issue because there are many training data points, and as the data increases, the initial prior plays a lesser role.

**Speedup of training and evaluation.** To train a Bayesian neural network, multiple random noise samples (\(\epsilon\)) are often drawn for each batch, and multiple noise samples are also drawn during evaluation to perform Bayesian model averaging. This, however, would make Bayesian neural network training more time consuming than the baseline procedure, even when the baseline trains over a larger training dataset. To address the issue, we use only one random noise sample during training; we find that there is no performance degradation for our model. Second, on validation data, we use the posterior Gaussian mean of the weights/biases for prediction instead of doing Bayesian model averaging, further reducing time. Together, these optimizations improve the training time of the Bayesian neural network, making it substantially faster than the baseline approach, as we will show in the evaluation section.

### System Implementation

**Overview.** Fig. 3 shows the overview of S2LD, which has an offline mode and an online mode. In offline mode, tables are decomposed into triples (step 1), which are then encoded (step 2) and indexed (step 3). Then SQL queries are sampled from the target table collection (step 4), a SQL2Question module translates the SQL queries into synthetic questions (step 5), and the questions are encoded (step 6) and sent to a vector index (step 7). A small set of top triples is returned (step 8) for each synthetic question to train the relevance model (step 9). If more training data is needed, the training-data assembly component requests more questions (step 10). In online mode, a real question is encoded (step t1) and sent to the vector index (step t2). A small set of triples is returned and, together with the question, is sent to the trained relevance model (step t3) to predict the most relevant tables.

**Question and triple encoder.** We use the FiD retriever (Kang et al., 2017), a pre-trained passage retrieval model for question answering over free text, to encode questions and triples into 768-dimensional vectors. We use the version pre-trained on TriviaQA (Kang et al., 2017), a high-quality dataset with 95K question-answer pairs with evidence documents.

**Vector index.** We use Faiss (Kang et al., 2017) as the vector index, as it is scalable and supports similarity search. We use the dot product as the similarity metric because the FiD retriever is pretrained using the dot product. We assume that triple vectors do not fit in the memory of a single node and use the SSD indices provided by Faiss, which return approximate (instead of exact) results.
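For illustration, a minimal sketch of indexing and querying triple vectors with Faiss (an exact flat inner-product index is used here for simplicity; the real setting above uses approximate SSD-backed indices, and the vectors are stand-ins):

```python
import numpy as np
import faiss

d = 768                                    # FiD retriever embedding size
triple_vecs = np.random.rand(10_000, d).astype("float32")   # encoded triples

index = faiss.IndexFlatIP(d)               # inner-product similarity
index.add(triple_vecs)

question_vec = np.random.rand(1, d).astype("float32")       # encoded question
scores, triple_ids = index.search(question_vec, 100)        # top K_u triples
```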
**2-round first-stage retrieval.** We want the top \(K_{u}\) triples returned by the index to contain at least \(K_{t}\) tables to choose from, but because multiple triples may come from the same table, a single query will not ensure this constraint. We use a 2-round strategy. First, \(K_{u}\) triples are retrieved. If the \(K_{u}\) triples contain at least \(K_{t}\) tables, the \(K_{u}\) triples are returned. Otherwise, _max-try-\(K_{u}\)_ triples (including the original \(K_{u}\)) are retrieved and squashed to \(K_{u}\) triples, whether they come from \(K_{t}\) tables or not. The squashing procedure first sorts tables by their highest-ranking triple and then takes the top \(K_{t}\) tables. It starts from the lowest-ranking of the top \(K_{t}\) tables and keeps the top \(m\) (e.g., 3) triples for each table, stopping when \(K_{u}\) triples are left.

**Relevance model.** We use the FiD reader (Friedman et al., 2017) pretrained on TriviaQA to get an initial feature vector for each \((q,p_{i})\). The FiD reader is a generative question answering model, which takes as input a question and a set of passages and outputs a sequence of tokens as the answer conditioned on the input. Specifically, the FiD reader uses an encoder-decoder architecture: each encoder converts a question and passage pair into a vector, and all these vectors are concatenated and sent to a decoder, which then generates answer tokens. The encoder output has many layers. We use the last layer's representations of the question and passage and concatenate them with the representation of the first answer token as the input feature for \((q,p_{i})\).

## 5. Evaluation

In this section, we answer the main research questions of our work:

* **RQ1. Does self-supervised data discovery work?** The main claim of our work is that learned discovery systems need repository-specific training data to perform well, and that our self-supervised approach can collect that data automatically. We measure the extent to which systems suffer when not specifically trained for a target repository, and we measure the performance of our approach. We compare with strong baselines to contextualize the results. This is presented in Section 5.1.
* **RQ2. Does Bayesian incremental training work?** A challenge of producing the training dataset is choosing its size. Too large leads to unnecessary resource consumption without any accuracy benefit, and possibly a degradation. Too small leads to underperforming models. The technique we introduce uses Bayesian neural networks to know when to stop. We evaluate its effectiveness compared to a baseline approach that retrains iteratively using ever larger datasets. This is presented in Section 5.2.
* **RQ3. How does table representation affect accuracy?** An important contributor to end-to-end performance is the representation of tables. We argued in the introduction that many existing approaches do not work well, and we thus introduced a new row-wise graph-based approach. We evaluate these claims in Section 5.3.
* **RQ4. Microbenchmarks.** To provide a full account of the performance of the end-to-end system, we include microbenchmarks that concentrate on two questions: the performance difference between sparse and dense indices during first-stage retrieval, and the runtime performance of the new approach, accounting for both offline and online components. This is presented in Section 5.4.

**Measuring success.** The ultimate goal is to identify the table that contains the answer to an input question. To measure the accuracy of the different baselines given a set of questions, we use the precision-at-K metric (P@K) aggregated over all questions, as in previous work (Friedman et al., 2017). This metric indicates the ratio of questions for which the answer is in the top K tables. We measure P@1 and P@5.
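For concreteness, a minimal sketch of P@K as defined here (names are our own):

```python
def precision_at_k(rankings, ground_truth, k):
    """rankings: per-question ranked lists of table ids;
    ground_truth: per-question sets of tables that answer the question.
    Returns the fraction of questions answered within the top k tables."""
    hits = sum(
        1 for ranked, truth in zip(rankings, ground_truth)
        if truth & set(ranked[:k])
    )
    return hits / len(rankings)
```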
**Datasets.** We use two benchmark datasets that have been extensively used by previous systems and that have been generated from real-world representative queries.

* **NQ-Tables** (Friedman et al., 2017). This dataset contains 210,454 tables extracted from Wikipedia pages and 959 test questions, which are a subset of the Google Natural Questions (Krizhevsky et al., 2017). It is very popular in table retrieval (Friedman et al., 2017; Krizhevsky et al., 2017; Krizhevsky et al., 2017) because the questions were asked by real Google Search users. The tables are relatively dirty, with 23.3% of tables having some column header missing and 63% of tables having cells that contain long chunks of text.
* **FetaQA** (Krizhevsky et al., 2017). This dataset contains 10,330 clean tables from Wikipedia and 2,003 test questions. The questions are generally longer and more complex than those from NQ-Tables.

**System Setup.** We run all experiments on the Chameleon Cloud (Krizhevsky et al., 2017). We use one node with 48 processing threads, 187 GB of RAM, and an NVIDIA RTX 6000 GPU. The OS is Ubuntu 18.04, and the CUDA versions are 11.2 and 10.2 (for the other baselines). The system is implemented in Python v3.7.9 and uses PyTorch 1.6.

### RQ1. Does self-supervised data discovery work?

We measure the P@1 and P@5 accuracy of the self-supervised discovery system on the two benchmark datasets. To interpret the results, we compare them against the following baselines:

**BM25 (Table-Row-Tokens).** Non-learned discovery systems such as Aurum (Krizhevsky et al., 2017) and Auctus (Bucuk et al., 2017) use traditional information retrieval techniques based on BM25 to retrieve tables given keywords. We implement this baseline to represent these discovery solutions. In particular, we index every row of a table, along with the table's title, in Elastic (Bucuk et al., 2017).

**OpenDTR.** (Friedman et al., 2017) is a state-of-the-art learned discovery system. We use 3 variants: OpenDTR-Retrain, OpenDTR-NoRetrain and OpenDTR-Synthetic.

* OpenDTR-Retrain is retrained on a human-annotated training dataset that has the same distribution as the test dataset. For example, when the target test dataset is NQ-Tables, OpenDTR-Retrain is trained on NQ-Tables. This baseline requires collecting training data for each dataset and is thus not desirable. Still, we include it because it allows us to understand the performance difference between (expensive) human-collected training datasets and the synthesized datasets our approach produces.
* OpenDTR-NoRetrain is the system instance pretrained on a human-annotated training dataset with a different distribution from the test dataset. For example, when the test dataset is NQ-Tables, OpenDTR-NoRetrain is the system instance trained on FetaQA. This baseline helps us understand what happens to system performance when a learned discovery system is deployed on a new table collection without retraining. Our claim is that the performance will degrade, which, in turn, justifies a self-supervised approach that: i) yields good performance; while ii) avoiding the human cost of collecting training data.
* Finally, OpenDTR-Synthetic is trained on the synthetic questions generated by the self-supervised approach we introduce in this paper. We use this baseline to understand the contribution of the other techniques we introduce, including the new table representation and the semantic relevance model.

**GTR.** [36] is a state-of-the-art second-stage ranking model. Because it only solves second-stage ranking, we use our first-stage retrieval system to obtain results end-to-end. We implement 3 variants as before: GTR-Retrain, GTR-NoRetrain and GTR-Synthetic.

Figure 3. Overview of S2LD
**Experimental Setup.** We train OpenDTR models using the released official code. In our system, we encode each triple as a 768-dimensional vector using the FiD reader [18] pretrained on the TriviaQA dataset. This results in 41,883,973 vectors on NQ-Tables, and 3,877,603 vectors on FetaQA. We index those vectors using Faiss [20]. We retrieve at least 5 tables during first-stage retrieval and then apply the second-stage ranking. We use the original code to train the GTR models. We train our system, OpenDTR-Synthetic and GTR-Synthetic using the exact same set of training data produced in a self-supervised manner. #### 5.1.1. Main Results We show the P@1 and P@5 in Fig. 4 and Fig. 5, respectively. We highlight several key insights: **Baselines underperform on new table collections when not specifically retrained.** The first observation we make concerns OpenDTR-NoRetrain and GTR-NoRetrain. When these systems are trained with data from one benchmark and evaluated on the other, their performance deteriorates significantly, by 30 and 18 points in the case of OpenDTR on NQ-Tables and FetaQA, respectively, and by 20 and 4 points in the case of GTR-NoRetrain. This demonstrates that without retraining, a learned table discovery system vastly underperforms on new table collections. **S2LD produces synthetic training datasets that achieve high accuracy.** S2LD vastly outperforms non-retrained systems. Compared to the NoRetrain variants, S2LD achieves 30 and 15 more points than OpenDTR-NoRetrain and GTR-NoRetrain in NQ-Tables and 40 and 13 more points in FetaQA. This improvement in accuracy comes at no cost for the human, who does not need to collect data and label it manually because our approach does it for them. This result validates the main contribution of our paper. **S2LD is competitive in performance compared to the retrain baselines without needing to pay the cost of obtaining a new training dataset.** Even when compared to the other baselines retrained on benchmark-specific training datasets, S2LD achieves good performance. In fact, S2LD outperforms OpenDTR-Retrain in both datasets, by 3 and 25 points in NQ-Tables and FetaQA respectively. It also outperforms GTR-Retrain in the FetaQA benchmark by 6 points. It underperforms GTR-Retrain in the NQ-Tables dataset by only 3 points, but again, to emphasize, without paying the cost of collecting data manually. **S2LD always outperforms BM25, but this is not true for the other baselines.** BM25 represents the retrieval performance of non-learned discovery systems. Note that OpenDTR-NoRetrain underperforms BM25 in NQ-Tables by 6 points. This means that, without our approach, and without the ability to collect a new dataset, a non-learned data discovery solution will outperform the more sophisticated OpenDTR. Or rather, that collecting high-quality training data is decidedly important for learned table discovery to perform well on new table collections. GTR does better than OpenDTR when compared to BM25; recall that it uses our new first-stage retrieval. In contrast, S2LD always outperforms the BM25 baseline. **S2LD performance benefits go beyond the synthetic data generation.** To test this hypothesis and evaluate the contributions of the new table representation and relevance model, we use the same synthetically generated dataset produced by our approach to train all baselines and compare their performance; this corresponds to the Synthetic baselines.
Figure 4. P@1 Accuracy on baseline datasets
Figure 5. P@5 Accuracy on baseline datasets
As shown in the figures, when all baselines are trained on the synthetically generated dataset, S2LD outperforms every other baseline by 20, 15, 33, and 15 points (from the left to the right of the figure). These results validate the design of the table representation and retrieval model. **The trends are similar and accentuated when measuring P@5.** The trend when observing P@5 is the same as in P@1, with the total accuracy much higher, as expected. This suggests that when the end user has the bandwidth to manually check a ranking of 5 tables that may answer their question, they are much more likely to identify the answer they are after. #### 5.1.2. In-depth analysis of results Here, we delve deeper into some of the results: **Why do the Retrain baselines perform worse than S2LD despite having access to high-quality manually collected training data?** When analyzing the logs for the FetaQA benchmark, we find that S2LD performs better at matching entities in the question and table at the word level. This is largely because S2LD takes advantage of the OpenQA (Krishnan et al., 2017) model which is pretrained on the TriviaQA (Krishnan et al., 2017) question dataset. The same reason applies to OpenDTR-Retrain. On NQ-Tables (where S2LD underperforms GTR-Retrain), we find S2LD is more likely to be misled by long cells that contain information related to the question, while GTR-Retrain performs better at exploiting the table cell topology structure to find the ground truth table. We use an indicative example from our logs. Given the question _"where is hindu kush mountains located on a map"_, S2LD chooses a table with a long cell text _"The general location of the Himalayas mountain range (this map has the Hindu Kush in the Himalaya, not normally regarded as part of the core Himalayas)"_. The text does indeed contain an answer. In contrast, GTR chooses a table with a cell containing _"Hindu Kush"_ and a cell containing the coordinates, which gives a more precise answer and is what the benchmark was expecting. It is arguable whether one answer is indeed superior to the other for an end-user, but it is certainly the case that the benchmark favors GTR here. Finally, because FetaQA is a relatively clean benchmark and most cells contain simple information, we do not see the same effect in this dataset, i.e., S2LD is less likely to select a long cell and it consequently outperforms GTR-Retrain and OpenDTR-Retrain. Note that in any case, the performance of S2LD, which is close to the other baselines, is achieved without any human-collected dataset. **Why do the Synthetic baselines perform so badly?** GTR models tables as a graph where each cell is a node and it is connected only to neighboring cells. This representation is unnatural for relational data, where the order of columns does not matter. Such a representation makes the model brittle to situations where the distribution of the training dataset and the test set differ, as is the case when comparing the synthetically generated training dataset produced by our approach and the test set of the benchmarks. Concretely, it is often the case that the subject and object in a relation are not neighbors and thus do not make it into the representation used by GTR. From our analysis we observe that GTR is more likely to overfit on the synthetic data, thus explaining the deterioration of quality. In contrast, our table representation does not suffer from the aforementioned problems and thus prevents our relevance model from overfitting.
A similar phenomenon explains the results for OpenDTR-Synthetic. Although this baseline does not use the same table representation as GTR, it encodes question and table _together_, using a pretrained TableQA (Krishnan et al., 2017) model which seems brittle to different formats of the training dataset. ### RQ2. Does Bayesian incremental training work? Since the self-supervised approach can generate huge training datasets that become prohibitively expensive to train, we introduce an incremental training procedure to determine the appropriate training dataset size without human intervention. We measure the P@1(5) accuracy and training cost (the total training time and epochs used) of the new Bayesian incremental training on the two datasets and compare it against the simple approach that sequentially grows the input training dataset and trains the model from scratch until it performs well (as described in 4.4). **Experimental Setup.** We generate 10 partial training datasets \(D_{1},\cdots,D_{10}\) for each benchmark, each \(D_{i}\) with 1,000 different questions with corresponding triples. Given a fixed \(\{D_{1},\cdots,D_{m}\}\), the simple approach trains the relevance model using maximum likelihood estimation and an early-stop strategy with a patience of 1 epoch (each epoch is a full pass of \(\{D_{1},\cdots,D_{m}\}\)). This means that if a model checkpoint does not improve the P@1 accuracy in two consecutive epochs, the training process stops. The dataset patience for incremental training is also 1, i.e., if neither \(\{D_{1},\cdots,D_{m},D_{m+1}\}\) nor \(\{D_{1},\cdots,D_{m},D_{m+2}\}\) improves P@1 over \(\{D_{1},\cdots,D_{m}\}\), the whole incremental training process stops. The same epoch-patience and dataset-patience settings are applied to the Bayesian approach. By "Prior\((D_{1},\cdots,D_{j}),D_{m}\)" we indicate that \((D_{1},\cdots,D_{j})\) have been used for training previously and thus they constitute the prior for training on \(D_{m}\). During testing, we sample 6 versions of the model parameters from the posterior distribution, also use the posterior mean parameters, and then take the average of the predictions of the 7 versions of parameters as output. **Results.** Tables 2 and 3 show the results on NQ-Tables and FetaQA. We summarize the following insights: \(\bullet\) Our new Bayesian incremental approach takes much less training time and achieves better test accuracy than the simple approach.
\begin{table} \begin{tabular}{|l|l|l|l|l|l|l|l|} \hline \multirow{2}{*}{**Approach**} & \multirow{2}{*}{**Time Step**} & \multirow{2}{*}{**Training Datasets**} & \multicolumn{2}{c|}{**Eval**} & \multicolumn{2}{c|}{**Test**} & **Training Cost** \\ & & & **P@1** & **P@5** & **P@1** & **P@5** & **s (epochs)** \\ \hline Simple & t1 & D1 & 72.65 & 82.45 & 42.02 & 70.90 & 6,134 (7) \\ & t2 & D1, D2 & 73.70 & 82.75 & 42.65 & 79.51 & 9,285 (7) \\ & t3 & D1, D2, D3 & 74.10 & 82.45 & 41.93 & 70.29 & 12,168 (7) \\ & t4 & **D1, D2, D3, D4** & **74.35** & **83.05** & **40.15** & **69.76** & 21,659 (10) \\ & t5 & D1, D2, D3, D4, D5 & 73.00 & 82.50 & 42.54 & 70.91 & 7,825 (7) \\ & t6 & D1, D2, D3, D4, D6 & 72.65 & 82.80 & 42.54 & 70.91 & 7,737 (3) \\ & & Sum & & & & & **64,790** \\ \hline Bayesian & t1 & D1 & 66.30 & 81.60 & 41.71 & 70.70 & 2,657 (3) \\ & t2 & **Prior (D1), D2** & **69.70** & **82.35** & **43.07** & **71.32** & 7,218 (8) \\ & t3 & Prior (D1, D2), D3 & 69.65 & 82.50 & 41.60 & 71.32 & 3,553 (4) \\ & t4 & Prior (D1, D2), D4 & 69.40 & 82.25 & 40.83 & 70.70 & 2,679 (3) \\ & & Sum & & & & & **16,108** \\ \hline \end{tabular} \end{table} Table 2. Test accuracy and training cost of Bayesian/Simple incremental training on NQ-Tables
On NQ-Tables, the simple approach takes 64,790 seconds for the whole incremental training and chooses \(\{D1,D2,D3,D4\}\) on the evaluation dataset. In contrast, the Bayesian incremental approach takes only 16,108 seconds (and chooses \(\{D1,D2\}\)), a 4x reduction in runtime, or equivalently 18 hours vs. only 4.5 hours to train the system for a new table collection. This reduction in runtime makes it more feasible to keep the system up to date as the underlying data tables naturally change. On FetaQA, the simple approach takes 25,028 seconds, while the Bayesian incremental approach takes only 15,554 seconds, about 62% of the simple approach's time, with close P@1 (82.73 vs 82.53). \(\bullet\) More data does not necessarily lead to better accuracy. Since the training data are generated automatically, there is redundancy and noise. As shown in Table 2, \(\{D1,D2,D3\}\) performs worse than \(\{D1,D2\}\) and even \(\{D1\}\). Simply adding more data to the simple approach does not suffice, even though it slows down the entire process. This further validates the Bayesian incremental approach. ### RQ3. How does table representation affect accuracy? In this section, we demonstrate the impact of RCG, the table representation strategy we implement, by comparing its performance against other state-of-the-art alternatives: * **Sliding Token**. Many existing approaches concatenate the table cells from left to right and include tags to indicate schema information (Han et al., 2017; Wang et al., 2018; Wang et al., 2019). To use this approach for learned data discovery, the resulting tokens become the input of OpenQA, which we use to obtain a vector embedding. Because the input size of OpenQA is bounded (Wang et al., 2018), we must split the concatenated cells. We follow the approach in (Wang et al., 2018) and produce a sliding window over the cells with size 150 and a stride of size 1. * **RCG&generated text**. RCG produces (subject, predicate, object) triples.
We ask whether a textual representation of the triples performs better than the purely structured triples: the intuition is that text would be a closer representation to the input questions than triples. In this baseline, we transform triples into text by fine-tuning the T5 model (Wang et al., 2018) using the WebNLG (Wang et al., 2018) dataset. Then, we feed triples to the fine-tuned model and obtain text as output, which is itself indexed for first-stage retrieval. **Experimental Setup.** We apply the two baseline strategies to the target table collection; this results in a vector index, first-stage retrieval results, and a trained relevance model for each strategy. We show the performance of both the first-stage dense index alone and the end-to-end system S2LD, i.e., index plus relevance model. **Results.** Fig. 6 shows the results. RCG outperforms the other baselines. The plot on the top-left shows P@Max (performance with an oracle second-stage ranking), thus isolating the effect on first-stage retrieval. While the performance of all baselines is similar on the simpler FetaQA (cleaner dataset), RCG vastly outperforms the Sliding Token baseline on NQ-Tables. This is because tables in the NQ-Tables benchmark have more columns than FetaQA: 25% of ground truth tables have more than 18 columns, with an average of 170 tokens per row. In contrast, in FetaQA, the 75th percentile is 6 columns and only 17 tokens per row on average. The RCG approach will relate cells no matter how far apart they are in the table, as per the construction of the complete graph. In contrast, the common table representation methods from the state of the art lose context when tables are wide, as evidenced by the Sliding Token performance on NQ-Tables. Finally, the performance gains during first-stage retrieval carry on to the end-to-end system; the plot in the middle-left shows P@1 and the one on the bottom-left shows P@5. Finally, generating text from triples (the RCG&generated text baseline) makes the system much slower (by requiring an inference from the T5 model per triple) without yielding significant accuracy gains. On further analysis, we found that the generated text did not really help with retrieving better content and that in some cases it was erroneous, thus hurting the end-to-end performance. ### RQ4. Microbenchmarks We consider the impact of different indexing techniques on the first-stage retrieval component (Section 5.4.1) and we demonstrate that the system answers questions at interactive times in Section 5.4.2. #### 5.4.1. First-Stage Retrieval Indexing The first-stage retrieval index determines the input of the relevance model (Challenge 3), affects the training triples (Challenge 1) and also determines the table representation (Challenge 2). Here, we measure the effect of choosing different indexing techniques: * **First Stage (Sparse Index)** We index and retrieve triples using the BM25 algorithm (on Elasticsearch) that measures the similarity of question and triple using TF-IDF features (Kumar et al., 2019). * **First Stage (Dense Index)** This is our first-stage retrieval implementation, which uses the FiD encoder and a Faiss index. * **S2LD (Sparse Index)** We index triples using Elasticsearch, and then construct training data from the sparse index using the same synthetic questions. We then retrain the relevance model. **Results.** Fig. 6 shows the results. The Dense Index outperforms the two sparse-index baselines.
The performance gains originate during first-stage retrieval, especially for NQ-Tables with an 18-point difference, which results in 13-point and 6-point gains in P@5 and P@1, respectively. The relevance model performs well using the dense index. On FetaQA, the first-stage gap in P@Max is 0.95 points, but the P@1 gap increases to 1.35 points, i.e., the second-stage relevance model benefits more from the dense index. The sparse index will retrieve triples based on their overlap with the input question. Because we produce questions from the tables and we use the index to construct training data, this indexing method biases the training dataset towards triples that overlap the most with the input question. The dense index retrieves triples based on semantic similarity that goes beyond the purely syntactic level, resulting in higher performance.
\begin{table} \begin{tabular}{|l|l|l|l|l|l|l|l|} \hline \multirow{2}{*}{**Approach**} & \multirow{2}{*}{**Time Step**} & \multirow{2}{*}{**Training Datasets**} & \multicolumn{2}{c|}{**Eval**} & \multicolumn{2}{c|}{**Test**} & **Training Cost** \\ & & & **P@1** & **P@5** & **P@1** & **P@5** & **s (epochs)** \\ \hline Simple & t1 & D1 & 76.70 & 80.52 & 82.88 & 93.06 & 2,519 (4) \\ & t2 & D1, D2 & 77.05 & 80.20 & 83.08 & 79.31 & 1.74 (89) \\ & t3 & **D1, D2, D3** & **77.15** & **80.25** & **82.53** & **93.01** & 6,889 (6) \\ & t4 & D1, D2, D3, D4 & 76.35 & 79.90 & 82.62 & 92.96 & 4.25 (3) \\ & t5 & D1, D2, D3, D5 & 76.80 & 80.05 & 83.33 & 93.06 & 4.24 (20) \\ & & Sum & & & & & **25,028** \\ \hline Bayesian & t1 & D1 & 75.85 & 80.00 & 82.43 & 92.66 & 3,225 (5) \\ & t2 & **Prior (D1), D2** & 76.30 & 79.80 & 82.73 & **92.81** & 2.52 (4) \\ & t3 & Prior (D1, D2), D3 & 75.80 & 79.80 & 82.93 & 92.76 & 5,104 (6) \\ & t4 & Prior (D1, D2), D4 & 75.65 & 79.65 & 82.83 & 92.71 & 4,601 (7) \\ & & Sum & & & & & **15,554** \\ \hline \end{tabular} \end{table} Table 3. Accuracy and training cost of Bayesian/Simple incremental training on FetaQA
#### 5.4.2. Pipeline Time Performance We evaluate the runtime performance of different systems on the NQ-Tables dataset. We consider the performance of: \(\bullet\) Encoding and indexing the table repository. This step converts tables into numeric vectors and then stores them in an on-disk index. \(\bullet\) Training data collection. This step generates the training data. \(\bullet\) Training. This step trains the system (model). \(\bullet\) Inference runtime. In this step, we measure the throughput in questions per second. We compare the runtime of OpenDTR, GTR, and S2LD. **Results.** Table 4 shows the results, with one row for each of the four categories explained above. * **Encoding and Indexing Table Repository** There are 17k tables in NQ-Tables.
S2LD generates many more vectors (41 M vs 0.17 M) than OpenDTR because of the row-wise approach, while OpenDTR has only one vector per table. In addition, S2LD supports an on-disk index to scale to larger table collections, while OpenDTR uses an in-memory index. Consequently, S2LD takes more time (13.8 h vs 0.4 h) to encode and index the whole table repository. * **Training data collection** S2LD automatically collects training data 4 times, as in Table 2 of Section 5.2, and it takes 2.4 h, while both OpenDTR and GTR rely on manual work to collect data. The NQ-Tables dataset contains 9,534 train questions and 1,067 evaluation questions, and there is heavy manual work in creating them. * **Training runtime** All systems were trained on the same synthetic dataset generated by our approach. Training time is dominated in all cases by the model training. Consequently, all systems take a similar amount of time to train. * **Inference throughput** The end-to-end throughput of S2LD is lower (2.6 s/q vs 0.02 s/q) than OpenDTR because S2LD assumes the table vectors do not fit in memory and uses an on-disk index, while OpenDTR assumes the table vectors fit in memory. With sufficient memory, we expect the throughput of both systems to be similar. The main takeaway message is that despite a slower indexing time, which we think can be further optimized, the main gain of S2LD is its ability to automatically generate training datasets that lead to high-performance models, as demonstrated in this section.
\begin{table} \begin{tabular}{|l|c|c|c|} \hline \multirow{2}{*}{**Pipeline**} & \multicolumn{3}{c|}{**Time**} \\ \cline{2-4} & **S2LD** & **OpenDTR** & **GTR** \\ \hline Encoding \& Indexing Table Repository & 13.8 h (41 M vectors) & 0.4 h (0.17 M vectors) & - \\ \hline Training data collection & 2.4 h & Manual & Manual \\ \hline Training & 4.5 h & 4.6 h & 4.3 h \\ \hline Prediction (end-to-end) & 2.6 s/q (Index on disk) & 0.02 s/q (Index in memory) & - \\ \hline \end{tabular} \end{table} Table 4. Pipeline time of different systems on NQ-Tables
Figure 6. Effect of table representation and index type
## 6. Conclusions We argued that learned discovery systems are attractive to users, who can pose natural language questions, but that their reliance on manually collected training datasets hampers their deployment. In response, we introduce a new self-supervised approach to assemble training datasets automatically. Unlike existing systems, the new approach permits deploying the system without human effort. We introduce a new table representation method and associated relevance model that makes the system outperform existing approaches. We show that such an approach leads to high quality models that outperform the state-of-the-art baselines. We incorporate the new approach into an end-to-end system, S2LD, which we use to perform an in-depth evaluation of learned table discovery. All in all, this work contributes a useful technique to the growing area of learned data discovery systems.
2307.07488
Useful Circuit Analogies to Model THz Field Effect Transistors
The electron fluid model in plasmonic field effect transistor (FET) operation is related to the behavior of a radio-frequency (RF) cavity. This new understanding led to finding the relationships between physical device parameters and equivalent circuit components in traditional parallel resistor, inductor, and capacitor (RLC) and transmission models for cavity structures. Verification of these models is performed using PSpice to simulate the frequency dependent voltage output and compare with analytical equations for the drain potential as a function of frequency.
Adam Gleichman, Kindred Griffis, Sergey V. Baryshev
2023-07-14T17:19:31Z
http://arxiv.org/abs/2307.07488v1
# Useful Circuit Analogies to Model THz Field Effect Transistors ###### Abstract The electron fluid model in plasmonic field effect transistor (FET) operation is related to the behavior of a radio-frequency (RF) cavity. This new understanding led to finding the relationships between physical device parameters and equivalent circuit components in traditional parallel resistor, inductor, and capacitor (RLC) and transmission models for cavity structures. Verification of these models is performed using PSpice to simulate the frequency dependent voltage output and compare with analytical equations for the drain potential as a function of frequency. ## I Introduction Moore's law predicted that every 2 years the transistor count would double for a constant amount of area, power consumption, and cost [12]. This progression of higher availability of transistors in integrated circuits was fostered by discoveries like lithography. These improvements to the number of transistors at lower cost led to massive windfalls for growth in automated control systems, data processing, portable communication systems, and affordability for broad consumer use [2]. Moore's law is said to end at some point in the future, with projections currently around 2025 [2], although when Moore first made his prophecy it was initially thought to last for only 20 years [12]. Many advancements did delay the slowing down of transistor development, but ultimately there are limitations of material properties and the physics of energy transportation [2]. This situation leads to the question of "what is the next technological advancement for computing devices?" Microwave devices could possibly be the next advancement in the race for better transistors by using the physics of the device to create better performance. Dyakonov and Shur proposed that field-effect transistors (FETs) can have a new operation mode that utilizes a relationship between radiation at the gate and steady current across the channel that creates Langmuir waves in the channel [3; 4]. They describe an instability under the condition where a short channel FET operates with a large number of electrons travelling in the channel, which leads to many electron-on-electron collisions [3]. This system is modelled as a 2D gas, which exhibits an instability for electrons that are slower than their own saturation velocity [3; 5]. A high concentration of electrons in the channel leads to many electron-on-electron collisions, and fluid choking will occur at the boundary where the channel meets the drain [3; 5]. The electrons accelerate across the channel in subsonic flow until their velocity reaches the speed of sound at the boundary of the drain [5]. This constant electron velocity present at the drain means that, from an outside observer's perspective, the drain has constant DC current [5]. The high concentration of electrons slows their group velocity to be less than their saturation velocity, but the Langmuir waves' phase velocity can exceed the electrons' group velocity [3]. Using the phase velocity of Langmuir waves instead of the group velocity of the electrons leads to terahertz level radiation. This output terahertz frequency at the gate can be used in place of the traditional switching speed for FETs. This instability relationship is dependent on the volume of the channel structure [3]. The DC bias potential controls the depth of the channel, which controls the excitation frequency of the channel to create current across the channel [3; 4].
Creating models of plasmonic operating devices is difficult. Previous models [6; 7; 8] were designed with an exceptional level of complexity or were dependent on empirical data [9]. In Ref. [7] a so-called MOSFET segmentation concept was introduced. Because each segment was on a nm to sub-nm length scale, the model had to be solved inside the EKV (Enz, Krummenacher, Vittoz) model framework, thereby making the segmentation concept dependent on dozens of free parameters (previously optimized only for classical operation of short-channel MOSFETs) and hence making it difficult for practical applications. Conceptually simple and easy-to-interpret THz plasma FET models, if created, could play a vital role in allowing the development of architectures and ICs for future microelectronics. In this paper, we show that a plasmonic FET, behaving as a resonating device caused by standing waves that arise from the boundary conditions of the source and drain contacts, as illustrated in Figure 1, can be treated as a classical radiofrequency (RF) cavity.
Figure 1: A Plasmonic FET operating as a detector.
A small-signal parallel RLC circuit model for solving the first resonant mode and a transmission line model capturing higher order modes are introduced and shown to have excellent agreement with the analytical model for Si THz FET operation. Finally, simple symbolic PSpice codes are developed and presented in the Appendices at the end of this paper. All PSpice parameters are physical and can be directly calculated using the geometry and material properties of the channel and gate. ## II Theories and equations The signal between the gate and the source is a combination of an AC signal and a DC bias potential, shown in Figure 1. The depletion channel capacitance is therefore [11] \[C_{d}=\frac{LW\epsilon_{0}\epsilon_{S}}{d_{d}}. \tag{10}\] The total capacitance, \(C_{tot}\), for the channel is solved by taking the parallel combination of the capacitance of the depletion region, \(C_{d}\), with the capacitance of the insulator from the gate, \(C_{i}\), [11] \[C_{tot}=\frac{C_{i}C_{d}}{C_{i}+C_{d}}. \tag{11}\] Calculation of the Drude inductance is performed by solving for the electron sheet density in terms of the ideality constant of the FET, \(\eta\) [11] \[\eta=1+C_{d}/C_{i}. \tag{12}\] The expression for the initial sheet electron density is given by [11] \[n_{0}=\frac{\eta V_{TH}C_{OX}}{2q}, \tag{13}\] where the capacitance of the oxide per unit area is \(C_{OX}=\frac{\epsilon_{I}\epsilon_{0}}{t_{ox}}\). The concentration of electrons in the channel per unit area once the thermal effects are accounted for is [11] \[n_{s}=n_{0}\ln{[1+0.5\exp{(\frac{V_{DC}-V_{T}}{\eta V_{TH}})}]}. \tag{14}\] This concentration of electrons per unit area in the channel will be used to solve for the Drude inductance, \(L_{drude}\), because the concentration affects the amount of electron-on-electron collisions that are present in the system. The Drude inductance equation is as follows [12] \[L_{drude}=\frac{L\cdot m_{eff}\cdot m_{0}}{q^{2}\alpha^{2}n_{s}W}. \tag{15}\] From here, the resistance is calculated in order to model the leakage between the gate and the drain, using the quality factor relation between the fundamental mode, the total capacitance of the channel, and the quality factor: \(R=\frac{Q}{\omega_{0}C_{tot}}\).
This is important to make sure that the quality factor of the new models will still match the results in [6], because the bandwidth of the system is dependent on the quality factor via \(Q=BW^{-1}\). The transconductance equation for the model is [13] \[g_{m}=\frac{W\mu_{n}}{L}C_{OX}(V_{DC}-V_{T}). \tag{16}\] ## III Model validation To validate that the lumped and transmission line models match the fluid plasmonic model, we considered a silicon channel (\(\epsilon_{S}=11.9\) from [14]) with a 3D concentration of \(N_{b}=10\times 10^{17}\)cm\({}^{-3}\) and an intrinsic concentration of silicon of \(n_{i}=10^{10}\)cm\({}^{-3}\) from [14], with a silicon oxide insulator (\(\epsilon_{I}=3.9\) from [14]) that has a thickness of \(t_{ox}=4.315\) nm. The mobility of the substrate is \(\mu=0.1\)\(\frac{\text{m}^{2}}{\text{V.s}}\) and the effective mass used is 0.19 (from [14]). Dimensions of the device are a length of \(L=25\) nm and a width of \(W=5\)\(\mu\)m. A DC potential of \(V_{DC}=0.6\) V is applied between the gate and source, with a threshold voltage of \(V_{T}=0.28\) V, and the applied AC signal amplitude is \(V_{AC}=100\) mV. All calculations were made with an assumption of room temperature operation at \(T=300\) K. The RLC solution to this system was a transconductance of \(g_{m}=12.7\) mS, \(L_{drude}=8.352\) pH, \(C_{tot}=9.864659\times 10^{-17}\) F, and a resistance of 1800 \(\Omega\). The corresponding PSpice file for the RLC simulation is included in Appendix 1. The parallel RLC model is simulated in PSpice, which exports the data of the drain-source potential (\(\Delta V\)) as a function of frequency (with respect to \(V_{AC}\)) into a comma-separated values format for MATLAB to import. MATLAB is used to compare the PSpice models with the fluid model, as shown in Figures 3, 4, and 6. The parallel RLC model is successful in replicating the fundamental resonant mode, but fails to create the higher order modes from the fluid model. The lack of the higher order modes motivates the use of a lossless transmission line component in PSpice to generate those higher order modes; Figure 5 shows the circuit model used (the file is in Appendix 2). A lossless half wave transmission line is shown to replicate the fundamental mode along with the higher order modes [15]. Usually, an open circuited half wave transmission line is equivalent to a passive parallel RLC model; however, this transmission line model requires the same resistor connected as a load so that the gain from the transconductance is still accounted for in the model [15].
Figure 3: The lumped RLC model in appendix 1 (solid line) compared to the analytical model (dotted line).
In Figure 6, the result of the transmission line model (file is in Appendix 2) has good agreement with the fluid model by having a consistent quality factor, together with the higher order modes that are not present in the parallel RLC model in Figures 3 and 4. This transmission line model increasingly shifts the center frequency of higher order modes; this shift is also dependent on the transconductance of the VCCS. ## IV Conclusion The results of this paper show that cavity-inspired circuit models demonstrate promising results for small signal / ultra high frequency design. Our results illustrate that the cavity behavior in plasmonic FET operation leads to simple and effective circuit models for small signal evaluations with only five parameters solved from physical dimensions.
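Before the appendices, the following is a minimal numerical sketch (Python) of how the circuit parameters follow from the physical values of Section III via Eqs. (10)-(16). The depletion depth \(d_{d}\), the factor \(\alpha\), and the quality factor \(Q\) are not stated in the excerpt above, so the values used here are assumptions (chosen to land near the appendix values); the printed numbers are indicative only, not the paper's exact results.

```python
import math

# Physical constants
eps0 = 8.854e-12   # vacuum permittivity, F/m
q = 1.602e-19      # electron charge, C
m0 = 9.109e-31     # electron rest mass, kg

# Device values from Section III
eps_S, eps_I = 11.9, 3.9      # Si and SiO2 relative permittivities
t_ox = 4.315e-9               # oxide thickness, m
L, W = 25e-9, 5e-6            # channel length and width, m
mu_n = 0.1                    # mobility, m^2/(V s)
m_eff = 0.19                  # effective mass ratio
V_DC, V_T = 0.6, 0.28         # gate bias and threshold, V (V_DC - V_T = 0.32 V)
V_th = 0.0259                 # thermal voltage kT/q at 300 K, V

# ASSUMED inputs (not given in this excerpt)
d_d = 120e-9                  # depletion depth; chosen so C_tot ~ 9.86e-17 F
alpha = 1.0                   # plasma factor in Eq. (15)
Q = 6.0                       # quality factor; Q ~ 6 is consistent with R ~ 1.8 kOhm

C_ox = eps_I * eps0 / t_ox            # oxide capacitance per unit area
C_i = C_ox * L * W                    # gate insulator capacitance
C_d = L * W * eps0 * eps_S / d_d      # depletion capacitance, Eq. (10)
C_tot = C_i * C_d / (C_i + C_d)       # total channel capacitance, Eq. (11)
eta = 1 + C_d / C_i                   # ideality constant, Eq. (12)
n0 = eta * V_th * C_ox / (2 * q)      # initial sheet density, Eq. (13)
n_s = n0 * math.log(1 + 0.5 * math.exp((V_DC - V_T) / (eta * V_th)))  # Eq. (14)
L_drude = L * m_eff * m0 / (q**2 * alpha**2 * n_s * W)                # Eq. (15)
g_m = (W * mu_n / L) * C_ox * (V_DC - V_T)                            # Eq. (16)

omega0 = 1 / math.sqrt(L_drude * C_tot)   # fundamental resonance of the LC tank
R = Q / (omega0 * C_tot)                  # leakage resistance

print(f"C_tot = {C_tot:.3e} F, L_drude = {L_drude:.3e} H")
print(f"g_m = {g_m*1e3:.1f} mS, f0 = {omega0/(2*math.pi)/1e12:.2f} THz, R = {R:.0f} Ohm")
```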
## V Appendix 1: Parallel RLC .cir File
```
PARALLEL RLC
* Vin = V_AC^2/(4*(V_DC - V_T)), V_AC = 0.01 V, V_DC - V_T = 0.32 V
Vin 1 0 AC 7.8125e-5
* This is the voltage dependent current source
G1 3 0 1 0 12.7m
* This is the Drude inductance
L1 3 0 8.8352e-12
* This is the resistance
R1 3 0 1800
* This is the total capacitance of the channel
C1 3 0 9.86465905084e-17
* This creates a sweep of 5000 points from 1 to 30 THz
.AC LIN 5000 1T 30T
.PROBE
.OP
.END
```
Author's note: some of the gain scaling was multiplied into the AC input potential instead of the transconductance term, because solving a number and adjusting the input voltage was easier in PSpice than trying to put transconductance as a multiple of the AC voltage divided by the DC potential. This is also done in the transmission line file. The corresponding figure to this model layout is Figure 2.
Figure 4: Close up of the fundamental mode of the lumped RLC in appendix 1 (solid line) and the analytical model (dotted line).
Figure 5: Transmission Line Model
Figure 6: Comparison of the lumped RLC in appendix 1 (dash line), transmission line in appendix 2 (solid line), and analytical model (dotted line).
## VI Appendix 2: Transmission line .cir File
```
transmission line model
* Vin = V_AC^2/(4*(V_DC - V_T)), V_AC = 0.01 V, V_DC - V_T = 0.32 V
Vin 1 0 AC 7.8125e-5
* This is the voltage dependent current source
G1 2 0 1 0 0.012923082392042
* This is the transmission line element. Impedance and center frequency
* are solved from the lumped RLC components.
T1 2 0 4 0 Z0 = 290.974 f = 5.544774THZ
* This is the load resistance
RLR 4 0 1800
* This creates a sweep of 5000 points from 1 to 30 THz
.AC LIN 5000 1T 30T
.PROBE
.OP
.END
```
Author's note: some of the gain scaling was multiplied into the AC input potential instead of the transconductance term, because solving a number and adjusting the input voltage was easier in PSpice than trying to put transconductance as a multiple of the AC voltage divided by the DC potential. This is also done in the parallel RLC file. The corresponding figure to this model layout is Figure 5.
2310.15443
Special Lagrangian pair of pants
We construct special Lagrangian pair of pants in general dimensions, inside the cotangent bundle of $T^n$ with the Euclidean structure.
Yang Li
2023-10-24T01:30:59Z
http://arxiv.org/abs/2310.15443v1
# Special Lagrangian pair of pants ###### Abstract We construct special Lagrangian pair of pants in general dimensions, inside the cotangent bundle of \(T^{n}\) with the Euclidean structure. ## 1 Introduction A familiar picture on Riemann surfaces is the _pair of pants decomposition_. Each pair of pants is diffeomorphic to the thrice punctured \(S^{2}\), and a general Riemann surface can be topologically built from gluing pairs of pants along cylinders. This perspective is fruitful in many applications, such as the tropical degeneration of holomorphic curves [10], and the cuspidal degeneration of hyperbolic metrics. The long term goal of our project is to produce a large supply of _special Lagrangian submanifolds_ inside complex \(n\)-dimensional Calabi-Yau manifolds \((X,\omega,\Omega)\) with SYZ fibrations, by gluing together some (hitherto unknown) higher dimensional analogues of pair of pants, via some combinatorial pattern prescribed by a tropical hypersurface. Here special Lagrangians of phase \(\hat{\theta}\) are real \(n\)-dimensional submanifolds with \[\omega|_{\mathbb{L}}=0,\quad\mathrm{Im}(e^{-i\hat{\theta}}\Omega)|_{\mathbb{L}}=0.\] A fundamental observation of Harvey-Lawson [3] is that these are minimal surfaces. The basic challenge of the field is that the non-perturbative construction techniques are quite limited, and the simplest case of constructing new special Lagrangians inside the Euclidean \(T^{*}T^{n}\) already requires new inputs. In the case \(n=2\), the local model for special Lagrangian pair of pants can be obtained through the hyperkahler rotation trick (_cf._ (6) below). The main output of this paper is to construct the higher dimensional analogue for the special Lagrangian pair of pants, building upon a combination of PDE techniques developed by Caffarelli-Nirenberg-Spruck and many subsequent authors [2][4][13][14], and some minimal surface theory. It is worth pointing out that Matessi [6] has considered the 'soft' version of our problem involving Lagrangians, and we shall give an impressionistic review of some of his main ideas. ### Matessi's work on Lagrangian pair of pants Matessi is motivated by the mirror symmetry dual construction to Mikhalkin's tropical-to-complex correspondence [9]. In the B-side story, one starts with a tropical hypersurface \(\Gamma\subset\mathbb{R}^{n}\) with some smoothness assumptions, and aims to produce a \(1\)-parameter family of algebraic hypersurfaces \(Y_{t}\) in \((\mathbb{C}^{*})^{n}\), whose images under the projection \[\operatorname{Log}_{t}:(\mathbb{C}^{*})^{n}\to\mathbb{R}^{n},\quad(z_{1},\ldots z_{n})\to\frac{1}{\log t}(\log|z_{1}|,\ldots,\log|z_{n}|)\] (known as the 'amoeba') converge to \(\Gamma\) in the Hausdorff distance. In a more refined description, there is a piecewise linear lift \(\hat{\Gamma}\subset(\mathbb{C}^{*})^{n}\) (known as the 'phase tropical hypersurface'), which has a singular fibration map to \(\Gamma\subset\mathbb{R}^{n}\) with fibres being the closures of 'coamoebas'. The phase tropical hypersurface \(\hat{\Gamma}\) captures the leading asymptote of \(Y_{t}\) for \(t\ll 1\), and as a topological manifold \(\hat{\Gamma}\) is homeomorphic to \(Y_{t}\). Here the _coamoeba_ is just a subset of \(T^{n}\) consisting of two copies of an \(n\)-dimensional open simplex glued at the \((n+1)\) vertices.
Let \[\Delta_{n}=\{0\leq x_{i}\leq\pi,\forall i=1,\ldots n,\quad\sum_{1}^{n}x_{i}\leq\pi\},\quad-\Delta_{n}=\{x\in T^{n}|-x\in\Delta_{n}\}, \tag{1}\] then the standard coamoeba is \[C_{std}=\operatorname{Int}(\Delta_{n}\cup-\Delta_{n})\cup\{\text{vertices of }\Delta_{n}\}. \tag{2}\] Notice that \(\Delta_{n},-\Delta_{n}\) share the vertices \((0,\ldots 0),(\pi,0,\ldots 0),\ldots(0,\ldots 0,\pi)\in T^{n}\). The boundary of a coamoeba consists of lower dimensional coamoebas. Matessi's goal is to imitate this picture in the Lagrangian setting (_i.e._ the A-side). Given a tropical hypersurface \(\Gamma\) in \(\mathbb{R}^{n}\), one can construct a Lagrangian piecewise-linear (PL) lift \(\hat{\Gamma}\subset(T^{*}T^{n}=(\mathbb{R}/2\pi\mathbb{Z})^{n}\times\mathbb{R}^{n},\omega=\sum dx_{i}\wedge dy_{i})\) roughly as follows. The tropical hypersurface \(\Gamma\) is a union of \((n-1)\)-dimensional rational polyhedra glued along faces, with a balancing condition at the intersecting faces. The PL lift \(\hat{\Gamma}\) is a piecewise linear object inside \(T^{*}T^{n}\) projecting to \(\Gamma\). Given a point \(y\) inside the interior of an \((n-1)\)-dimensional face of \(\Gamma\), the fibre over \(y\) in \(\hat{\Gamma}\) is the \(S^{1}\subset T^{n}\) spanned by the \(\omega\)-orthogonal complement direction to the \((n-1)\)-dimensional face. More generally, if \(y\) is a point in the interior of the \(k\)-dimensional face \(\Gamma_{J}\) of \(\Gamma\), then the fibre is the closure \(E_{J}\) of an \((n-k)\)-dimensional coamoeba, which lives inside the subtorus \(T^{n-k}\subset T^{n}\) spanned by the \(\omega\)-orthogonal complement of the face. The PL lift is then \[\hat{\Gamma}=\cup_{J}\Gamma_{J}\times E_{J}.\] The \(\omega\)-orthogonal complement property guarantees that it is a singular Lagrangian. The correspondence between the faces of the tropical hypersurface and the coamoebas is inclusion-reversing, so that the boundary components of \(\Gamma_{J}\times E_{J}\) cancel out. The main problem is that \(\hat{\Gamma}\) is singular, and one would like to find smooth Lagrangians which approximate \(\hat{\Gamma}\). Matessi [6] constructs the following local model, called the _Lagrangian pair of pants_. Define the following explicit function on the interior of the coamoeba (2) (Matessi used slightly different conventions for the dimension and the periodicity of the lattice): \[\begin{cases}F(x)=\big{(}\sin(\frac{x_{1}}{2})\ldots\sin(\frac{x_{n}}{2})\sin(\frac{\pi-\sum_{1}^{n}x_{i}}{2})\big{)}^{1/n},&\text{on Int}(\Delta_{n}),\\ F(-x)=-F(x),&\text{on Int}(-\Delta_{n}),\end{cases}\] and consider the graph of \(dF\), which is a Lagrangian inside \(T^{*}T^{n}\). As \(x\) tends to the boundary of the coamoeba, but not to one of the \((n+1)\) vertices, then \(|dF|\to\infty\). This divergence has the important geometric meaning that the boundary of the coamoeba (except for the vertices) does not contribute to the finite distance boundary of the Lagrangian graph. To capture the limit as \(x\) tends to the vertices, one performs the real blow up \(\tilde{C}_{std}\) of the coamoeba \(C_{std}\) at these vertices (_cf._ section 3.4 below), and shows that \(dF\) extends smoothly to \(\tilde{C}_{std}\). This gives a 1-parameter family of smooth Lagrangian embeddings of \(\tilde{C}_{std}\) into \(T^{*}T^{n}\) via the map \((x,\lambda dF(x))\) [6, Lem 3.5]. Matessi calls the images \(L_{\lambda}\) of this embedding the 'rescaled \(n\)-dimensional Lagrangian pair of pants'. In the case \(n=2,3\), Matessi [6, Prop.
3.27] shows that the Lagrangian pair of pants is homeomorphic to the PL lift of the standard tropical hypersurface \[\Gamma_{std}=\text{non-smooth locus of }\max(0,y_{1},\ldots y_{n})\subset\mathbb{R}_{y}^{n},\] and by analysing the asymptotic behaviour of \(L_{\lambda}\), he shows that \(L_{\lambda}\) converges as \(\lambda\to 0\) in the Hausdorff topology to the PL lift \(\hat{\Gamma}_{std}\) of \(\Gamma_{std}\) [6, Cor. 3.9]. He then proceeds to use these local models to construct Lagrangian smoothings of PL lifts of more general smooth tropical hypersurfaces, in the case of dimension \(n=2,3\), with some extensions to other toric manifolds [6, Thm 1.1]. The same results are conjectured to hold in all higher dimensions, but the asymptotic behaviour near the boundary of the coamoeba will be combinatorially more complex. **Remark 1.1**.: Matessi pointed out to the author that his subsequent paper [7, section 9.4] has speculated on the construction of special Lagrangian pair of pants, through an inductive solution of Dirichlet problems. The interplay between Lagrangian submanifolds and tropical geometry is a popular research topic, _cf._ [7, Introduction] for many other works. ### Special Lagrangian pair of pants We now give a summary of the main results. The ambient space is \(T^{*}T^{n}\), with the Euclidean structure \[\begin{cases}\omega=\sum dx_{i}\wedge dy_{i},\\ \Omega=\bigwedge_{i}(dx_{i}+\sqrt{-1}g_{ij}dy_{j}),\end{cases}\] where \(x_{i}\in\mathbb{R}/2\pi\mathbb{Z}\) are the angle coordinates, and \(y_{i}\in\mathbb{R}\) are the moment maps, and \((g_{ij})\) is an \(n\times n\) positive definite symmetric matrix. **Theorem 1.2**.: _Let \(n\geq 2\). There exists a special Lagrangian \(\mathbb{L}_{n}\) of phase \(\hat{\theta}=\frac{n-1}{2}\pi\) inside \(T^{*}T^{n}\), such that_ 1. _(Smoothness)_ \(\mathbb{L}_{n}\) _is an embedded submanifold of_ \(T^{*}T^{n}\) _without boundary (_cf._ _Prop._ 3.1_,_ 3.4_)._ 2. _(Coamoeba) Under the projection_ \(T^{*}T^{n}\to T^{n}\)_, the image of_ \(\mathbb{L}_{n}\) _is the standard coamoeba_ \(C_{std}\) _(_cf._ _Prop._ 3.3_)._ 3. _(Fibres) Over_ \(\text{Int}(\Delta_{n}\cup-\Delta_{n})\)_, the special Lagrangian_ \(\mathbb{L}_{n}\) _can be written as the graph of the gradient of some potential function_ \(u_{n}\)_. Over each of the_ \((n+1)\)_-vertices, the preimage is an_ \((n-1)\)_-dimensional real analytic hypersurface in the cotangent fibre, which can be realised as the boundary of a convex set in_ \(\mathbb{R}^{n}\) _(_cf._ _Prop._ 3.3_, Cor._ 21_)._ 4. _(Topology) The projection_ \(\mathbb{L}_{n}\to C_{std}\) _lifts to a real analytic map to the real blow up_ \(\mathbb{L}_{n}\to\tilde{C}_{std}\)_, which is a homeomorphism (_cf._ _Prop._ 3.12_)._ 5. _(Tropical hypersurface) Under the projection_ \(T^{*}T^{n}\to\mathbb{R}^{n}\)_, the image of_ \(\mathbb{L}_{n}\) _lies within bounded distance to the standard tropical hypersurface_ \(\Gamma_{std}\subset\mathbb{R}^{n}_{y}\) _(_cf._ _Cor._ 2.26_)._ Moreover the construction has an inductive pattern. The \(n=1\) case is just the zero section \(S^{1}\subset T^{*}S^{1}\). The \(n=2\) case is the pair of pants known from the hyperkahler rotation trick. **Theorem 1.3**.: _(_cf. _section 3.3_) Let \(n\geq 2\). The special Lagrangian submanifold \(\mathbb{L}_{n}\) satisfies that_ 1.
_(Regularity) Around any point on_ \(\mathbb{L}_{n}\)_, there is an ambient ball of radius bounded below by a constant depending only on_ \(n,g\)_, such that within the ball_ \(\mathbb{L}_{n}\) _is graphical over an_ \(n\)_-plane, with_ \(C^{k,\alpha}\) _norm bounded by_ \(C(n,g,k,\alpha)\)_. In particular, the_ \(C^{k,\alpha}\)_-topology is well defined._ 2. _(Inductive asymptote) In the region where_ \(y_{n}\) _is sufficiently negative depending on_ \(n,g\)_, the special Lagrangian_ \(\mathbb{L}_{n}\) _is the graph of a_ \(C^{k,\alpha}\)_-small normal vector field over_ \(\mathbb{L}_{n-1}\times\mathbb{R}_{y_{n}}\)_, and the local_ \(C^{k,\alpha}\)_-norm satisfies the exponential decay bound_ \(O\big{(}e^{y_{n}/C(n,g)}\big{)}\) _as_ \(y_{n}\) _tends to_ \(-\infty\)_. Similar asymptotes hold when one of_ \(y_{1},\ldots y_{n-1},-\sum_{1}^{n}y_{i}\) _is sufficiently negative. These regions cover the complement of a compact subset in_ \(T^{*}T^{n}\)_._ The basic strategy is to inductively solve a sequence of Dirichlet problems depending on \(n\), for the special Lagrangian graph equation over the \(n\)-dimensional simplex \(\Delta_{n}\). In each step, the lower dimensional solutions are utilised to prescribe the boundary data of the \(n\)-dimensional problem. The main work is to prove the existence and the properties of these PDE solutions \(u_{n}\), and interpret the results geometrically. From the PDE perspective the principal novelty is the behaviour of the solution near the domain boundary. While the regularity theory for the convex solutions to the special Lagrangian graph equation is well established, the known global regularity results almost inevitably rely on some _strict (pseudo)convexity_ assumption on the boundary of the domain, so that one can build some barrier function out of the boundary defining function, to achieve boundary Lipschitz estimates and higher order derivative estimates [2][4]. In contrast, the domain in our setting is the simplex \(\Delta_{n}\), where the strict boundary convexity fails, so both the existence and the properties of solutions require new types of barrier constructions. In the transition from PDE to geometry, an important step is to show that _the gradient \(du_{n}(x)\) diverges to infinity_ as \(x\) tends to any point of \(\partial\Delta_{n}\) except for the \((n+1)\) vertices (_cf._ Cor. 2.18). This behaviour is opposite to what happens in the standard theory for smooth and strictly convex domains, and the proof involves a delicate barrier argument. To obtain the special Lagrangian \(\mathbb{L}_{n}\), we take a reflected copy of the solution over \(-\Delta_{n}\), and take the _closure_ of the special Lagrangian graph over \(\operatorname{Int}(\Delta_{n}\cup(-\Delta_{n}))\). The gradient divergence then has the geometric significance that \(\partial\Delta_{n}\) (except for the vertices) has no contribution to the boundary of the special Lagrangian as an integral current. However, the \((n+1)\) vertices shared by \(\Delta_{n}\) and \(-\Delta_{n}\) bring in extra points in the step of taking the closure, and the happy fact is that their contributions to the current boundary \(\partial\mathbb{L}_{n}\) _cancel_ exactly by an involution symmetry. The asymptotic information of the special Lagrangian corresponds to the boundary behaviour of \(u_{n}\) in the PDE perspective.
Our boundary behaviour is complicated from the original PDE viewpoint: for instance, an infinite amount of volume of the special Lagrangian is concentrated near \(\partial\Delta_{n}\). This complexity is partly caused by describing the special Lagrangian using the domain coordinates on \(\Delta_{n}\), and the geometric perspective affords the flexibility to use alternative coordinate systems. Away from the vertices, each boundary face is associated with a new coordinate chart mixing the position and momentum variables, which analytically corresponds to taking _partial Legendre transform_, with the effect that the special Lagrangian \(\mathbb{L}_{n}\) is a small graph over the new coordinates. The behaviour near the vertices is more delicate, and to prove the _smoothness_ of \(\mathbb{L}_{n}\) and the _asymptotic exponential decay_ estimates, we make use of minimal surface theory. **Notation.** In this paper \(C\geq 1\) will stand for constants which depend only on the dimension and the ambient metric on \(T^{*}T^{n}\), and do not depend on the other auxiliary parameters, unless explicitly indicated otherwise. The notation \(a\lesssim b\) means \(a\leq Cb\) for some estimable constant \(C\). Since this paper will involve many auxiliary barrier constructions depending on multiple parameters, the notations involved in each barrier will be localised to the subsection, and then recycled in later subsections. The \(C^{k,\alpha}\)-norms will always refer to bounded balls inside local charts around a given point, rather than taking the global supremum over the special Lagrangian. **Acknowledgement.** The author is a current Clay Maths Research Fellow, based at MIT. He thanks D. Matessi for comments, and Y-S. Lin, S. Esfahani, and I. Zharkov for discussions on related topics. ## 2 Dirichlet problem ### Special Lagrangian graph equation We look for special Lagrangians inside the cotangent bundle of \(T^{n}\) with the following Euclidean structure: \[\begin{cases}\omega=\sum dx_{i}\wedge dy_{i},\\ \Omega=\bigwedge_{i}(dx_{i}+\sqrt{-1}g_{ij}dy_{j}),\end{cases}\] where \(x_{i}\in\mathbb{R}/2\pi\mathbb{Z}\) are the angle coordinates, and \(y_{i}\in\mathbb{R}\) are the moment maps, and \((g_{ij})\) is an \(n\times n\) positive definite symmetric matrix. In particular, the \(T^{n}\) fibres are special Lagrangians of phase zero. We consider special Lagrangians which are graphical, namely \(y_{i}\) can be expressed as the gradient of a potential \[y_{i}=\frac{\partial u}{\partial x_{i}}.\] The special Lagrangian condition with phase \(\hat{\theta}\) means that \(\mathrm{Im}(e^{-\sqrt{-1}\hat{\theta}}\Omega)=0\), and translates into the second order equation \[\mathrm{Im}(e^{-\sqrt{-1}\hat{\theta}}\det\left(I+\sqrt{-1}g_{ik}\frac{\partial^{2}u}{\partial x_{k}\partial x_{j}}\right))=0. \tag{3}\] The case where \(g_{ij}=\delta_{ij}\) is the familiar _special Lagrangian graph equation_ \[\mathrm{Im}(e^{-\sqrt{-1}\hat{\theta}}\det\left(I+\sqrt{-1}D^{2}u\right))=0.\] In fact we can reduce the equation to the standard form by a linear change of coordinates. To see this, let \((a_{ij})\in GL_{+}(n,\mathbb{R})\) with inverse matrix \((a^{ij})\), and \[x_{i}=a_{ij}\tilde{x}_{j},\quad y_{i}=a^{ji}\tilde{y}_{j},\] so that \(\omega=\sum d\tilde{x}_{i}\wedge d\tilde{y}_{i}\), and \(\tilde{y}_{i}=\frac{\partial u}{\partial\tilde{x}_{i}}\).
We find \[\Omega=\det(a)\bigwedge_{i}(d\tilde{x}_{i}+\sqrt{-1}\tilde{g}_{ij}d\tilde{y}_{j}),\quad\tilde{g}_{lk}=a^{li}g_{ij}a^{kj}.\] We can find the matrix \((a_{ij})\) so that \(\tilde{g}_{ij}=\delta_{ij}\), since all positive definite quadratic forms are congruent to the standard form. The special Lagrangian graph equation in the \(\tilde{x}_{i}\) coordinates is then in the standard form. It is well known that the special Lagrangian consists of phase branches. In the coordinate independent language, the Hessian of \(u\) is a symmetric quadratic form, and we can define an operator \(A\) by \[D^{2}u(v_{1},v_{2})=g(Av_{1},v_{2}).\] Since \(A\) is self-adjoint with respect to the inner product \(g=g^{ij}dx_{i}dx_{j}\), its eigenvalues are real: \[\lambda_{1}\leq\lambda_{2}\leq\ldots\leq\lambda_{n},\] and the special Lagrangian graph equation can be written as \[\arctan D^{2}u:=\sum_{1}^{n}\arctan\lambda_{i}=\hat{\theta}\mod\pi\mathbb{Z}.\] The analytic behaviour of the equation \[\arctan D^{2}u=\hat{\theta} \tag{4}\] depends strongly on the range of phase angle \(\hat{\theta}\). The range of \(\arctan\) forces \(\hat{\theta}\in(-\frac{n\pi}{2},\frac{n\pi}{2})\), and up to replacing \(u\) by \(-u\) we only need to consider \(\hat{\theta}\geq 0\). A very incomplete list of notable results includes: * In the range \(\hat{\theta}\in[\frac{n-1}{2}\pi,\frac{n}{2}\pi)\), so that \(u\) is convex, Caffarelli-Nirenberg-Spruck [2] proved the existence and uniqueness of the solution for smooth boundary data, over smooth and strictly convex bounded domains in \(\mathbb{R}^{n}\). * In the 'supercritical' range \(\hat{\theta}\in[\frac{n-2}{2}\pi,\frac{n}{2}\pi)\), Wang-Yuan [14] proved the a priori interior Hessian estimate \[\max_{B_{R}}|D^{2}u|\leq C(n,\mathrm{osc}_{B_{2R}}(\frac{u}{R})),\] which implies the interior real analyticity of viscosity solutions. The convex case was previously settled in [13]. * For all phases \(\hat{\theta}\in(-\frac{n}{2}\pi,\frac{n}{2}\pi)\), Harvey-Lawson [4] proved the existence and uniqueness of the \(C^{0}\) viscosity solution for any continuous boundary data, over smooth bounded domains whose boundary is pseudoconvex in an appropriate sense. * When \(|\hat{\theta}|<\frac{n-2}{2}\pi\) the regularity property of the viscosity solution can be very bad. A very recent sample result by Mooney-Savin [11] constructed Lipschitz viscosity solutions in 3D which fail to be \(C^{1}\). We will be interested in solving the equation over the simplex, which is neither smooth nor _strictly_ convex, so some care is needed in applying the known existence results. ### The dimension two case We now consider the special case \(n=2\), with the special choice of phase angle \(\hat{\theta}=\frac{\pi}{2}\). We shall use the hyperkahler rotation trick to find the special solution. Notice that \[\sqrt{g_{11}g_{22}-g_{12}^{2}}\omega,\mathrm{Re}\Omega,\mathrm{Im}\Omega\] form a hyperkahler triple. The special Lagrangian condition is equivalent to being holomorphic with respect to the complex structure defined by the (2,0)-form \(\sqrt{g_{11}g_{22}-g_{12}^{2}}\omega+\sqrt{-1}\mathrm{Re}\Omega\). We find holomorphic coordinates with respect to this new complex structure \[\begin{cases}z_{1}=\exp(\sqrt{g_{11}g_{22}-g_{12}^{2}}y_{2}-\sqrt{-1}x_{1}),\\ z_{2}=\exp(\sqrt{g_{11}g_{22}-g_{12}^{2}}y_{1}+\sqrt{-1}x_{2}),\end{cases}\] so that \(T^{*}T^{2}\) can be identified with \((\mathbb{C}^{*})^{2}_{z_{1},z_{2}}\).
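As a quick supplementary check (an added verification, not part of the original text): writing \(D=\sqrt{g_{11}g_{22}-g_{12}^{2}}\), so that \(\log z_{1}=Dy_{2}-\sqrt{-1}x_{1}\) and \(\log z_{2}=Dy_{1}+\sqrt{-1}x_{2}\), one computes \[d\log z_{1}\wedge d\log z_{2}=(Ddy_{2}-\sqrt{-1}dx_{1})\wedge(Ddy_{1}+\sqrt{-1}dx_{2})=\mathrm{Re}\Omega-\sqrt{-1}D\omega,\] using \(\mathrm{Re}\Omega=dx_{1}\wedge dx_{2}-D^{2}dy_{1}\wedge dy_{2}\). Hence \[D\omega+\sqrt{-1}\mathrm{Re}\Omega=\sqrt{-1}\,d\log z_{1}\wedge d\log z_{2}=\sqrt{-1}\,\frac{dz_{1}\wedge dz_{2}}{z_{1}z_{2}},\] which is a nowhere vanishing holomorphic 2-form in \(z_{1},z_{2}\), confirming that \(z_{1},z_{2}\) are holomorphic coordinates for the rotated complex structure.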
The standard pair of pants surface is \[\{z_{1}+z_{2}=1\}\subset(\mathbb{C}^{*})^{2}.\] Solving \(y_{1},y_{2}\) in terms of \(x_{1},x_{2}\) from the equation \[\begin{cases}\exp(\sqrt{g_{11}g_{22}-g_{12}^{2}}y_{2})e^{\sqrt{-1}x_{1}}+\exp(\sqrt{g_{11}g_{22}-g_{12}^{2}}y_{1})e^{-\sqrt{-1}x_{2}}=1,\\ \exp(\sqrt{g_{11}g_{22}-g_{12}^{2}}y_{2})e^{-\sqrt{-1}x_{1}}+\exp(\sqrt{g_{11}g_{22}-g_{12}^{2}}y_{1})e^{\sqrt{-1}x_{2}}=1,\end{cases}\] we obtain \[y_{1}=\frac{1}{\sqrt{g_{11}g_{22}-g_{12}^{2}}}\log\frac{\sin x_{1}}{\sin(x_{1}+x_{2})},\quad y_{2}=\frac{1}{\sqrt{g_{11}g_{22}-g_{12}^{2}}}\log\frac{\sin x_{2}}{\sin(x_{1}+x_{2})}.\] Notice the \(x\)-domain is contained in the closure of the coamoeba \(\Delta_{2}\cup-\Delta_{2}\subset T^{2}\), \[\Delta_{2}=\{0\leq x_{1}\leq\pi,0\leq x_{2}\leq\pi,0\leq x_{1}+x_{2}\leq\pi\}.\] We can recover the potential \(u\) from \(du=y_{1}dx_{1}+y_{2}dx_{2}\). We introduce the special function \[L(x)=\int_{0}^{x}\log\sin t\,dt,\quad 0\leq x\leq\pi. \tag{5}\] Then the solution \(u\) over \(\Delta_{2}\) is \[u_{2}=\frac{1}{\sqrt{g_{11}g_{22}-g_{12}^{2}}}(L(x_{1})+L(x_{2})+L(\pi-x_{1}-x_{2})-L(\pi)), \tag{6}\] and the solution over \(-\Delta_{2}\) is given by \(u_{2}(-x)=-u_{2}(x)\). The three edges of the triangle \(\Delta_{2}\) appear on symmetric footing as expected. The notation \(u_{2}\) is meant to indicate the 2D problem.

**Remark 2.1**.: The pair of pants surfaces come in families \(\{az_{1}+bz_{2}=1\}\) for \(a,b\in\mathbb{C}^{*}\). The effect of \(a,b\) is to translate the surface inside \(T^{*}T^{2}\). In particular, if \(a,b\in U(1)\), the \(x\)-domain is translated inside \(T^{2}\), and if \(a,b>0\), then the \(y_{i}\) are translated in \(\mathbb{R}^{2}\), corresponding to the more general solutions over \(\Delta_{2}\), \[u=u_{2}+a_{0}+a_{1}x_{1}+a_{2}x_{2},\] for real constants \(a_{0},a_{1},a_{2}\).

**Remark 2.2**.: If we start with other choices of phase angle \(\hat{\theta}\in[0,\pi)\), the solution will no longer be graphical over the \(x_{i}\) variables. Instead \(x_{i}\) and \(y_{i}\) will be mixed up, and we do not see an obvious path for higher dimensional generalisations.

We now seek a better analytic understanding of the special solution (6), which will guide our higher dimensional generalisation. An easy calculus exercise gives

**Lemma 2.3**.: \[L^{\prime}(x)=\log\sin x,\quad L^{\prime\prime}(x)=\cot x,\quad L(x)+L(\pi-x)=L(\pi).\] _In particular \(L\) is a negative, decreasing function, is convex for \(0\leq x\leq\frac{\pi}{2}\), and \(L(x)=x\log x+O(x)\) for \(x\to 0\)._

**Lemma 2.4**.: _The function \(\sqrt{g_{11}g_{22}-g_{12}^{2}}u_{2}\) can be rewritten as \(L(x_{1})+L(x_{2})-L(x_{1}+x_{2})\), and its Hessian matrix is explicitly calculated as_ \[\begin{pmatrix}\cot x_{1}-\cot(x_{1}+x_{2})&-\cot(x_{1}+x_{2})\\ -\cot(x_{1}+x_{2})&\cot x_{2}-\cot(x_{1}+x_{2})\end{pmatrix}.\] _This matrix is positive definite, and has determinant one. (Hint: use the trigonometric formula \(\tan(x+y)=\frac{\tan(x)+\tan(y)}{1-\tan x\tan y}\).)_

**Remark 2.5**.: Using the formula \(\cot x=x^{-1}-\frac{x}{3}+O(x^{3})\), we see that for \((x_{1},x_{2})\) close to the origin, the Hessian matrix is to leading order \[\frac{1}{x_{1}+x_{2}}\big{(}\frac{x_{2}}{x_{1}}dx_{1}^{2}-2dx_{1}dx_{2}+\frac{x_{1}}{x_{2}}dx_{2}^{2}\big{)}+O(x_{1}+x_{2}).\] It has a large eigenvalue comparable to \(\min(x_{1},x_{2})^{-1}\), and a small eigenvalue comparable to \(\min(x_{1},x_{2})\), with product one. 
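Carrying out the hint in Lemma 2.4: writing \(s=x_{1}+x_{2}\) and using the cotangent addition formula \(\cot s=\frac{\cot x_{1}\cot x_{2}-1}{\cot x_{1}+\cot x_{2}}\), the determinant is \[(\cot x_{1}-\cot s)(\cot x_{2}-\cot s)-\cot^{2}s=\cot x_{1}\cot x_{2}-(\cot x_{1}+\cot x_{2})\cot s=1.\] Since \(\cot\) is strictly decreasing on \((0,\pi)\), the diagonal entries are positive on \(\operatorname{Int}(\Delta_{2})\), which together with determinant one gives positive definiteness. The eigenvalues then satisfy \(\lambda_{1}\lambda_{2}=1\) with \(\lambda_{i}>0\), so \(\arctan\lambda_{1}+\arctan\lambda_{2}=\frac{\pi}{2}\), consistent with the phase \(\hat{\theta}=\frac{\pi}{2}\).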
**Corollary 2.6**.: The solution \(u_{2}\) has boundary data zero on \(\partial\Delta_{2}\), is convex on \(\Delta_{2}\), and is a smooth negative valued function in the interior. Near the edge \(x_{1}=0\) but away from the vertices of \(\Delta_{2}\), the boundary behaviour is \[\begin{cases}u_{2}=\frac{1}{\sqrt{g_{11}g_{22}-g_{12}^{2}}}x_{1}\log x_{1}+O(x_{1}),\\ \frac{\partial u_{2}}{\partial x_{1}}=\frac{1}{\sqrt{g_{11}g_{22}-g_{12}^{2}}}\log x_{1}+O(1),\\ \frac{\partial^{2}u_{2}}{\partial x_{1}^{2}}=\frac{1}{\sqrt{g_{11}g_{22}-g_{12}^{2}}}\frac{1}{x_{1}}+O(1),\end{cases}\] and similarly for the other two edges. Near the vertices, the gradient of \(u_{2}\) converges to finite values along the rays \(\frac{x_{1}}{x_{2}}=const>0\), but the largest eigenvalue of the Hessian of \(u_{2}\) tends to \(+\infty\).

**Remark 2.7**.: The zero boundary data is related to the observation that \(u_{1}=0\) solves the 1D version of the special Lagrangian graph equation. The fact that the Hessian becomes degenerate along the edge can be expected as follows: the zero boundary data on the edge suggests that the smallest eigenvalue of the Hessian should tend to zero on the boundary, and the sum of arctan of the eigenvalues is \(\frac{\pi}{2}\), so the arctan of the largest eigenvalue should tend to \(\frac{\pi}{2}\).

### Inductive boundary value prescription

We now propose an inductive strategy to produce special Lagrangian graphs in general dimensions. The main guiding principle is that _the \(n\)-dimensional solution should supply the boundary value for the \((n+1)\)-dimensional Dirichlet problem_. The domain in the dimension \(n\) case is \[\Delta_{n}=\{0\leq x_{i}\leq\pi,i=1,2,\ldots,n,\text{ and }0\leq x_{1}+x_{2}+\ldots+x_{n}\leq\pi\}.\]

For concreteness, we will first describe the Dirichlet problem in 3D. The domain is the 3-dimensional simplex \(\Delta_{3}\). The boundary \(\partial\Delta_{3}\) consists of 4 triangles. On all the 6 edges of the 4 triangles, we put the boundary data \(u_{1}=0\). On each of the \(4\) triangle faces, we use the solution to the 2D version of the problem to prescribe the boundary data for \(u\): \[\begin{cases}u(x_{1},x_{2},0)=\frac{1}{\sqrt{g_{11}g_{22}-g_{12}^{2}}}(L(x_{1})+L(x_{2})+L(\pi-x_{1}-x_{2})-L(\pi)),\\ u(x_{1},0,x_{3})=\frac{1}{\sqrt{g_{11}g_{33}-g_{13}^{2}}}(L(x_{1})+L(x_{3})+L(\pi-x_{1}-x_{3})-L(\pi)),\\ u(0,x_{2},x_{3})=\frac{1}{\sqrt{g_{22}g_{33}-g_{23}^{2}}}(L(x_{2})+L(x_{3})+L(\pi-x_{2}-x_{3})-L(\pi)),\end{cases}\] and similarly for the face \(\{x_{1}+x_{2}+x_{3}=\pi\}\). By construction, the boundary data match continuously on the intersection of faces. The Dirichlet problem in 3D is to find \(u=u_{3}\) with the above boundary data on \(\partial\Delta_{3}\), solving the special Lagrangian graph equation (3), (4) in the phase branch \(\hat{\theta}=\frac{(3-1)}{2}\pi\), such that \(u_{3}\) is smooth in the interior of \(\Delta_{3}\), and continuous up to the boundary.

More generally, assuming the \((n-1)\)-dim problem has a solution, then the \(n\)-dim problem has well defined boundary data, which is \(C^{0}\) on \(\partial\Delta_{n}\) by construction, and we seek a solution \(u_{n}\) to (3), (4) with \(\hat{\theta}=\frac{(n-1)}{2}\pi\), such that \(u_{n}\) is smooth in the interior of \(\Delta_{n}\), and continuous up to the boundary.

**Remark 2.8**.: We emphasize that the phase branch \(\hat{\theta}=\frac{(n-1)}{2}\pi\) automatically implies the convexity of the solution. This easily implies by induction that the Dirichlet solution is nonpositive. 
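To spell out the convexity claim in Remark 2.8: on the phase branch \(\hat{\theta}=\frac{(n-1)}{2}\pi\), each eigenvalue satisfies \(\arctan\lambda_{i}<\frac{\pi}{2}\), so for every \(j\), \[\arctan\lambda_{j}=\frac{n-1}{2}\pi-\sum_{i\neq j}\arctan\lambda_{i}>\frac{n-1}{2}\pi-(n-1)\cdot\frac{\pi}{2}=0,\] hence all eigenvalues of \(D^{2}u\) are positive wherever the solution is smooth.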
### Existence theorem and barrier functions

We shall prove

**Theorem 2.9**.: _(Existence) Let \(n\geq 1\). There is a unique solution \(u_{n}\) to the Dirichlet problem in dimension \(n\), which is smooth in the interior of \(\Delta_{n}\), and continuous up to the boundary of \(\Delta_{n}\). More precisely, we have the following boundary modulus of continuity estimate:_ \[|u_{n}(x_{1},\dots x_{n})-u_{n-1}(x_{1},\dots x_{n-1})|\leq-C(n)x_{n}\log\frac{x_{n}}{2\pi},\quad\forall x\in\Delta_{n}, \tag{7}\] _for some constant \(C(n)\) depending only on the dimension \(n\) and \(g_{ij}\). Similarly with the other boundary components._

**Remark 2.10**.: Notice that \(0\leq x_{n}\leq\pi\), so the log term on the right is negative. For \(x_{n}\) bounded from below, the estimate (7) amounts to a lower bound on \(u_{n}\). The main force of (7) is when \(x_{n}\) is small. By induction (7) implies \[|u_{n}(x_{1},\dots x_{n})-u_{m}(x_{1},\dots x_{m})|\leq-C(n)\sum_{m+1}^{n}x_{i}\log\frac{x_{i}}{2\pi},\quad 1\leq m\leq n-1. \tag{8}\] We may call this \(O(x\log x)\) boundary continuity near the face \(\Delta_{m}\subset\partial\Delta_{n}\). In the statement we focus on the \(\{x_{n}=0\}\) boundary, but of course the same kind of boundary modulus of continuity applies to any other boundary faces.

The cases \(n=1,2\) have been checked explicitly, so we will focus on \(n\geq 3\), and assume by induction that the cases \(\leq n-1\) are known. The key is to build barrier functions. We begin with the upper barrier construction.

**Lemma 2.11**.: _The function_ \[\bar{u}_{n}=(1-\frac{x_{n}}{\pi})u_{n-1}\big{(}\frac{x_{1}}{1-\frac{x_{n}}{\pi}},\ldots\frac{x_{n-1}}{1-\frac{x_{n}}{\pi}}\big{)},\] _is a supersolution to the \(n\)-dim Dirichlet problem._

Proof.: For our choice of phase, the solutions to the \(m\)-dimensional problems are automatically convex for \(m=1,2,\ldots n-1\). These lower dimensional solutions prescribe the boundary data of the \(n\)-dim problem. By construction \(\bar{u}_{n}\) is linear on the line segments joining \((0,\ldots 0,\pi)\) and the points of \(\Delta_{n-1}\subset\partial\Delta_{n}\), and agrees with the boundary data \(u_{n}|_{\partial\Delta_{n}}\) at the endpoints of these line segments. Hence \(\bar{u}_{n}\geq u_{n}|_{\partial\Delta_{n}}\) on \(\partial\Delta_{n}\). The induction hypothesis implies \(\bar{u}_{n}\) is smooth in the interior of \(\Delta_{n}\) and continuous up to the boundary. Since \(\bar{u}_{n}\) is linear when restricted to the line segments connecting \((0,\ldots 0,\pi)\) to \((x_{1},\ldots x_{n-1},0)\), its smallest Hessian eigenvalue cannot be positive. Thus \[\arctan D^{2}\bar{u}_{n}\leq\frac{n-1}{2}\pi\] on the interior of \(\Delta_{n}\), namely \(\bar{u}_{n}\) is a _supersolution_ to the Dirichlet problem. 

Our next goal is to build lower barrier functions. We shall first describe the simpler case, when the dihedral angle between any two codimension one faces of \(\Delta_{n}\) is acute. The barrier function in this 'acute simplex' case is \[\underline{u}_{n}=u_{n-1}(\tilde{x}_{1},\ldots\tilde{x}_{n-1})+\sum_{i=1}^{n-1}(L(x_{n})+L(x_{i})-L(x_{i}+x_{n}))+Kx_{n}\log\frac{x_{n}}{2\pi}.\] Here \(\tilde{x}_{i}\) are linear functions of the form \(x_{i}+a_{i}x_{n}\) for some \(a_{i}\in\mathbb{R}\), such that \(g(d\tilde{x}_{i},dx_{n})=0\). The advantage of the 'acute simplex' assumption is that \((\tilde{x}_{i})\in\Delta_{n-1}\), so that \(u_{n-1}(\tilde{x}_{i})\) makes sense. 
The parameter \(K\gg 1\) is a large positive constant depending on \(n\) to be determined. Notice that \(\underline{u}_{n}\) agrees with \(u_{n-1}\) on the \(x_{n}=0\) boundary face.

**Lemma 2.12**.: _The function \(\underline{u}_{n}\) is a subsolution to the \(n\)-dim Dirichlet problem._

Proof.: For \(K\gg C(n-1)\), the negativity of the term \(Kx_{n}\log\frac{x_{n}}{2\pi}\) for \(x_{n}>0\) would overwhelm the boundary modulus of continuity in the lower dimensional problems, hence on \(\partial\Delta_{n}\), we have \(\underline{u}_{n}\leq u_{n}|_{\partial\Delta_{n}}\).

We now estimate the Hessian eigenvalues of \(\underline{u}_{n}\). Notice that the Hessian of \(L(x_{n})+L(x_{i})-L(x_{i}+x_{n})\) (computed in Lem 2.4) is bounded below by \(C^{-1}x_{n}dx_{i}^{2}-\frac{C}{x_{n}}dx_{n}^{2}\). Thus after summation, \[D^{2}\underline{u}_{n}\geq D^{2}u_{n-1}+C^{-1}x_{n}\sum_{1}^{n-1}dx_{i}^{2}+(K-C)x_{n}^{-1}dx_{n}^{2},\] where we freely change \(C\) from line to line. Let \(e_{1},\ldots e_{n-1}\) be the orthonormal eigenbasis of \(D^{2}u_{n-1}\) on \(\mathbb{R}^{n-1}\), with respect to the metric \(g=g^{ij}dx_{i}dx_{j}\), and denote the eigenvalues as \(\lambda^{\prime}_{1}\leq\ldots\leq\lambda^{\prime}_{n-1}\). By induction \(\sum_{1}^{n-1}\arctan\lambda^{\prime}_{i}=\frac{n-2}{2}\pi\), so \(0\leq\lambda^{\prime}_{1}\leq\cot\frac{\pi}{2n}\ll K\). We require \(K\gg C\). Then \[C^{-1}x_{n}\sum_{1}^{n-1}dx_{i}^{2}+(K-C)x_{n}^{-1}dx_{n}^{2}\geq\frac{1}{2C}x_{n}\sum_{1}^{n-1}d\tilde{x}_{i}^{2}+\frac{K}{2}x_{n}^{-1}dx_{n}^{2},\] so we get a lower bound (with some changed constant \(C\)), \[D^{2}\underline{u}_{n}\geq\sum_{1}^{n-1}(\lambda^{\prime}_{i}+C^{-1}x_{n})e_{i}^{*}\otimes e_{i}^{*}+\frac{K}{2x_{n}}dx_{n}^{2}.\] The main point of using \(\tilde{x}_{i}\) in favour of \(x_{i}\) is that \(e_{i}^{*},dx_{n}\) are all \(g\)-orthogonal, so we can easily estimate the matrix arctan of the RHS: \[\begin{split}\arctan D^{2}\underline{u}_{n}&\geq\sum_{1}^{n-1}\arctan(\lambda^{\prime}_{i}+C^{-1}x_{n})+\arctan\frac{K}{Cx_{n}}\\ &\geq\sum_{1}^{n-1}\arctan\lambda^{\prime}_{i}+C^{-1}x_{n}+(\frac{\pi}{2}-CK^{-1}x_{n})\\ &=\frac{(n-1)}{2}\pi+C^{-1}x_{n}-CK^{-1}x_{n}.\end{split}\] Here the second line uses that \(\lambda^{\prime}_{1}\leq\cot(\frac{\pi}{2n})\leq C\). Choosing \(K\gg C\), we obtain \[\arctan D^{2}\underline{u}_{n}\geq\frac{(n-1)}{2}\pi,\] hence the claim. 

In the general case, without the acute simplex assumption, the above barrier is not globally defined on \(\Delta_{n}\). Our remedy is an inductive sequence of barriers, which aims to prove boundary continuity from vertices up to codim one faces. These will depend on parameters \(1\ll K_{0}\ll K_{1}\ll\ldots K_{n-1}\). The barrier for the vertex \((0,\ldots 0)\) is simply \[\underline{u}_{n,0}=K_{0}\sum_{1}^{n}x_{i}\log\frac{x_{i}}{2\pi}.\] The barrier for the \(m\)-dimensional face \(\Delta_{m}\subset\partial\Delta_{n}\) is \[\underline{u}_{n,m}=u_{m}(\tilde{x}_{1},\ldots\tilde{x}_{m})+\sum_{i\leq m}\sum_{j>m}(L(x_{i})+L(x_{j})-L(x_{i}+x_{j}))+K_{m}\sum_{m+1}^{n}x_{j}\log\frac{x_{j}}{2\pi}.\] Here \(\tilde{x}_{i}\) are linear functions of the form \(x_{i}+\sum_{j>m}a_{ij}x_{j}\) for some \(a_{ij}\in\mathbb{R}\), such that \(g(d\tilde{x}_{i},dx_{j})=0\) for all \(i\leq m\) and \(j>m\). As a caveat, \(\tilde{x}_{i}\) depends on the choice of \(m\). 
The barrier \(\underline{u}_{n,m}\) is only defined on the subset of \(\Delta_{n}\) with \((\tilde{x}_{i})\in\Delta_{m}\), namely \[0\leq\tilde{x}_{i}\leq\pi,\quad\sum_{1}^{m}\tilde{x}_{i}\leq\pi.\] Almost the same argument as above shows

**Lemma 2.13**.: _On the interior of the domain of definition of \(\underline{u}_{n,m}\), it satisfies \(\arctan D^{2}\underline{u}_{n,m}\geq\frac{(n-1)}{2}\pi\)._

We now prove the existence theorem 2.9.

Proof.: We can find strictly convex smooth domains \(\Omega_{\epsilon}\subset\Delta_{n}\), whose boundary is within \(\epsilon\)-Hausdorff distance to \(\partial\Delta_{n}\). The inductive hypothesis implies that the boundary data \(u_{n}|_{\partial\Delta_{n}}\) has an \(O(x\log x)\)-boundary modulus of continuity bound. We can extend \(u_{n}|_{\partial\Delta_{n}}\) to some function on the \(\epsilon\)-neighbourhood of \(\partial\Delta_{n}\) preserving the \(O(x\log x)\)-boundary modulus of continuity, so by restriction we obtain a function on \(\partial\Omega_{\epsilon}\). By the main result of Harvey-Lawson [4], there is a unique continuous viscosity solution \(u_{n,\epsilon}\) on \(\Omega_{\epsilon}\) with this boundary data.

The induction hypothesis implies the \(O(x\log x)\)-modulus of continuity for the barrier functions \(\bar{u}_{n},\underline{u}_{n,0}\), whence on \(\partial\Omega_{\epsilon}\), the boundary data satisfies \[\underline{u}_{n,0}+C\epsilon\log\epsilon\leq u_{n,\epsilon}\leq\bar{u}_{n}-C\epsilon\log\epsilon.\] Since \(\bar{u}_{n},\underline{u}_{n,0}\) are supersolutions/subsolutions, the inequality holds also on \(\Omega_{\epsilon}\). In particular, \(u_{n,\epsilon}\) is bounded independent of \(\epsilon\). The regularity theorem of Wang-Yuan [13][14] then implies uniform interior \(C^{k,\alpha}\) estimates of \(u_{n,\epsilon}\) on any fixed compact subdomain of \(\operatorname{Int}(\Delta_{n})\), independent of small enough \(\epsilon\). We can then take a subsequential limit as \(\epsilon\to 0\), to obtain a smooth limit \(u_{n}\) on the interior of \(\Delta_{n}\). This limit then satisfies \[\underline{u}_{n,0}\leq u_{n}\leq\bar{u}_{n}.\] This gives the modulus of continuity at the vertices: \[0\geq u_{n}\geq K_{0}\sum_{1}^{n}x_{i}\log\frac{x_{i}}{2\pi}.\]

Next we compare \(u_{n,\epsilon}\) with the lower barrier \(\underline{u}_{n,1}\) on the domain of definition of \(\underline{u}_{n,1}\), which may be smaller than \(\Delta_{n}\). The point is that outside this domain of definition, the previous barrier \(\underline{u}_{n,0}\) already provided the requisite a priori lower bound on \(u_{n,\epsilon}\). By the choice \(K_{1}\gg K_{0}\), we can ensure that on the boundary of the domain of definition, we have \[\underline{u}_{n,1}+C\epsilon\log\epsilon\leq u_{n,\epsilon}.\] Passing to the \(\epsilon\to 0\) limit, we obtain the \(O(\sum_{2}^{n}x_{i}\log x_{i})\) boundary continuity along \(\Delta_{1}\). Continuing with this inductive yoga, we obtain the \(O(\sum_{m+1}^{n}x_{i}\log x_{i})\) boundary continuity along \(\Delta_{m}\), until we reach the codimension one face \(m=n-1\). In particular, we obtain the continuity of \(u_{n}\) up to the boundary, with the prescribed boundary value, so \(u_{n}\) solves the Dirichlet problem.

Finally, the uniqueness of the solution follows from the standard comparison principle argument, _cf._ e.g. Harvey-Lawson [4, Thm 6.3]. 

### Improved lower bound near vertices

Near the vertices of \(\Delta_{n}\) the boundary modulus of continuity bound can be improved by removing the log factor. 
We focus on the neighbourhood of the origin.

**Proposition 2.14**.: Let \(n\geq 2\). For \(x\in\Delta_{n}\), we have an improved estimate \[u_{n}(x_{1},\ldots x_{n})\geq-C(n)\sum_{1}^{n}x_{i},\] for some constant \(C(n)\) depending only on the dimension \(n\) and \(g_{ij}\).

The proof depends on another barrier argument. Let \[v=K\sum_{i<j}(L(x_{i})+L(x_{j})-L(x_{i}+x_{j}))-K\sum_{1}^{n}x_{i},\quad K\gg 1.\]

**Lemma 2.15**.: \(\arctan D^{2}v\geq\frac{n-1}{2}\pi\) _in a neighbourhood of the origin._

Proof.: By symmetry, without loss \(x_{1}\leq x_{2}\leq\ldots\leq x_{n}\ll 1\). We will focus primarily on the case where \(g_{std}=\sum dx_{i}^{2}\), and indicate how to adapt to general \(g\). The Hessian of each summand \(L(x_{i})+L(x_{j})-L(x_{i}+x_{j})\) has been computed in Lem 2.4, which is positive definite, with a large eigenvalue comparable to \(\min(x_{i},x_{j})^{-1}\) along an eigendirection \(e_{i,j}\) approximated by \(-\partial_{x_{i}}+\frac{x_{i}}{x_{j}}\partial_{x_{j}}\). Notice that \(e_{1,2},e_{2,3},\ldots,e_{n-1,n}\) are effectively linearly independent (_i.e._ the \(g_{std}\)-inner product matrix for these vectors is uniformly elliptic even as \(x\to 0\)), so we deduce that on the \(k\)-dimensional subspace \[V_{k}=\operatorname{span}\{e_{1,2},\ldots e_{k,k+1}\},\] we have \[D^{2}v\geq\frac{1}{C}K\sum_{1}^{k}x_{i}^{-1}e_{i,i+1}^{*}\otimes e_{i,i+1}^{*}\geq\frac{K}{Cx_{k}}g_{std}.\] Since the general \(g\) is uniformly equivalent to \(g_{std}\), we see that \(D^{2}v\geq\frac{K}{Cx_{k}}g\) on \(V_{k}\). By the minmax characterisation of eigenvalues, the \(k\)-th largest eigenvalue of \(D^{2}v\) with respect to \(g\) is bounded below by \(\frac{K}{Cx_{k}}\), for \(k=1,\ldots n-1\).

The 2D Hessian of each summand also has a small eigenvalue (with respect to \(g_{std}\)) comparable to \(\min(x_{i},x_{j})\), along an eigendirection \(e_{i,j}^{\prime}\) approximated by \(\frac{x_{i}}{x_{j}}\partial_{x_{i}}+\partial_{x_{j}}\). Let \(e_{n}=e_{n-1,n}^{\prime}\), whose associated eigenvalue is bounded below by \(C^{-1}x_{n-1}\). Then \(e_{1,2},\ldots e_{n-1,n},e_{n}\) are effectively linearly independent, and by the minmax arguments as above \[D^{2}v\geq C^{-1}Kx_{n-1}g_{std}\geq C^{-1}Kx_{n-1}g.\] Hence the smallest eigenvalue of \(D^{2}v\) is bounded below by \(C^{-1}Kx_{n-1}\). Combining the above, \[\arctan D^{2}v\geq(n-1)\arctan(\frac{K}{Cx_{n-1}})+\arctan(\frac{Kx_{n-1}}{C})\geq\frac{n-1}{2}\pi-\frac{Cx_{n-1}}{K}+\frac{Kx_{n-1}}{C}.\] Choosing \(K\gg C\) then shows the Lemma. 

We now prove the vertex modulus of continuity Prop. 2.14.

Proof.: In the case \(n=2\), without loss \(x_{1}\leq x_{2}\), and we observe \[L(x_{1})+L(x_{2})-L(x_{1}+x_{2})\geq-Cx_{1}\log\frac{x_{2}}{x_{1}}\geq-C(x_{1}+x_{2}).\] In general, we argue by induction, and assume the statement is already known for \(n-1\). Consider the barrier \(v\) for \(K\gg 1\), which is a subsolution for \(x\) close to the origin, say \(\sum x_{i}^{2}\leq\delta\). Since each summand \(L(x_{i})+L(x_{j})-L(x_{i}+x_{j})\) is nonpositive near the origin, this barrier function is bounded above by \(-K\sum x_{i}\). On the locus \(\sum x_{i}^{2}=\delta\), by choosing \(K\) large enough, the term \(-K\sum x_{i}\) is more negative than the \(C^{0}\)-norm of \(u_{n}\), so the barrier lies below \(u_{n}\) for \(\sum x_{i}^{2}=\delta\). Moreover, by the inductive hypothesis, by choosing \(K\) large enough, the negativity of \(v\) will force the barrier to lie below \(u_{n}\) on \(\{\sum x_{i}^{2}\leq\delta\}\cap\partial\Delta_{n}\). 
Now by the comparison principle, the barrier \(v\leq u_{n}\) for all \(x\in\Delta_{n}\) inside \(\sum x_{i}^{2}\leq\delta\). Since \(v\geq-CK\sum_{1}^{n}x_{i}\), this implies Prop. 2.14. 

### Improved upper bound and gradient divergence

Our next goal is to improve the upper bound on \(u_{n}\) close to the faces in \(\partial\Delta_{n}\) but away from the vertices, to deduce the divergence of the gradient. The proof involves a delicate barrier argument. We will use the following matrix inequality, which must be well known, but we include a proof for the reader's convenience.

**Lemma 2.16**.: _Suppose \(A\) is a positive semidefinite symmetric matrix, and \(\{e_{i}\}\) is an orthonormal basis of \(\mathbb{R}^{n}\). Then_ \[\arctan A\leq\sum_{1}^{n}\arctan A(e_{i},e_{i}).\]

Proof.: We can write \(A\) in terms of an orthonormal eigenbasis as \(A=\sum_{1}^{n}\lambda_{i}f_{i}\otimes f_{i}\). Then using the concavity of \(\arctan\) on \(\mathbb{R}_{\geq 0}\), \[\sum_{1}^{n}\arctan A(e_{i},e_{i})=\sum_{i}\arctan(\sum_{j}\lambda_{j}(f_{j},e_{i})^{2})\geq\sum_{i}\sum_{j}(f_{j},e_{i})^{2}\arctan\lambda_{j}=\sum_{j}\arctan\lambda_{j}=\arctan A.\] 

**Notation.** Given a small parameter \(\delta>0\), we shall denote the shrunken boundary face \[\Delta_{m}^{\delta}=\{x=(x_{1},\dots x_{m})\in\Delta_{m}|\text{dist}(x,\partial\Delta_{m})\gtrsim\delta\}. \tag{9}\] This notation allows for the freedom to slightly shrink the domain in later arguments.

**Proposition 2.17**.: Let \(n\geq 2\) and \(1\leq m\leq n-1\). In the neighbourhood of the interior of the \(\Delta_{m}\) face, namely for \[(x_{1},\dots x_{m})\in\Delta_{m}^{\delta},\quad x_{m+1},\dots x_{n}\ll 1,\] we have \[u_{n}(x_{1},\ldots x_{n})-u_{m}(x_{1},\ldots x_{m})\leq-C(\delta)^{-1}\max_{m+1}^{n}x_{k}|\log x_{k}|^{1/2}+C(\delta)^{\prime}\max_{m+1}^{n}x_{k},\] where \(C,C^{\prime}\) are constants which depend only on \(n,g,\delta\).

Proof.: Without loss we focus on any point \((\xi_{1},\ldots\xi_{m})\in\Delta_{m}^{\delta}\), and \(0<\xi_{m+1}\leq\ldots\leq\xi_{n}<1\). The goal is to estimate the value \(u_{n}(\xi_{1},\ldots\xi_{n})\) from above. We take a cutoff function \(\chi(x_{1},\ldots,x_{n})\) supported around \((\xi_{1},\ldots\xi_{m},0,\ldots 0)\), which vanishes near \(\partial\Delta_{n}\setminus\operatorname{Int}(\Delta_{m})\), so that \[0\leq\chi\leq 1,\quad\chi(\xi_{1},\ldots\xi_{n})=1,\quad|D\chi|+|D^{2}\chi|\leq C(\delta).\] We take another auxiliary one-variable function \(\psi\), with the following properties: \[\begin{cases}\psi(x)=\frac{1}{\xi_{n}|\log\xi_{n}|}x^{2},&x\leq 0,\\ \psi(x)>0,&x>0,\\ \psi^{\prime\prime}(x)\leq\frac{2}{\xi_{n}|\log\xi_{n}|},&0\leq x\leq\xi_{n},\\ \psi^{\prime\prime}(x)=0,&x>\xi_{n}.\end{cases}\] We will use the barrier function \[\begin{split}v&=\sum_{m+1}^{n}\psi(x_{k}-\xi_{k})+(1-\frac{x_{m+1}+\ldots x_{n}}{\pi})u_{m}(\frac{x_{1}}{1-\frac{\sum_{m+1}^{n}x_{k}}{\pi}},\ldots\frac{x_{m}}{1-\frac{\sum_{m+1}^{n}x_{k}}{\pi}})\\ &+\Lambda\chi(x_{1},\ldots x_{n})\sum_{m+1}^{n}\frac{x_{k}\log x_{k}}{|\log\xi_{n}|}+\frac{\Lambda}{2}\sum_{m+1}^{n}x_{k},\end{split}\] where \(\Lambda\) is a non-negative parameter to be determined, and we will keep track of the dependence on \(\Lambda\) in the estimates. 
By the convexity of \(u_{n}\), and the fact that \(u_{n}=0\) at the vertices, \[u_{n}(x_{1},\ldots x_{n})\leq(1-\frac{x_{m+1}+\ldots x_{n}}{\pi})u_{m}(\frac{x_{1}}{1-\frac{\sum_{m+1}^{n}x_{k}}{\pi}},\ldots\frac{x_{m}}{1-\frac{\sum_{m+1}^{n}x_{k}}{\pi}}),\] and the inequality is strict for \((x_{m+1},\ldots x_{n})\in\operatorname{Int}(\Delta_{n-m})\). For \(\Lambda=0\), this implies \(v>u_{n}\) on \(\Delta_{n}\). We observe that at the point \((\xi_{1},\ldots,\xi_{n})\), the coefficient of the \(\Lambda\) parameter is \[\sum_{m+1}^{n}(\frac{\xi_{k}\log\xi_{k}}{|\log\xi_{n}|}+\frac{1}{2}\xi_{k})\leq-\frac{1}{2}\sum_{m+1}^{n}\xi_{k}<0.\] Thus as \(\Lambda\) increases, the value of \(u_{n}-v\) at \((\xi_{1},\ldots\xi_{n})\) will eventually switch sign, so the graphs of \(u_{n}\) and \(v\) must first touch at some parameter value \(\Lambda_{0}\), where \[\begin{split}\frac{1}{2}\Lambda_{0}&\leq\xi_{n}^{-1}(-u_{n}(\xi)+(1-\frac{\xi_{m+1}+\ldots\xi_{n}}{\pi})u_{m}(\frac{\xi_{1}}{1-\frac{\sum_{m+1}^{n}\xi_{k}}{\pi}},\ldots\frac{\xi_{m}}{1-\frac{\sum_{m+1}^{n}\xi_{k}}{\pi}}))\\ &\leq\frac{-u_{n}(\xi_{1},\ldots\xi_{n})+u_{m}(\xi_{1},\ldots\xi_{m})}{\xi_{n}}+C(\delta).\end{split} \tag{10}\] Here the second inequality uses the interior Lipschitz bound on \(u_{m}\). If we know \(\Lambda_{0}\geq|\log\xi_{n}|^{1/2}\), then \[\frac{-u_{n}(\xi_{1},\ldots\xi_{n})+u_{m}(\xi_{1},\ldots\xi_{m})}{\xi_{n}}\geq\frac{1}{2}|\log\xi_{n}|^{1/2}-C(\delta),\] which implies the conclusion of the Proposition. Thus without loss \(\Lambda_{0}\leq|\log\xi_{n}|^{1/2}\).

We will focus on the point \(x=(x_{1},\ldots x_{n})\) where the two graphs touch. Clearly this point lies on the set \(\{\chi\sum_{m+1}^{n}\frac{x_{k}\log x_{k}}{|\log\xi_{n}|}<0\}\subset\operatorname{Int}(\Delta_{n})\). Furthermore, \[\frac{\Lambda_{0}}{2}\sum_{m+1}^{n}x_{k}+\sum_{m+1}^{n}\psi(x_{k}-\xi_{k})+\Lambda_{0}\chi(x_{1},\ldots x_{n})\sum_{m+1}^{n}\frac{x_{k}\log x_{k}}{|\log\xi_{n}|}\leq 0. \tag{11}\] We write \(x_{k_{0}}=\max_{m+1\leq k\leq n}x_{k}\), and deduce some constraints on \(x_{k_{0}}\). If we ignore the non-negative \(\psi\) terms, then \[\frac{1}{2}x_{k_{0}}\leq-\sum_{m+1}^{n}\frac{x_{k}\log x_{k}}{|\log\xi_{n}|}\leq(n-m)x_{k_{0}}|\log x_{k_{0}}||\log\xi_{n}|^{-1}\ll 1,\] so \(x_{k_{0}}\leq\xi_{n}^{\frac{1}{2(n-m)}}\ll 1\). If \(x_{n}\leq x_{k_{0}}<\frac{\xi_{n}}{2}\), then (11) implies \[\xi_{n}\leq 4\psi(x_{n}-\xi_{n})|\log\xi_{n}|\leq C\Lambda_{0}x_{k_{0}}|\log x_{k_{0}}|.\] Since we can without loss assume \(\Lambda_{0}\leq|\log\xi_{n}|^{1/2}\), we obtain \[x_{k_{0}}\geq\frac{\xi_{n}}{C\Lambda_{0}|\log\xi_{n}|}\geq\frac{\xi_{n}}{C|\log\xi_{n}|^{2}}.\] In summary, \[\frac{\xi_{n}}{C|\log\xi_{n}|^{2}}\leq x_{k_{0}}\leq\xi_{n}^{\frac{1}{2(n-m)}}\ll 1. \tag{12}\]

Since \(v\geq u_{n}\) and \(v(x)=u_{n}(x)\) at the interior point \(x\), the gradients of \(v\) and \(u_{n}\) agree at \(x\), and \(D^{2}u_{n}(x)\leq D^{2}v(x)\). In particular, \(D^{2}v(x)\geq 0\) and \[\arctan D^{2}v(x)\geq\frac{n-1}{2}\pi. \tag{13}\] We will now derive an upper bound on \(\arctan D^{2}v(x)\). On the \(m\)-dimensional linear subspace \(\{dx_{m+1}=\ldots=dx_{n}=0\}\subset T_{x}\mathbb{R}^{n}\), we find an eigenbasis \(e_{1},\ldots e_{m}\) for \(D^{2}u_{m}\) with respect to the metric \(g\), with eigenvalues \(\lambda_{1}^{\prime},\ldots\lambda_{m}^{\prime}\). 
By the construction of \(u_{m}\), we have \(\sum_{1}^{m}\arctan\lambda_{i}^{\prime}=\frac{m-1}{2}\pi\), and by the support property of \(\chi\) and the interior smoothness of \(u_{m}\), all the \(\lambda_{i}^{\prime}\) are bounded from above by constants \(C(\delta)\). The contribution of the \(u_{m}\) term to \(D^{2}v(e_{i},e_{i})\) is bounded above by \((1+Cx_{k_{0}})\lambda_{i}^{\prime}\), and taking advantage of the fact that \(D^{2}v(e_{i},e_{i})\) only involves differentiation in the \(x_{1},\ldots x_{m}\) variables, \[\begin{split}D^{2}v(e_{i},e_{i})&\leq(1+Cx_{k_{0}})\lambda_{i}^{\prime}+C\Lambda_{0}|D^{2}\chi|\sum_{k=m+1}^{n}|x_{k}\log x_{k}||\log\xi_{n}|^{-1}\\ &\leq(1+Cx_{k_{0}})\lambda_{i}^{\prime}+\frac{C(\delta)}{|\log\xi_{n}|}\Lambda_{0}x_{k_{0}}|\log x_{k_{0}}|\\ &\leq\lambda_{i}^{\prime}+C(\delta)(\Lambda_{0}+1)x_{k_{0}}.\end{split}\] The last inequality here uses the lower bound part of (12). By the mean value inequality, \(\arctan D^{2}v(e_{i},e_{i})\leq\arctan\lambda_{i}^{\prime}+C(\delta)(\Lambda_{0}+1)x_{k_{0}}\), so after summation, \[\sum_{1}^{m}\arctan D^{2}v(e_{i},e_{i})\leq\frac{m-1}{2}\pi+C(\delta)(\Lambda_{0}+1)x_{k_{0}}. \tag{14}\]

We now pick \(e_{m+1}\) to be the unit vector orthogonal to this \(m\)-dimensional subspace, satisfying \[dx_{i}(e_{m+1})=0,\quad\forall i\in\{m+1,\ldots n\}\setminus\{k_{0}\},\] and we complete \(e_{1},\ldots e_{m+1}\) into an orthonormal basis \(e_{1},\ldots e_{n}\). We will estimate \(D^{2}v(e_{m+1},e_{m+1})\). The \(u_{m}\) term contribution is bounded by \(C(\delta)\) by the support property of \(\chi\). The \(\psi\) term contribution is bounded by \(\frac{C}{\xi_{n}|\log\xi_{n}|}\) if \(x_{k_{0}}\leq 2\xi_{n}\), and vanishes otherwise. Thus for \(x_{k_{0}}\leq 2\xi_{n}\), \[D^{2}v(e_{m+1},e_{m+1})\leq\frac{C}{\xi_{n}|\log\xi_{n}|}+C(\delta)+\frac{C\Lambda_{0}}{|\log\xi_{n}|x_{k_{0}}}+\frac{C\Lambda_{0}|D\chi||\log x_{k_{0}}|}{|\log\xi_{n}|}\leq C(\delta)(\frac{\Lambda_{0}+1}{|\log\xi_{n}|x_{k_{0}}}+1).\] The same estimate holds for \(x_{k_{0}}>2\xi_{n}\), where the \(\frac{C}{\xi_{n}|\log\xi_{n}|}\) term does not appear. Hence in both cases, \[\arctan D^{2}v(e_{m+1},e_{m+1})\leq\frac{\pi}{2}-C(\delta)^{-1}\min(1,\frac{|\log\xi_{n}|x_{k_{0}}}{\Lambda_{0}+1}).\] Combined with Lemma 2.16 and (14), we obtain the upper bound \[\arctan D^{2}v(x)\leq\sum_{1}^{n}\arctan D^{2}v(e_{i},e_{i})\leq\frac{n-1}{2}\pi+C(\delta)(\Lambda_{0}+1)x_{k_{0}}-C(\delta)^{-1}\min(1,\frac{|\log\xi_{n}|x_{k_{0}}}{\Lambda_{0}+1}).\] Comparing this with the lower bound (13), \[C(\delta)(\Lambda_{0}+1)x_{k_{0}}-\min(1,\frac{|\log\xi_{n}|x_{k_{0}}}{\Lambda_{0}+1})\geq 0. \tag{15}\] Recall that without loss \(\Lambda_{0}\leq|\log\xi_{n}|^{1/2}\), so by (12), \[C(\delta)(\Lambda_{0}+1)x_{k_{0}}\leq C(\delta)|\log\xi_{n}|^{1/2}\xi_{n}^{\frac{1}{2(n-m)}}\ll 1,\] whenever \(\xi_{n}\) is small enough depending on \(C(\delta)\). Thus (15) reduces to \[C(\delta)(\Lambda_{0}+1)x_{k_{0}}-\frac{|\log\xi_{n}|x_{k_{0}}}{\Lambda_{0}+1}\geq 0,\] hence \[\Lambda_{0}+1\geq C(\delta)^{-1}|\log\xi_{n}|^{1/2}\gg 1,\] which implies the Prop. using (10). 

Prop. 2.17 has an important consequence: the divergence of the gradient.

**Corollary 2.18**.: (Gradient divergence) Let \(n\geq 2\) and \(1\leq m\leq n-1\). 
In the neighbourhood of the interior of the \(\Delta_{m}\) face, namely for \[(x_{1},\ldots x_{m})\in\Delta_{m}^{\delta},\quad x_{m+1},\ldots x_{n}\ll 1,\] we have \[\partial_{i}u_{n}(x_{1},\ldots x_{n})\leq-C(\delta)^{-1}|\log\max_{m+1}^{n}x_{k}|^{1/2}+C(\delta)^{\prime},\quad i=m+1,\ldots n.\] In particular, the gradient \(du_{n}(x)\) _diverges uniformly to infinity_ as \(x\) tends to \(\partial\Delta_{n}\), bounded away from the vertices.

Proof.: Without loss \(x_{m+1}\leq\ldots\leq x_{n}\). Let \(K\gg 1\) be a parameter much larger than the \(C(\delta)\) from Prop. 2.17, and assume that \(x_{n}|\log x_{n}|\ll K^{-1}\). Using Prop. 2.17, \[u_{n}(x_{1},\ldots x_{m},x_{m+1},\ldots x_{i}+Kx_{n}|\log x_{n}|^{1/2},x_{i+1},\ldots x_{n})-u_{m}(x_{1},\ldots x_{m})\leq-C(\delta)^{-1}Kx_{n}|\log x_{n}|+C^{\prime}(\delta)Kx_{n}|\log x_{n}|^{1/2}.\] Recall from (8) that \[|u_{n}(x_{1},\ldots x_{n})-u_{m}(x_{1},\ldots x_{m})|\leq C\sum_{m+1}^{n}x_{k}|\log x_{k}|\leq Cx_{n}|\log x_{n}|.\] Since \(K\gg C(\delta)\), we can absorb the \(Cx_{n}|\log x_{n}|\) term, so by convexity \[\partial_{i}u_{n}(x_{1},\ldots x_{n})\,Kx_{n}|\log x_{n}|^{1/2}\leq u_{n}(x_{1},\ldots x_{m},x_{m+1},\ldots x_{i}+Kx_{n}|\log x_{n}|^{1/2},x_{i+1},\ldots x_{n})-u_{n}(x_{1},\ldots x_{n})\leq-\frac{1}{2}KC(\delta)^{-1}x_{n}|\log x_{n}|+C^{\prime}(\delta)Kx_{n}|\log x_{n}|^{1/2}.\] This implies the gradient divergence bound with modified constants. 

**Corollary 2.19**.: Let \(n\geq 2\). The convex function \(u_{n}\) admits no subgradient at any point \(x\in\partial\Delta_{n}\) except for the vertices.

### Gradient bounds

**Lemma 2.20**.: _Near the boundary, the gradient can diverge at most logarithmically: in the region_ \[(x_{1},\ldots x_{m})\in\Delta_{m}^{\delta},\quad 0<x_{m+1},\ldots,x_{n}\ll\delta,\] _we have_ \[|\partial_{i}u_{n}|\leq C(\delta),\quad i=1,2,\ldots m,\qquad C\log x_{i}\leq\partial_{i}u_{n}\leq C(\delta),\quad i=m+1,\ldots n.\]

Proof.: By the boundedness and convexity of \(u_{n}\), we easily see that \[-C(\delta)\leq\partial_{i}u_{n}\leq C(\delta),\quad i=1,\ldots m.\] Now we apply the boundary modulus of continuity (_cf._ Theorem 2.9), together with convexity, to deduce \[C(\delta)\geq\partial_{n}u_{n}\geq\frac{u_{n}(x_{1},x_{2},\ldots x_{n})-u_{n}(x_{1},x_{2},\ldots x_{n-1},0)}{x_{n}}\geq C\log x_{n}.\] Similarly for \(i=m+1,\ldots n\). 

This gradient bound can be improved in a way which sheds some light on the inductive structure in \(n\): when we are very close to a lower dimensional face, then the directional derivatives tangential to the face would be close to the gradient in the lower dimensional problem.

**Lemma 2.21**.: _In the region with_ \[(x_{1},\ldots x_{m})\in\Delta_{m}^{\delta},\quad 0<x_{m+1},\ldots x_{n}\ll\delta,\] _we have for \(i\leq m\),_ \[|\partial_{i}u_{n}(x_{1},\ldots x_{n})-\partial_{i}u_{m}(x_{1},\ldots x_{m})|\leq C(\delta)\big{(}\max_{k\geq m+1}x_{k}|\log x_{k}|\big{)}^{1/2}.\] _Here \(C(\delta)\) denotes some constant which depends only on \(n,g,\delta\)._

Proof.: Without loss \(x_{n}=\max_{k\geq m+1}x_{k}\ll 1\). Denote \(y_{i}=\partial_{i}u_{n}(x)\) at the given point \(x\in\operatorname{Int}(\Delta_{n})\). By convexity, \[u_{n}(x^{\prime})-u_{n}(x)\geq\sum_{1}^{n}y_{i}(x^{\prime}_{i}-x_{i}).\] We consider \(x^{\prime}\in\Delta_{m}\), so that \(x^{\prime}_{m+1}=\ldots=x^{\prime}_{n}=0\). 
Then by Lemma 2.20, \[u_{n}(x^{\prime})-u_{n}(x)\geq\sum_{1}^{m}y_{i}(x^{\prime}_{i}-x_{i})-\sum_{m+1}^{n}y_{i}x_{i}\geq\sum_{1}^{m}y_{i}(x^{\prime}_{i}-x_{i})-C(\delta)\sum_{m+1}^{n}x_{i}.\] By Theorem 2.9, \[|u_{n}(x)-u_{m}(x_{1},\ldots x_{m})|\leq-C\sum_{m+1}^{n}x_{i}\log\frac{x_{i}}{2\pi}\leq Cx_{n}|\log x_{n}|.\] By the boundary prescription \(u_{n}(x^{\prime})=u_{m}(x^{\prime})\), so combining the above, \[u_{m}(x^{\prime})-u_{m}(x_{1},\ldots x_{m})\geq\sum_{1}^{m}y_{i}(x^{\prime}_{i}-x_{i})+C(\delta)x_{n}\log x_{n}.\] By the interior smoothness of \(u_{m}\) (_cf._ Theorem 2.9), we can find some ball in \(\Delta_{m}\) of radius comparable to \(\delta\), such that the Hessian of \(u_{m}\) has an upper bound \(C(\delta)\). This gives \[u_{m}(x^{\prime})-u_{m}(x_{1},\ldots x_{m})\leq\sum_{1}^{m}\partial_{i}u_{m}(x_{1},\ldots x_{m})(x^{\prime}_{i}-x_{i})+C(\delta)\sum_{1}^{m}|x_{i}-x^{\prime}_{i}|^{2},\] for all \(x^{\prime}\) within the ball. Contrasting the two bounds gives \[\sum_{1}^{m}(y_{i}-\partial_{i}u_{m}(x_{1},\dots x_{m}))(x^{\prime}_{i}-x_{i})\leq C(\delta)\sum_{1}^{m}|x_{i}-x^{\prime}_{i}|^{2}+C(\delta)x_{n}|\log x_{n}|.\] Choosing \(x^{\prime}_{i}-x_{i}=\pm(x_{n}|\log x_{n}|)^{1/2}\) in each coordinate direction separately then shows the result. 

**Corollary 2.22**.: Let \(x^{\prime}\) be an interior point of \(\Delta_{m}\subset\partial\Delta_{n}\), then for any sequence of \(x\in\operatorname{Int}(\Delta_{n})\) converging to \(x^{\prime}\), the tangential gradient also converges: \[\partial_{i}u_{n}(x)\to\partial_{i}u_{m}(x^{\prime}),\quad i=1,\dots m.\]

We now turn the attention to the vertices.

**Lemma 2.23**.: _The set of subgradients at the origin_ \[Du_{n}(0)=\{y\in\mathbb{R}^{n}:u_{n}(x)\geq u_{n}(0)+\langle x,y\rangle,\forall x\in\Delta_{n}\}\] _is sandwiched between translated copies of the negative quadrant:_ \[-C(n)(1,\dots,1)-(\mathbb{R}^{n}_{\geq 0})\subset Du_{n}(0)\subset-(\mathbb{R}^{n}_{\geq 0}).\] _Here \(C(n)\) is the large constant from Prop. 2.14. In particular \(Du_{n}(0)\) is a closed convex set with nonempty interior._

_Similarly, the set of subgradients \(Du_{n}((0,\dots,k\text{-th entry }\pi,\dots 0))\) is sandwiched between two polyhedral cones in \(\mathbb{R}^{n}\):_ \[\{y_{k}\geq\max_{i}(0,y_{i})\}+C(n)(0,\dots 1,\dots 0)\subset Du_{n}((\dots,\pi,\dots))\subset\{y_{k}\geq\max_{i}(0,y_{i})\}.\]

Proof.: Since \(u_{n}=0\) at all the vertices, by convexity \(y_{i}\leq 0\) for all \(y\in Du_{n}(0)\), namely \(Du_{n}(0)\subset-(\mathbb{R}^{n}_{\geq 0})\). For the other inclusion, notice that Prop. 2.14 implies the non-emptiness of \(Du_{n}(0)\), and that \(x_{i}\geq 0\) implies that translation in the \(-\partial_{y_{i}}\) direction remains inside \(Du_{n}(0)\). 

**Lemma 2.24**.: _Let \(n\geq 2\). Then the sets of subgradients at the different vertices do not intersect._

Proof.: If \(y\) is a subgradient at two different vertices, then it is a subgradient at any point on the line segment joining the two vertices. No such subgradient exists by Cor. 2.19. 

**Proposition 2.25**.: Let \(n\geq 2\). The gradient image of \(\operatorname{Int}(\Delta_{n})\) is \[Du_{n}(\operatorname{Int}(\Delta_{n}))=\mathbb{R}^{n}\smallsetminus\bigcup Du_{n}(\text{vertices}).\] The set of limiting points for \(du_{n}(x)\) as \(x\in\operatorname{Int}(\Delta_{n})\) tends to a vertex, is \(\partial Du_{n}(\text{vertex})\).

Proof.: First we observe that any \(y\in\mathbb{R}^{n}\) is a subgradient at some \(x\in\Delta_{n}\). This is because the graph of the affine linear function \(\langle x,y\rangle+a\) has to touch the graph of \(u_{n}\) as \(a\) increases from negative infinity. 
By Cor. 2.19, this \(x\) is either an interior point in \(\Delta_{n}\), or is one of the vertices. This shows \[\mathbb{R}^{n}=Du_{n}(\operatorname{Int}(\Delta_{n}))\cup\bigcup Du_{n}(\text{vertices}).\] Notice that \(Du_{n}(\operatorname{Int}(\Delta_{n}))\) does not intersect \(Du_{n}(\text{vertices})\) by the strict convexity of \(u_{n}\) in the interior, so \[Du_{n}(\operatorname{Int}(\Delta_{n}))=\mathbb{R}^{n}\setminus\bigcup Du_{n}(\text{vertices}).\] Now the sets of subgradients at the vertices are disjoint closed sets (_cf._ Lemma 2.24). This implies the claim on the limit of gradients. 

**Corollary 2.26**.: Let \(n\geq 2\). The closure of the gradient image \(\overline{Du_{n}(\operatorname{Int}(\Delta_{n}))}\) stays within bounded distance to the tropical hypersurface \[\Gamma_{std}=\text{non-smooth locus of }\max(0,y_{1},\dots y_{n})\subset\mathbb{R}_{y}^{n}.\]

Proof.: At a given point \(y\in\overline{Du_{n}(\operatorname{Int}(\Delta_{n}))}\), without loss we assume \(\max(0,y_{1},\dots y_{n})\) is achieved by zero, namely \(y_{i}\leq 0\). By Lemma 2.23, the interior of \(Du_{n}(0)\) contains the open cone \(-C(n)(1,\dots 1)-\mathbb{R}_{>0}^{n}\). Prop. 2.25 forces \(y\) to lie inside its complement, so \(\max_{i}y_{i}\geq-C(n)\). Thus \(\operatorname{dist}(y,\Gamma_{std})\leq C(n)\). 

### Partial Legendre transform

The gradient divergence near the boundary is an indication that the \(x_{i}\) coordinates are not best suited for studying the asymptotic geometry of the special Lagrangian graph. Instead, we shall use a mixture of the position and momentum coordinates, which analytically corresponds to the _partial Legendre transform_. In the Lagrangian version, this idea already appeared in Matessi [6].

Given \(1\leq m\leq n-1\) and \((x_{1},\dots x_{m})\in\operatorname{Int}(\Delta_{m})\), we can consider the \((n-m)\)-dimensional slice of \(\Delta_{n}\) prescribed by \((x_{1},\dots x_{m})\). The partial Legendre transform is simply doing the Legendre transform on every such slice, by replacing \(x_{m+1},\dots x_{n}\) with the \(y_{m+1},\dots y_{n}\) coordinates, where \(y_{i}=\partial_{x_{i}}u_{n}\).

**Lemma 2.27**.: _On each constant \((x_{1},\dots x_{m})\) slice of \(\operatorname{Int}(\Delta_{n})\), the partial derivative map \((x_{m+1},\dots x_{n})\mapsto(y_{m+1},\dots y_{n})\) is a diffeomorphism onto \(\mathbb{R}^{n-m}\)._

Proof.: The partial derivative map is a diffeomorphism onto its image, because \(u_{n}\) restricts to a smooth and strictly convex function on the slice, which is an open convex set. To see that this map is surjective to \(\mathbb{R}^{n-m}\), the argument imitates Prop. 2.25. Notice that the slice stays away from the vertices of \(\Delta_{n}\), since \((x_{1},\dots x_{m})\in\operatorname{Int}(\Delta_{m})\). 

We can then define the partial Legendre transform function \[u_{n,m}(x_{1},\dots x_{m},y_{m+1},\dots y_{n})=\min_{x_{m+1},\dots x_{n}}(u_{n}(x)-\sum_{m+1}^{n}x_{k}y_{k}), \tag{16}\] which satisfies \[\begin{cases}\partial_{x_{k}}u_{n,m}=y_{k},&k=1,\ldots m,\\ \partial_{y_{k}}u_{n,m}=-x_{k},&k=m+1,\ldots n.\end{cases}\] The domain of definition is \(\operatorname{Int}(\Delta_{m})\times\mathbb{R}^{n-m}\). The function \(u_{n,m}\) is convex in \(x_{1},\ldots x_{m}\), and concave in \(y_{m+1},\ldots y_{n}\). 
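These first order relations can be read off from (16) by the envelope principle: at the minimizing point we have \(\partial_{x_{k}}u_{n}=y_{k}\) for \(k>m\), so the variation of the minimizer contributes nothing to first order, and \[du_{n,m}=\sum_{k=1}^{m}\frac{\partial u_{n}}{\partial x_{k}}dx_{k}-\sum_{k=m+1}^{n}x_{k}dy_{k}=\sum_{k=1}^{m}y_{k}dx_{k}-\sum_{k=m+1}^{n}x_{k}dy_{k},\] where in the last step \(y_{k}=\partial_{x_{k}}u_{n}\) for \(k\leq m\) is the momentum coordinate on the special Lagrangian graph.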
**Lemma 2.28**.: _The special Lagrangian graph equation in the \(x_{1},\ldots x_{m},y_{m+1},\ldots y_{n}\) coordinate system reads \(\arctan D^{2}u_{n,m}=\frac{m-1}{2}\pi\)._

**Lemma 2.29**.: _We have_ \[\begin{cases}x_{i}\leq e^{y_{i}/C},\quad i=m+1,\ldots n,\\ \left|u_{n,m}(x_{1},\ldots x_{m},y_{m+1},\ldots)-u_{m}(x_{1},\ldots x_{m})\right|\leq C^{\prime}\max_{k\geq m+1}e^{y_{k}/C}.\end{cases}\] _The constants only depend on \(n,g\)._

Proof.: The global estimate \(x_{i}\leq e^{y_{i}/C}\) follows from the at worst log divergence of the gradient, _cf._ Lemma 2.20, which comes from the boundary modulus of continuity estimate (7), and the constant depends only on \(n,g\). By the definition of the Legendre transform \[u_{n,m}=u_{n}(x)-\sum_{m+1}^{n}x_{k}y_{k},\] so using the boundary modulus of continuity (8), \[|u_{n,m}-u_{m}|\leq|u_{n}(x)-u_{m}|+\sum x_{k}|y_{k}|\leq C\sum x_{k}|\log x_{k}|+\sum x_{k}|y_{k}|\leq C^{\prime}\max_{m+1}^{n}|y_{k}|e^{y_{k}/C}.\] The \(|y_{k}|\) factor can be absorbed by the exponential after modifying constants. 

**Proposition 2.30**.: Given small \(\delta>0\), and \(k\geq 2,\alpha\in(0,1)\), there exist large positive constants \(C(\delta),C^{\prime}(\delta)\), such that on the region \[(x_{1},\ldots x_{m})\in\Delta_{m}^{\delta},\quad y_{m+1},\ldots y_{n}\leq-C(\delta), \tag{17}\] we have the local higher order derivative estimates \[\left\|u_{n,m}-u_{m}\right\|_{C^{k,\alpha}}\leq C(\delta,k,\alpha)\max_{i\geq m+1}e^{y_{i}/C},\] and \[\left\|\partial_{y_{i}}u_{n,m}\right\|_{C^{k,\alpha}}\leq C(\delta,k,\alpha)e^{y_{i}/C},\quad i=m+1,\ldots n.\]

Proof.: The special Lagrangian graph equation satisfied by \(u_{n,m}\) is elliptic, and independent of first order derivatives. The function \(u_{m}\) (independent of the \(y_{m+1},\ldots y_{n}\) variables) is a special solution to the same equation with smoothness bounds on \(\Delta_{m}^{\delta}\times\mathbb{R}^{n-m}\). Moreover, for \(y_{m+1},\ldots y_{n}\) sufficiently negative depending on \(\delta\), the \(C^{0}\)-difference \(|u_{n,m}-u_{m}|\) can be made arbitrarily small by the exponential decay estimate in Lemma 2.29. Thus on a slightly shrunken domain, we can verify the conditions for Savin's small perturbation theorem [12], to bound \(\left\|u_{n,m}-u_{m}\right\|_{C^{2,\alpha}}\) by a small quantity, on local coordinate balls of radius \(O(\delta)\). Once we know \(u_{n,m}\) is \(C^{2,\alpha}\)-close to \(u_{m}\), the problem linearises, and we can bootstrap the exponentially small \(C^{0}\) bound on \(u_{n,m}-u_{m}\) to the \(C^{k,\alpha}\) norm. Similarly, by differentiating the equation we obtain a uniformly elliptic equation on \(\partial_{y_{i}}u_{n,m}=-x_{i}\). The \(C^{0}\) bound on this quantity is \(O(e^{y_{i}/C})\), which we can bootstrap to higher orders. 

**Remark 2.31**.: The reason to separately state the higher derivative bound on \(\partial_{y_{i}}u_{n,m}=-x_{i}\), is that when the \(y_{i}\) are large but not all of comparable order, then some \(x_{i}\) would be much smaller than others.

**Remark 2.32**.: We will regard (17) as giving a local chart of the special Lagrangian graph. Lemma 2.29 implies that \(x\) lies close to the shrunken boundary face \(\Delta_{m}^{\delta}\). Conversely, for \((x_{1},\ldots x_{m})\in\Delta_{m}^{\delta}\), and \(x_{m+1},\ldots x_{n}\) sufficiently small depending on \(\delta\), the gradient divergence estimate in Cor. 2.18 implies that \(y_{i}=\partial_{x_{i}}u_{n}\) is very negative, hence the point lies inside the chart (17). 
These charts cover a \(C(\delta)^{-1}\) neighbourhood of the boundary \(\partial\Delta_{n}\), minus the \(O(\delta)\)-neighbourhood of the vertices. Of course, the interior region \(\{\operatorname{dist}(x,\partial\Delta_{n})\gtrsim C(\delta)^{-1}\}\) is simply described by \(u_{n}\) itself, with its interior smoothness estimates.

## 3 Geometry of the solution

### Special Lagrangian current

We extend \(u_{n}\) to a function over \(\Delta_{n}\cup-\Delta_{n}\) as an odd function, \(u_{n}(-x)=-u_{n}(x)\). Over \(\operatorname{Int}(-\Delta_{n})\), this odd extension \(u_{n}\) satisfies \[\arctan D^{2}u_{n}=-\frac{(n-1)\pi}{2}.\] This differs from \(\hat{\theta}=\frac{n-1}{2}\pi\) by an integer multiple of \(\pi\). Thus under the orientation convention that \(\operatorname{Re}(e^{-\sqrt{-1}\hat{\theta}}\Omega)>0\), the graph of the gradient \(du_{n}\) on \(\operatorname{Int}(\Delta_{n}\cup(-\Delta_{n}))\) defines a smooth special Lagrangian \(\mathbb{L}^{o}\subset T^{*}T^{n}\), with phase angle \(\hat{\theta}\).

The main drawback is that \(\mathbb{L}^{o}\) is not yet a closed subset of \(T^{*}T^{n}\). To remedy this, we view \(\mathbb{L}^{o}\) as defining an integration current \(\mathbb{L}\). The support of \(\mathbb{L}\) then also contains the limit points in the closure of \(\mathbb{L}^{o}\). As a current \(\mathbb{L}\) is sent to \(-\mathbb{L}\) under the involution \[x_{i}\to-x_{i},\quad y_{i}\to y_{i}. \tag{18}\] We will use the notation \(\mathbb{L}_{n}\) when we wish to emphasize the \(n\)-dependence.

**Proposition 3.1**.: The current \(\mathbb{L}\) is a special Lagrangian integral current of phase \(\hat{\theta}\), and \(\partial\mathbb{L}=0\). The volume growth has the upper bound \(\operatorname{Mass}(\mathbb{L}\cap B(R))\leq CR^{n-1}\).

Proof.: We first show that \(\mathbb{L}\) has locally finite mass in \(T^{*}T^{n}\). For simplicity we first treat the special case \(g_{ij}=\delta_{ij}\), and consider a large ball \(B(R)\subset T^{*}T^{n}\). By the special Lagrangian property, \[\operatorname{Mass}(\mathbb{L}\cap B(R))=\int_{\mathbb{L}^{o}\cap B(R)}\operatorname{Re}(e^{-\sqrt{-1}\hat{\theta}}\Omega)=\int_{\mathbb{L}^{o}\cap B(R)}\operatorname{Re}(-\sqrt{-1}\bigwedge_{1}^{n}(dy_{i}-\sqrt{-1}dx_{i})),\] which is a weighted sum of terms like \[\int_{\mathbb{L}^{o}\cap B(R)}dy_{i_{1}}\wedge\ldots dy_{i_{k}}\wedge dx_{i_{k+1}}\ldots\wedge dx_{i_{n}},\quad k<n.\] Notice here that at least one \(dx_{i}\) factor appears in the integrand. Now \(y_{i}=\partial_{x_{i}}u_{n}\), and \(u_{n}\) is a smooth and strictly convex function on \(\operatorname{Int}(\Delta_{n})\). Thus the projection from the portion of \(\mathbb{L}^{o}\) over \(\operatorname{Int}(\Delta_{n})\), to the \(n\)-dimensional plane with coordinates \(y_{i_{1}},\ldots x_{i_{n}}\), is a diffeomorphism onto its image. This image is contained in \(\Delta_{n-k}\times\{|y_{i}|\leq R,\forall i\leq k\}\), so the volume integral is bounded by \(CR^{k}\leq CR^{n-1}\). More generally, we can reduce to the \(g_{ij}=\delta_{ij}\) case by a linear change of coordinates (_cf._ section 2.1).

Next we analyse the boundary of the current \(\mathbb{L}\). From Cor. 2.18 we know the divergence of the gradient \(du_{n}\) near all the boundary strata except for the vertices, so the only possible contribution to \(\partial\mathbb{L}\cap B(R)\) comes from the neighbourhood of the vertices. We focus on the vertex at the origin. 
By the same kind of volume computation as above, for small \(r\ll 1\) and fixed \(R\), we have \[\operatorname{Mass}(\mathbb{L}^{o}\cap\{|x_{i}|<r,|y_{i}|\leq R,\forall i\})\leq CR^{n-1}r.\] Moreover, the integrals of forms involving at least two \(dx_{i}\) factors are suppressed by \(O(R^{n-2}r^{2})\). We can pick a standard cutoff function \(\chi\) which is supported on \(\{|x_{i}|<r,\forall i\}\) (and similarly on the neighbourhood of the other vertices), and satisfies \(|d\chi|\leq Cr^{-1}\). Then for any fixed \((n-1)\)-form \(\alpha\) compactly supported inside \(B(R)\), we have \[\begin{split}\int_{\mathbb{L}^{o}}d(\chi\alpha)&\leq\left\|d\alpha\right\|_{C^{0}}CR^{n-1}r+\sum\left\|\alpha(\partial_{y_{i_{1}}},\ldots\partial_{y_{i_{n-1}}})\right\|_{C^{0}}CR^{n-1}r\left\|d\chi\right\|_{C^{0}}+\left\|\alpha\right\|_{C^{0}}CR^{n-2}r^{2}\left\|d\chi\right\|_{C^{0}}\\ &\leq C\left\|\alpha\right\|_{C^{1}}R^{n-1}r+\sum\left\|\alpha(\partial_{y_{i_{1}}},\ldots\partial_{y_{i_{n-1}}})\right\|_{C^{0}}CR^{n-1}.\end{split}\] Observe that \[\int_{\mathbb{L}^{o}}d((1-\chi)\alpha)=0,\] since \((1-\chi)\alpha\) is supported away from the vertex neighbourhoods. Thus \[\int_{\mathbb{L}^{o}}d\alpha\leq C\left\|\alpha\right\|_{C^{1}}R^{n-1}r+\sum\left\|\alpha(\partial_{y_{i_{1}}},\ldots\partial_{y_{i_{n-1}}})\right\|_{C^{0}}CR^{n-1}.\] Taking the \(r\to 0\) limit, this shows \[\int_{\mathbb{L}^{o}}d\alpha\leq\sum\left\|\alpha(\partial_{y_{i_{1}}},\ldots\partial_{y_{i_{n-1}}})\right\|_{C^{0}}CR^{n-1}.\] Thus \(\partial\mathbb{L}\) has finite mass within the ball \(B(R)\).

Clearly \(\partial\mathbb{L}\) is supported in the cotangent fibres over the vertices of \(\Delta_{n}\), and by the Riesz representation theorem it can be written as an \((n-1)\)-form with signed measure coefficients on these cotangent fibres. We emphasize that only \(dy_{i}\) type factors appear, and the \(dx_{i}\) factors are killed. Now \(\partial\mathbb{L}\) is sent to \(-\partial\mathbb{L}\) under the involution (18). Since the \(dy_{i}\) are invariant under the involution, we conclude that the measure coefficients vanish. This means that \(\partial\mathbb{L}=0\) within any fixed ball. 

**Remark 3.2**.: Geometrically, the boundary contributions from the vertices of \(\Delta_{n}\) and \(-\Delta_{n}\) are both nonzero, but cancel out. It is an instructive exercise to see how this works in \(n=1,2\).

**Proposition 3.3**.: The support of \(\mathbb{L}\) is the closure of \(\mathbb{L}^{o}\), whose projection to \(T^{n}\) is contained in the union of \(\operatorname{Int}(\Delta_{n}\cup-\Delta_{n})\) and the vertices of \(\Delta_{n}\). The set of limiting points of \(\mathbb{L}\) inside the cotangent fibre over the origin, is a copy of \(\partial Du_{n}(0)\subset\mathbb{R}_{y}^{n}\), and similarly with the other vertices.

Proof.: By the divergence of the gradient in Cor. 2.18, there is no limiting point over the boundary of \(\Delta_{n}\cup-\Delta_{n}\) except at the vertices. The claim on the limiting points is a reformulation of Prop. 2.25. 

### Smoothness of the special Lagrangian

Our goal is to show

**Proposition 3.4**.: The special Lagrangian current \(\mathbb{L}_{n}\) is supported on a smooth embedded submanifold (also denoted \(\mathbb{L}_{n}\) henceforth).

By the interior smoothness of \(u_{n}\), the essential task is to prove smoothness at the points over the vertices. 
By the Allard regularity theorem, it suffices to show

**Proposition 3.5**.: Let \((0,\mathfrak{y})\in\operatorname{supp}(\mathbb{L})\), then any tangent cone \(\mathcal{C}\) at \((0,\mathfrak{y})\) is a multiplicity one \(n\)-plane. In fact, it projects to a one-dimensional line in the \(x\)-plane.

The following Lemma is inspired by Joyce's work [5, Lemma 5.7].

**Lemma 3.6**.: _(Multiplicity one projection) Let \((a_{ij})\) be an \(n\times n\) matrix with \(\det>0\), and the last row entries \(a_{nj}\geq 0\). Let \((a^{ij})\) be the inverse matrix, and write the rotated coordinates_ \[\tilde{x}_{i}=a_{ij}x_{j},\quad\tilde{y}_{i}=a^{ji}y_{j}.\] _Let \(\chi(\tilde{y}_{1},\ldots\tilde{y}_{n-1},\tilde{x}_{n})\) be any nonnegative function on \(\mathbb{R}^{n}\), compactly supported inside \(\{\tilde{x}_{n}>0\}\). Then for any tangent cone \(\mathcal{C}\) at \((0,\mathfrak{y})\in\operatorname{supp}(\mathbb{L})\), the projection map from \(\mathcal{C}\) to the \((\tilde{y}_{1},\ldots\tilde{y}_{n-1},\tilde{x}_{n})\) coordinate \(n\)-plane has non-negative Jacobian, and_ \[\int_{\mathcal{C}}\chi d\tilde{y}_{1}\wedge\ldots d\tilde{y}_{n-1}\wedge d\tilde{x}_{n}\leq\int_{\mathbb{R}^{n}}\chi d\tilde{y}_{1}\ldots d\tilde{y}_{n-1}d\tilde{x}_{n}. \tag{19}\]

Proof.: First, we notice that in a fixed small ball \(B_{(0,\mathfrak{y})}(r)\), the projection map from \(\mathbb{L}\cap B_{(0,\mathfrak{y})}(r)\) to the \(\mathbb{R}^{n}\) plane with the \(\tilde{y}_{1},\ldots\tilde{y}_{n-1},\tilde{x}_{n}\) variables, is an injective map over \(\{\tilde{x}_{n}>0\}\). To see this, observe that \(a_{nj}\geq 0\) and \(x\in\Delta_{n}\cup-\Delta_{n}\) forces the preimage to lie on the part of \(\mathbb{L}\) defined by the graph of \(du_{n}\) over \(\operatorname{Int}(\Delta_{n})\). In these rotated coordinates, the Lagrangian \(\mathbb{L}\) satisfies \(\tilde{y}_{i}=\frac{\partial u_{n}}{\partial\tilde{x}_{i}}\) for \(i=1,\ldots n\), so by the strict convexity of \(u_{n}\), the preimage is unique. This shows \[\int_{\mathbb{L}\cap B_{(0,\mathfrak{y})}(r)}\chi d\tilde{y}_{1}\wedge\ldots d\tilde{y}_{n-1}\wedge d\tilde{x}_{n}\leq\int_{\mathbb{R}^{n}}\chi d\tilde{y}_{1}\ldots d\tilde{y}_{n-1}d\tilde{x}_{n},\] for any non-negative function \(\chi(\tilde{y}_{1},\ldots\tilde{y}_{n-1},\tilde{x}_{n})\) on \(\mathbb{R}^{n}\), compactly supported inside \(\{\tilde{x}_{n}>0\}\). The same inequality applies to any rescaling of \(\mathbb{L}\) around \((0,\mathfrak{y})\), where the ball \(B_{(0,\mathfrak{y})}(r)\) needs to be dilated along with \(\mathbb{L}\). As the dilation scale goes to infinity, we can pass the inequality to the limit to deduce the tangent cone statement (19).

Moreover by the special Lagrangian graph equation and the orientation convention for \(\mathbb{L}\), the \(n\)-form \(d\tilde{y}_{1}\wedge\ldots d\tilde{y}_{n-1}\wedge d\tilde{x}_{n}\) is non-negative on \(\mathbb{L}\cap B_{(0,\mathfrak{y})}(r)\). This property also passes to the tangent cone, and implies the non-negativity of the Jacobian of the projection \(\mathcal{C}\to\mathbb{R}^{n}\). 

We now prove Prop. 3.5.

Proof.: Let \(\mathcal{C}\) denote a tangent cone at \((0,\mathfrak{y})\), then \(\mathcal{C}\) is a special Lagrangian current of phase \(\hat{\theta}\), hence locally area minimizing. 
By Almgren's codimension two regularity, \(\mathcal{C}\) can be decomposed into irreducible components, which are individually locally area minimizing currents without boundary, such that the smooth locus of each component is path connected. Clearly each component \(\mathcal{C}^{\prime}\) is also a minimal cone, with the cone apex at the origin inside \(T_{(0,\mathfrak{y})}(T^{*}T^{n})\). Since the support of \(\mathbb{L}\) is contained in the coamoeba, we know the \(x\)-projection of the tangent cone \(\mathcal{C}\) is contained in the union of the two quadrants \(\mathbb{R}^{n}_{\geq 0}\cup-\mathbb{R}^{n}_{\geq 0}\). Clearly the support of \(\mathcal{C}\) is invariant under the involution (18).

Notice that the cotangent fibre \(\mathbb{R}^{n}_{y}\) is a special Lagrangian with phase \(\frac{n}{2}\pi\), which is different from \(\hat{\theta}=\frac{n-1}{2}\pi\). Thus \(\mathcal{C}\) has no \(n\)-dimensional component contained inside \(\mathbb{R}^{n}_{y}\), so each component \(\mathcal{C}^{\prime}\) contains some point with \(x\)-coordinate in \(\mathbb{R}^{n}_{\geq 0}\setminus\{0\}\). Without loss the \(n\)-th coordinate is positive. Consider the link of \(\mathcal{C}^{\prime}\cap\{x_{i}\geq 0,\forall i\}\). For each \(i=1,\ldots n-1\), as the parameter \(a_{i}\) increases from zero to infinity, the family of hyperplanes \(\{a_{i}x_{n}=x_{i}\}\) must first touch this link at some parameter value \(a_{i}\). In a unit ball around the point of touching, \(\mathcal{C}^{\prime}\) locally lies inside the half space \(\{x_{i}-a_{i}x_{n}\geq 0\}\), and the touching point lies on the boundary of the half space. By the strong maximum principle (_cf._ Prop. 3.19), \(\mathcal{C}^{\prime}\) has nontrivial \(n\)-dimensional measure inside the hyperplane \(\{x_{i}=a_{i}x_{n}\}\), so by the path connectedness of its smooth locus, the whole component \(\mathcal{C}^{\prime}\) lies inside the intersection of the hyperplanes \(\{x_{i}=a_{i}x_{n}\}\), for \(i=1,\ldots n-1\). Thus the \(x\)-projection of \(\mathcal{C}^{\prime}\) is the line along the direction \(\sum_{1}^{n-1}a_{i}\partial_{x_{i}}+\partial_{x_{n}}\). Since \(\mathcal{C}^{\prime}\) is a Lagrangian cone, this implies \(\sum_{1}^{n-1}a_{i}y_{i}+y_{n}=\text{const}\).

In summary, any tangent cone component \(\mathcal{C}^{\prime}\) is supported on some \(n\)-plane of the form \[(x_{1},\ldots x_{n})\in\mathbb{R}\text{-span}(a_{1},\ldots a_{n}),\quad\sum_{1}^{n}a_{i}y_{i}=0,\quad(a_{i})\in\mathbb{R}^{n}_{\geq 0}\setminus\{0\}. \tag{20}\] By the constancy theorem, \(\mathcal{C}^{\prime}\) is some integer multiple of the \(n\)-plane, and \(\mathcal{C}\) is a sum of such components. Now by Lemma 3.6, for any choice of the parameter matrix \((a_{ij})\), the projection map of \(\mathcal{C}\) into the \(\big{(}\tilde{y}_{1},\ldots\tilde{y}_{n-1},\tilde{x}_{n}\big{)}\) coordinate \(n\)-plane has non-negative Jacobian, and the projection degree (counting multiplicity) is at most one. This forces \(\mathcal{C}\) to have only one component, with multiplicity one. 

We can be a little more precise on the local structure.

**Corollary 3.7**.: The tangent plane at any given point \((0,\mathfrak{y})\in\operatorname{supp}(\mathbb{L})\) is of the form \[(x_{1},\ldots x_{n})\in\mathbb{R}\text{-span}(a_{1},\ldots a_{n}),\quad\sum_{1}^{n}a_{i}(y_{i}-\mathfrak{y}_{i})=0,\quad(a_{i})\in\mathbb{R}_{\geq 0}^{n}\setminus\{0\}.\] Suppose \(a_{n}=1\). 
Take the \(n\times n\) matrix \[(a_{ij})=\begin{pmatrix}1&0&\cdots&-a_{1}\\ 0&1&\cdots&-a_{2}\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&1\end{pmatrix}\] with inverse matrix \((a^{ij})\), and write the rotated coordinates \[\tilde{x}_{i}=a_{ij}x_{j},\quad\tilde{y}_{i}=a^{ji}y_{j}.\] Then there are real analytic functions \(f_{i}\) defined on a small coordinate ball around the origin in \(\mathbb{R}^{n}\), with \(f_{i}(0)=0\), such that around \((0,\mathfrak{y})\) the special Lagrangian is locally given by \[\begin{cases}\tilde{x}_{i}=\tilde{x}_{n}f_{i}(\tilde{x}_{n},\tilde{y}_{1}-\tilde{\mathfrak{y}}_{1},\ldots\tilde{y}_{n-1}-\tilde{\mathfrak{y}}_{n-1}),\quad i=1,\ldots n-1.\\ \tilde{y}_{n}=\tilde{\mathfrak{y}}_{n}+f_{n}(\tilde{x}_{n},\tilde{y}_{1}-\tilde{\mathfrak{y}}_{1},\ldots\tilde{y}_{n-1}-\tilde{\mathfrak{y}}_{n-1}).\end{cases} \tag{21}\] In particular, the boundary \(\partial Du_{n}(0)\) of the convex set \(Du_{n}(0)\) is locally given by \[\tilde{y}_{n}=\tilde{\mathfrak{y}}_{n}+f_{n}(\tilde{x}_{n},\tilde{y}_{1}-\tilde{\mathfrak{y}}_{1},\ldots\tilde{y}_{n-1}-\tilde{\mathfrak{y}}_{n-1})\] and is therefore a _real analytic hypersurface_ in the cotangent fibre \(\mathbb{R}_{y}^{n}\). Proof.: The tangent space of \(\mathbb{L}\) can be read off from the tangent cone (20), after translating the origin back to \((0,\mathfrak{y})\). In the rotated coordinates, the tangent space is simply \[\tilde{x}_{1}=\ldots=\tilde{x}_{n-1}=0,\quad\tilde{y}_{n}=\tilde{\mathfrak{y}}_{n}.\] Since \(\mathbb{L}\) is a smooth special Lagrangian in the Euclidean space, it must be a real analytic submanifold, so locally around \((0,\mathfrak{y})\) it can be written as the graph of an analytic vector-valued function over the tangent space. Moreover, when \(\tilde{x}_{n}=x_{n}=0\), the point on the special Lagrangian lies over the vertex of \(\Delta_{n}\) at the origin, so \(\tilde{x}_{i}=0\) for all \(i=1,\ldots n-1\). Thus the Taylor expansion of the real analytic function \(\tilde{x}_{i}\) in the \(\tilde{x}_{n}\) variable has no constant term, so we obtain the graphical representation (21). From the tangent cone information, we can read off \(f_{i}(0)=0\). The claim on \(\partial Du_{n}(0)\) follows by recalling from Prop. 3.3 that the part of \(\mathbb{L}\) inside the cotangent fibre \(\mathbb{R}_{y}^{n}\) over the origin is a copy of \(\partial Du_{n}(0)\). ### Asymptotic geometry We now prove Theorem 1.3 by induction on \(n\). We already know the \(n=2\) case from the explicit solution, so we will assume that \(n\geq 3\) and the statement holds already for all lower dimensions. In particular, the \(C^{k,\alpha}\) topology makes sense on \(\mathbb{L}_{n-1}\). **Lemma 3.8**.: _There is some \(C(\delta),C(\delta)^{\prime}\) large enough depending on \(n,g,\delta\), such that the portion of the special Lagrangian_ \[\mathbb{L}_{n}\cap\{y_{n}\leq-C(\delta)^{\prime}\}\cap\{\text{dist}(x,\text{vertices})\gtrsim\delta\}\] _lies on a \(C^{k,\alpha}\)-small graph over \(\mathbb{L}_{n-1}\times\mathbb{R}_{y_{n}}\subset T^{*}T^{n-1}\times T^{*}S^{1}=T^{*}T^{n}\). The local \(C^{k,\alpha}\)-norm of the graph is bounded by \(C(\delta)e^{y_{n}/C(n)}\), where the exponential decay rate \(C(n)\) is independent of \(\delta\)._ Proof.: By Lemma 2.29, the \(x_{n}\) coordinate is exponentially small when \(y_{n}\) is very negative.
Recall from Remark 2.32 that for any given small \(\delta>0\), we used the partial Legendre transform \(u_{n,m}\) to assign smooth charts for \(\mathbb{L}_{n}\), except in the \(O(\delta)\)-neighbourhood around the vertices. Now \(\mathbb{L}_{n-1}\) is also assigned with very similar coordinate charts by the partial Legendre transform \(u_{n-1,m}\) for \(u_{n-1}\) involving fewer variables. We regard \(u_{n-1,m}\) as a function of \(n\) variables \(x_{1},\ldots,x_{m},y_{m+1},\ldots y_{n}\), with no actual dependence on the \(y_{n}\) coordinate. Geometrically, this corresponds to taking the product of \(\mathbb{L}_{n-1}\) with \(\mathbb{R}_{y_{n}}\). By construction \[\arctan D^{2}u_{n,m}=\arctan D^{2}u_{n-1,m}=\frac{(m-1)}{2}\pi.\] We comment that for \(m=n-1\), the function \(u_{n-1,m}\) is just \(u_{n-1}\), and the corresponding chart is meant to cover the interior region in \(\Delta_{n-1}\). Our task is to compare \(u_{n,m}\) with \(u_{n-1,m}\) on the corresponding charts. By Prop. 2.30 both functions have higher derivative estimates depending on \(\delta\) in the chart, and the main task is to prove the exponential decay estimate. Let \(v=u_{n,m}-u_{n-1,m}\). We can show \[|v|\leq C^{\prime}e^{y_{n}/C}\] by the same argument as in Lemma 2.29. We will then derive an elliptic equation on \(v\). Applying the fundamental theorem of calculus to the function \(\arctan\) on symmetric matrices, \[\arctan(M+N)-\arctan M=\int_{0}^{1}\operatorname{Tr}\big{(}((M+tN)^{2}+I)^{-1}N\big{)}dt,\] hence \[0=\arctan D^{2}u_{n,m}-\arctan D^{2}u_{n-1,m}=\int_{0}^{1}\operatorname{Tr}\big{(}((D^{2}u_{n-1,m}+tD^{2}v)^{2}+I)^{-1}D^{2}v\big{)}dt.\] Since \(D^{2}v\) and \(D^{2}u_{n-1,m}\) are already bounded, this is a uniformly elliptic equation, so the \(C^{0}\) exponential decay on \(v\) can be bootstrapped to all higher derivatives. This implies the geometric statement. We will need a quantitative version of the Allard regularity theorem [1]. **Proposition 3.9**.: (Allard regularity) There is a small universal constant \(\epsilon_{0}\ll 1\) depending only on \(n,N\) such that the following holds. Let \(X\) be an \(n\)-dimensional multiplicity one stationary integral varifold inside \(B(p,r)\subset\mathbb{R}^{N}\). Assume that \(p\) lies on the support of \(X\), and the volume \(\mathcal{H}^{n}(X\cap B(p,r))\leq\omega_{n}r^{n}(1+\epsilon_{0})\); then \(X\cap B(p,r/2)\) is a \(C^{1,\alpha}\) graph over the tangent plane through \(p\), with \(C^{1,\alpha}\) norm bounded by \(\frac{1}{100}\). It remains to have quantitative control of the geometry in the neighbourhood of the vertices. The same argument as in Prop. 3.1 gives the following: **Lemma 3.10**.: _Take any point \(p\in\mathbb{L}_{n}\), and let \(\delta\lesssim r\leq 1\). Then_ \[\text{Vol}(\mathbb{L}_{n}\cap\{\text{dist}(x,\text{vertex})\lesssim\delta\}\cap B(p,r))\leq Cr^{n-1}\delta.\] We now prove Part 1 of Theorem 1.3 on regularity. Proof.: By the \(C^{k,\alpha}\) regularity of \(\mathbb{L}_{n-1}\) in the inductive hypothesis, we have some radius parameter \(r_{0}<1\) depending on \(n,g\), such that the volume ratio for \(\mathbb{L}_{n-1}\times\mathbb{R}\) in any ball of radius \(\leq r_{0}\) around any point on \(\mathbb{L}_{n-1}\times\mathbb{R}\) is bounded by \(1+\frac{\epsilon_{0}}{3}\), where \(\epsilon_{0}\) is the Allard regularity constant. We choose \(\delta\ll\epsilon_{0}r_{0}\), and consider the points \(p=(x^{\prime},y^{\prime})\in\mathbb{L}_{n}\).
Using Lemma 3.8, for \(y^{\prime}\leq-C(\delta)^{\prime}\) very negative, we have \[\text{Vol}(\mathbb{L}_{n}\cap B(p,r_{0})\cap\{\text{dist}(x,\text{vertex})\gtrsim\delta\})\leq\omega_{n}r_{0}^{n}(1+\frac{\epsilon_{0}}{3}+C(\delta)e^{y^{\prime}/C}).\] By taking \(y^{\prime}\leq-C(n,g,\epsilon_{0},\delta)\) sufficiently negative, we can bound the RHS by \(\omega_{n}r_{0}^{n}(1+\frac{\epsilon_{0}}{2})\). On the other hand, by Lemma 3.10 and the choice of \(\delta\), \[\text{Vol}(\mathbb{L}_{n}\cap\{\text{dist}(x,\text{vertex})\lesssim\delta\}\cap B(p,r_{0}))\leq Cr_{0}^{n-1}\delta\ll r_{0}^{n}\epsilon_{0}.\] Summing over the two contributions, \[\text{Vol}(\mathbb{L}_{n}\cap B(p,r_{0}))\leq\omega_{n}r_{0}^{n}(1+\epsilon_{0}),\] so Allard regularity gives that \(\mathbb{L}_{n}\cap B(p,r_{0}/2)\) is a \(C^{1,\alpha}\) graph over the tangent plane through \(p=(x^{\prime},y^{\prime})\), with \(C^{1,\alpha}\) norm bounded by \(\frac{1}{100}\), which can be bootstrapped to \(C^{k,\alpha}\)-estimates by applying Schauder estimates to the minimal surface system. We have successively chosen the constants \(\epsilon_{0},r_{0},\delta,C(n,g,\epsilon_{0},\delta)\), so all these constants only depend on \(n,g\) in the end. The upshot is that we have proved the quantitative \(C^{k,\alpha}\) regularity on balls of radius comparable to \(r_{0}\), in the region on \(\mathbb{L}_{n}\) where \(y_{n}\) is sufficiently negative depending only on \(n,g\). The same argument works when one of \(y_{1},\ldots y_{n},-\sum_{1}^{n}y_{i}\) is very negative, corresponding to the other codimension one faces of \(\Delta_{n}\). These regions cover all but a compact set on \(\mathbb{L}_{n}\). But Prop. 3.4 gives the smoothness of \(\mathbb{L}_{n}\), which provides the \(C^{k,\alpha}\) boundedness on any compact region. **Corollary 3.11**.: There is a large enough \(C(n)\) depending on \(n,g\), such that the portion of the special Lagrangian \[\mathbb{L}_{n}\cap\{y_{n}\leq-C(n)\}\] lies on the graph of a normal vector field \(v\) over \(\mathbb{L}_{n-1}\times\mathbb{R}_{y_{n}}\subset T^{*}T^{n}\), with \(C^{k,\alpha}\) norm bounded by \(C(n,k,\alpha)\). Proof.: We use the notations in the above proof. By Lemma 3.8 and the above proof, \(\mathbb{L}_{n}\cap B(p,r_{0}/2)\) is a \(C^{1,\alpha}\)-small graph over \(\mathbb{L}_{n-1}\times\mathbb{R}\) away from the small subset \(\{\text{dist}(x,\text{vertex})\lesssim\delta\}\), as well as a \(C^{1,\alpha}\)-small graph over the tangent plane at \(p\). Thus the tangent plane is itself a \(C^{1,\alpha}\)-small graph over \(\mathbb{L}_{n-1}\times\mathbb{R}\cap B(p,r_{0}/3)\), and so must be \(\mathbb{L}_{n}\cap B(p,r_{0}/3)\). We now prove Part 2 of Theorem 1.3 on the inductive asymptote. Proof.: We are left to prove the exponential decay in the inductive asymptote statement of Theorem 1.3, namely that for \(y_{n}\) sufficiently negative, the normal vector field \(v\) in Cor. 3.11 has local \(C^{k,\alpha}\) norm bounded by \(C^{\prime}e^{y_{n}/C(n)}\) for constants depending only on \(n,g\).
We define for sufficiently large \(R\) \[f(R)=\sup_{p=(x,y),y_{n}\leq-R}\|v\|_{C^{k,\alpha}(B(p,r_{0})\cap\mathbb{L}_{n-1}\times\mathbb{R})}\,.\] Now \(v\) inherits an elliptic equation from the minimal surface system, and we already know the boundedness of the \(C^{k,\alpha}\)-norms, so for \(p\in\mathbb{L}_{n-1}\times\mathbb{R}_{y_{n}}\) with \(y_{n}\leq-R-r_{0}\), \[\|v\|_{C^{k,\alpha}(B(p,r_{0})\cap\mathbb{L}_{n-1}\times\mathbb{R})}\leq C\fint_{B(p,r_{0})\cap\mathbb{L}_{n-1}\times\mathbb{R}}|v|.\] The \(L^{1}\) average can be split into the \(\{\operatorname{dist}(x,\operatorname{vertex})\gtrsim\delta\}\) region contribution, which is \(O(C(\delta)e^{y_{n}/C(n)})\) by Lemma 3.8, and the \(\{\operatorname{dist}(x,\operatorname{vertex})\lesssim\delta\}\) region contribution, which is \(O(\delta r_{0}^{-1}f(R))\), using the smallness of the volume estimate from Lemma 3.10. In summary, there are constants depending only on \(n,g\) such that \[f(R+r_{0})\leq C^{\prime}e^{-R/C(n)}+(C^{\prime}\delta r_{0}^{-1})f(R).\] Choosing \(\delta\ll r_{0}\), the coefficient of \(f(R)\) is strictly smaller than one. Iterating this estimate gives exponential decay on \(f(R)\), namely the desired exponential decay estimate on the local \(C^{k,\alpha}\)-norms. ### Topology of the special Lagrangian We recall the notion of _real blow up_ from the work of Matessi [6]. We start from the coamoeba \(C_{std}\) (_cf._ (2)). At each vertex, the tangent cone of \(C_{std}\) is a copy of \(\mathbb{R}_{>0}^{n}\cup-\mathbb{R}_{>0}^{n}\), and the set of real lines contained in this tangent cone is an open subset of the real projective space \(\mathbb{RP}^{n-1}\). The real blow up is then \[\tilde{C}_{std}=\operatorname{Int}(\Delta_{n}\cup-\Delta_{n})\cup\bigcup_{vertices}\{l\in\mathbb{RP}^{n-1}:\,l\text{ is contained in the tangent cone}\}.\] This can be given a natural real analytic structure. For instance, near the origin in \(\Delta_{n}\), the ratios \(\frac{x_{i}}{x_{j}}\) extend to local analytic functions on the real blow up. Our goal is to show **Proposition 3.12**.: Let \(n\geq 2\). The natural projection \(\mathbb{L}\to C_{std}\) lifts to a real analytic map \(\mathbb{L}\to\tilde{C}_{std}\), which is a homeomorphism. **Lemma 3.13**.: _At any point on the boundary of the convex set \(Du_{n}(0)\subset\mathbb{R}_{y}^{n}\), the outward pointing normal vector lies in \(\mathbb{R}_{>0}^{n}\subset\mathbb{R}_{x}^{n}=(\mathbb{R}_{y}^{n})^{*}\)._ Proof.: By definition \(Du_{n}(0)\) is the set of subgradients at the vertex. Since \(\Delta_{n}\) is contained in \(\mathbb{R}_{\geq 0}^{n}\), translation in the \(-\mathbb{R}\partial_{y_{i}}\) direction remains inside \(Du_{n}(0)\). Let \(\nu\) denote the outward pointing normal at any given point \(\mathfrak{y}\in\partial Du_{n}(0)\); then \(\langle\nu,-\partial_{y_{i}}\rangle\leq 0\) for any \(i=1,\ldots n\), namely \(\nu\in\mathbb{R}_{\geq 0}^{n}\). Suppose for contradiction that one of the coordinates of \(\nu\) is zero, say \(\nu_{n}=0\). By the convexity of the set \(Du_{n}(0)\), \[\langle y-\mathfrak{y},\nu\rangle\leq 0,\quad\forall y\in Du_{n}(0).\] Notice that for any \(a\geq 0\), we have \[y=\mathfrak{y}-a\partial_{y_{n}}\in Du_{n}(0),\quad\langle y-\mathfrak{y},\nu\rangle=0,\] so the ray \(\mathfrak{y}-\mathbb{R}_{\geq 0}\partial_{y_{n}}\) is contained inside the boundary \(\partial Du_{n}(0)\).
But \(\partial Du_{n}(0)\) is a real analytic hypersurface, so by analytic continuation, the entire real line \(\mathfrak{y}+\mathbb{R}\partial_{y_{n}}\) is contained in \(\partial Du_{n}(0)\). This contradicts Lemma 2.23, which says that \(Du_{n}(0)\subset-(\mathbb{R}_{\geq 0}^{n})\). We conclude that all coordinates of \(\nu\) are strictly positive. **Lemma 3.14**.: _(Gauss map) Let \(n\geq 2\). The natural projection \(\mathbb{L}\to C_{std}\) lifts to a real analytic map \(\mathbb{L}\to\tilde{C}_{std}\). For any \(\mathfrak{y}\in\partial Du_{n}(0)\), this lifted map sends \((0,\mathfrak{y})\in\mathbb{L}\) to the point in \(\mathbb{RP}^{n-1}\) representing the line along the normal direction to \(\partial Du_{n}(0)\) at \(\mathfrak{y}\)._ Proof.: In Cor. 3.7, the parameter \((a_{i})\) for the tangent space is up to scaling the same as the normal vector to \(\partial Du_{n}(0)\). In particular \(a_{i}>0\) for any \(i=1,\ldots n\) by Lemma 3.13, and we can set \(a_{n}=1\). In the notation of Cor. 3.7, \[\tilde{x}_{i}=x_{i}-a_{i}x_{n},\quad i=1,\ldots n-1.\] By the graphical representation (21), the ratios \(\frac{x_{i}}{x_{n}}\) extend to real analytic functions over the locus \(x_{n}=0\), namely the part of \(\mathbb{L}\) lying over the vertex at the origin. At the point \((0,\mathfrak{y})\), the tangent space information shows that the value of \(\frac{x_{i}}{x_{n}}\) is \(a_{i}\). Since \(a_{i}>0\), the extension lands inside the correct subset of \(\mathbb{RP}^{n-1}\), so we obtain the desired lifting map \(\mathbb{L}\to\tilde{C}_{std}\). **Lemma 3.15**.: _The lifted map is injective._ Proof.: It suffices to prove injectivity above the vertex \(0\in\Delta_{n}\). Suppose the contrary; then two distinct points \(\mathfrak{y},\mathfrak{y}^{\prime}\in\partial Du_{n}(0)\) share the same normal vector. Using the convexity of \(Du_{n}(0)\), the line segment joining these two points is also contained in \(\partial Du_{n}(0)\). Then by real analyticity, the entire line through the two points is contained in \(\partial Du_{n}(0)\), contradicting again Lemma 2.23. **Lemma 3.16**.: _The lifted map is surjective._ Proof.: It suffices to show surjectivity over \(0\in\Delta_{n}\). Let \(\nu\in\mathbb{R}_{>0}^{n}\) represent a point in the open subset of \(\mathbb{RP}^{n-1}\) inside \(\tilde{C}_{std}\). Then \(\langle\nu,\cdot\rangle\) defines a linear function on \(Du_{n}(0)\). By Lemma 2.23, the closed convex set \(Du_{n}(0)\) is contained inside \(-(\mathbb{R}_{\geq 0}^{n})\), so the linear function is bounded above and must achieve its maximum at some boundary point \(\mathfrak{y}\). Then \(\nu\) is the normal vector at \(\mathfrak{y}\), hence \((0,\mathfrak{y})\in\mathbb{L}\) maps to the corresponding point in \(\tilde{C}_{std}\). **Lemma 3.17**.: _The lifted map is proper._ Proof.: By the interior gradient estimate for \(u_{n}\) on \(\operatorname{Int}(\Delta_{n})\), it suffices to focus on the vertex region, and prove that the preimage of the bounded sets in \(\tilde{C}_{std}\), \[\Lambda^{-1}\leq\frac{x_{i}}{x_{j}}\leq\Lambda,\quad|x|\leq\delta\ll\Lambda^{-1}\ll 1,\quad\forall i,j, \tag{22}\] lies in a bounded region in \(\mathbb{L}\). By Cor.
3.7 and Lemma 3.13, at any point \((0,\mathfrak{y})\in\mathbb{L}\), the tangent space is of the form \[(x_{1},\ldots x_{n})\in\mathbb{R}\text{-span}(a_{1},\ldots a_{n}),\quad\sum_{1}^{n}a_{i}(y_{i}-\mathfrak{y}_{i})=0,\quad(a_{i})\in\mathbb{R}_{>0}^{n},\quad\max_{i}a_{i}=1.\] We will focus on the case where \(a_{n}=\max_{i}a_{i}=1\), and use the chart (21) from Cor. 3.7. (Otherwise we use some variant of the chart associated to \(\max_{i}a_{i}\).) By the \(C^{k,\alpha}\)-regularity of \(\mathbb{L}\) (_cf._ Thm. 1.3), the real analytic functions \(f_{i}\) in (21) have uniform \(C^{1}\) bounds on each coordinate chart, so the functions \(\frac{x_{i}}{x_{n}}\) on the coordinate charts have a _uniform Lipschitz constant_ independent of the position of \(\mathfrak{y}\). From Thm. 1.3, we obtain that any point \((x,y)\in\mathbb{L}\) in the preimage of \(\{|x|\leq\delta\}\) must lie within \(C|x|=O(\delta)\) distance to some \((0,\mathfrak{y})\in\mathbb{L}\), so in particular lies on one of these charts, and without loss of generality \(a_{n}=\max_{i}a_{i}=1\). Suppose that \((x,y)\) lies in the preimage of (22); the uniform Lipschitz bound then gives \(|\frac{x_{i}}{x_{n}}-\frac{a_{i}}{a_{n}}|\leq C\delta\ll\Lambda^{-1}\) for \(i=1,\ldots n\), so \[(2\Lambda)^{-1}\leq a_{i}\leq 1,\quad i=1,\ldots,n.\] Notice \((a_{1},\ldots a_{n})\) is a normal vector to \(Du_{n}(0)\) at the boundary point \(\mathfrak{y}\), so \[\sum a_{i}(\mathfrak{y}_{i}-y_{i}^{\prime})\geq 0,\quad\forall y^{\prime}\in Du_{n}(0).\] Fixing a point \(y^{\prime}\in Du_{n}(0)\), and using \(\mathfrak{y}_{i}\leq 0\) from Lemma 2.23, we get \[(2\Lambda)^{-1}\sum\mathfrak{y}_{i}\geq\sum a_{i}\mathfrak{y}_{i}\geq\sum a_{i}y_{i}^{\prime}\geq\sum y_{i}^{\prime}\geq-C,\] whence \(\sum\mathfrak{y}_{i}\geq-C\Lambda\) and \(\mathfrak{y}\) is bounded, and so must be \((x,y)\in\mathbb{L}\). Thus the lifted map \(\mathbb{L}\to\tilde{C}_{std}\) is a smooth and proper map which is also a bijection, and therefore a homeomorphism. We conclude Prop. 3.12. **Remark 3.18**.: If we know that \(\partial Du_{n}(0)\) is a strictly convex hypersurface, _i.e._ the principal curvatures are _strictly positive_ everywhere, then the lifted map \(\mathbb{L}\to\tilde{C}_{std}\) is a diffeomorphism. ### Appendix: Strong maximum principle Lemma 3.19 is a version of the strong maximum principle for possibly singular minimal surfaces in any codimension, which must be well known, but we supply a proof for convenience. **Lemma 3.19**.: _(Strong maximum principle) Let \(n\geq 2\). Let \(X\) be an \(n\)-dimensional stationary integral varifold inside \(B_{1}\subset\mathbb{R}^{N}\), whose support contains the origin and lies inside the half space \(x_{1}\geq 0\). Then_ \[\text{Mass}\big{(}B_{1}\cap\{x_{1}=0\}\big{)}\geq\epsilon_{0}(n),\] _for some universal constant \(\epsilon_{0}\) depending only on \(n\)._ Proof.: Let \(\chi_{k}(|x|)\) be a sequence of radial cutoff functions on \(\mathbb{R}^{N}\) such that \[\begin{cases}0\leq\chi_{k}\leq 2^{-k},\quad|D\chi_{k}|\leq C,\quad|D^{2}\chi_{k}|\leq C2^{k},\\ \chi_{k}=0,\quad|x|\geq\frac{1}{2^{k}},\\ \chi_{k}=2^{-k},\quad|x|\leq\frac{1}{2^{k+1}}.\end{cases}\] The gradient of \(\chi_{k}\) is supported on the annulus region \(\{2^{-k-1}\leq|x|\leq 2^{-k}\}\). Let \(\epsilon>0\) be a small parameter, and consider the function \[f=-x_{1}+\epsilon\chi_{k}(x),\quad f^{+}=\max(f,0).\] Since \(x_{1}\geq 0\) on the support of \(X\), we know \(f\leq 0\) near the boundary of \(B_{1}\).
By choosing \(\epsilon\) generically, we can ensure that \(\{f=0\}\cap\operatorname{supp}(X)\) has zero \(\mathcal{H}^{n}\)-measure. By the stationarity of \(X\) and a standard approximation argument by \(C^{1}\) test functions, \[0=\int_{X}\operatorname{div}_{T_{x}X}(f^{+}\nabla f)d\left\|X\right\|=\int_{f>0}|\nabla f|^{2}d\left\|X\right\|+\int_{f>0}f\operatorname{Tr}_{T_{x}X}\text{Hess}(f)d\left\|X\right\|,\] so \[\int_{f>0}|\nabla f|^{2}d\left\|X\right\|\leq C2^{k}\epsilon\int_{\{f>0\}}f\,d\left\|X\right\|.\] Thus by the Hölder inequality, \[\big{(}\int_{f>0}|\nabla f|d\left\|X\right\|\big{)}^{2}\] \[\leq (\int_{f>0}|\nabla f|^{2}d\left\|X\right\|)\text{Mass}(\{f>0\})\] \[\leq C2^{k}\epsilon(\int_{\{f>0\}}f\,d\left\|X\right\|)\text{Mass}(\{f>0\})\] \[\leq C2^{k}\epsilon\text{Mass}(\{f>0\})^{1+\frac{1}{n}}(\int_{X}|f^{+}|^{\frac{n}{n-1}}d\left\|X\right\|)^{\frac{n-1}{n}}.\] But by the Michael-Simon-Sobolev inequality [8] for stationary varifolds, \[\big{(}\int_{X}|f^{+}|^{\frac{n}{n-1}}d\left\|X\right\|\big{)}^{\frac{n-1}{n}}\leq C\int_{X}|\nabla f^{+}|d\left\|X\right\|.\] Hence \[\big{(}\int_{X}|f^{+}|^{\frac{n}{n-1}}d\left\|X\right\|\big{)}^{\frac{n-1}{n}}\leq C2^{k}\epsilon\text{Mass}(\{f>0\})^{1+\frac{1}{n}}.\] From the support information on \(f^{+}\), we deduce \[\epsilon 2^{-k-1}\text{Mass}(\{x_{1}\leq\epsilon 2^{-k-1}\}\cap B_{2^{-k-1}})^{\frac{n-1}{n}}\leq\big{(}\int_{X}|f^{+}|^{\frac{n}{n-1}}d\left\|X\right\|\big{)}^{\frac{n-1}{n}}\] \[\leq C2^{k}\epsilon\text{Mass}(\{f>0\})^{1+\frac{1}{n}}\leq C2^{k}\epsilon\text{Mass}(\{x_{1}\leq\epsilon 2^{-k}\}\cap B_{2^{-k}})^{1+\frac{1}{n}}.\] Let \(a_{k}=\operatorname{Mass}(\{x_{1}\leq\epsilon 2^{-k}\}\cap B_{2^{-k}})\). We have proven \[a_{k+1}\leq C2^{2kn/(n-1)}a_{k}^{1+\frac{2}{n-1}}.\] Take \(p=1+\frac{1}{n-1}\), and suppose \(a_{0}<\epsilon_{0}(n)\) is sufficiently small depending on \(n\). Then we can inductively deduce the double exponential decay estimate \(a_{k}\leq a_{0}^{p^{k}}\). On the other hand, by assumption the origin lies on the support of \(X\), so by the monotonicity formula for stationary varifolds, \[a_{k}\geq\operatorname{Mass}(B(0,\epsilon 2^{-k}))\geq\omega_{n}(\epsilon 2^{-k})^{n}.\] This is incompatible with the double exponential decay for \(k\to+\infty\). This contradiction implies that \(a_{0}\geq\epsilon_{0}(n)\). Since this estimate is independent of the small \(\epsilon\), we can take the \(\epsilon\to 0\) limit to deduce \[\operatorname{Mass}(\{x_{1}\leq 0\}\cap B_{1})\geq\epsilon_{0}(n)\] as required.
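For the reader's convenience, the inductive deduction of \(a_{k}\leq a_{0}^{p^{k}}\) used above can be spelled out as follows (a sketch with unoptimized constants; the threshold \(\epsilon_{0}(n)\) is assumed small enough for the final display). Since \(1+\frac{2}{n-1}=p+\frac{1}{n-1}\), the recursion and the inductive hypothesis give \[a_{k+1}\leq C2^{\frac{2kn}{n-1}}a_{k}^{p+\frac{1}{n-1}}\leq\Big{(}C2^{\frac{2kn}{n-1}}a_{0}^{\frac{p^{k}}{n-1}}\Big{)}a_{0}^{p^{k+1}},\] and since \(p^{k}\geq 1+\frac{k}{n-1}\) by Bernoulli's inequality, the bracketed factor is bounded by \[Ca_{0}^{\frac{1}{n-1}}\Big{(}2^{\frac{2n}{n-1}}a_{0}^{\frac{1}{(n-1)^{2}}}\Big{)}^{k}\leq 1\] once \(a_{0}\leq\epsilon_{0}(n)\) is small enough, which closes the induction.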
2303.03319
Quantum Algorithm for Path-Edge Sampling
We present a quantum algorithm for sampling an edge on a path between two nodes s and t in an undirected graph given as an adjacency matrix, and show that this can be done in query complexity that is asymptotically the same, up to log factors, as the query complexity of detecting a path between s and t. We use this path sampling algorithm as a subroutine for st-path finding and st-cut-set finding algorithms in some specific cases. Our main technical contribution is an algorithm for generating a quantum state that is proportional to the positive witness vector of a span program.
Stacey Jeffery, Shelby Kimmel, Alvaro Piedrafita
2023-03-06T17:45:12Z
http://arxiv.org/abs/2303.03319v1
# Quantum Algorithm for Path-Edge Sampling ###### Abstract We present a quantum algorithm for sampling an edge on a path between two nodes \(s\) and \(t\) in an undirected graph given as an adjacency matrix, and show that this can be done in query complexity that is asymptotically the same, up to log factors, as the query complexity of detecting a path between \(s\) and \(t\). We use this path sampling algorithm as a subroutine for \(st\)-path finding and \(st\)-cut-set finding algorithms in some specific cases. Our main technical contribution is an algorithm for generating a quantum state that is proportional to the positive witness vector of a span program. ## 1 Introduction Finding and detecting paths between two vertices in a graph are important related problems, both in and of themselves, and as subroutines in other applications, but there is still much to understand in this area. While classically these problems seem to be equivalent, an intriguing question is whether the same holds for quantum algorithms: there are cases where a quantum algorithm can _detect_ a path between \(s\) and \(t\) in significantly less time than any known quantum algorithm takes to _find_ such a path. In particular, path finding on a glued trees graph is one of Aaronson's top ten open problems in query complexity [13, 1], as the best known quantum algorithms that find an \(st\)-path in such graphs have exponentially worse running time than the best quantum algorithms for detecting one, and understanding how these problems are related could improve our understanding of why quantum computers achieve dramatic speedups for certain problems. As an example of more immediate practical interest: path finding in supersingular isogeny graphs is one approach to attacking cryptosystems based on supersingular isogenies [10, 1], but currently the best known attack of this form still takes exponential time [14] (see also [15]). In this paper, we consider the quantum query complexity of a somewhat intermediate problem: finding an edge on an \(st\)-path in an undirected graph.1 In the classical case, it seems hard to imagine how one could find an edge on an \(st\)-path without first finding an \(st\)-path, but we show that in the quantum case, one can sample an \(st\)-path edge with similar resources to what is needed to detect the existence of an \(st\)-path. In some cases, this can be done with significantly fewer queries than the best previously known path-finding algorithms. We show this ability to sample an edge on a path has some useful applications, including to sabotaging networks (finding \(st\)-cut sets) and to finding paths in certain graphs faster than existing path finding algorithms. Footnote 1: In this paper, we use _path_ to refer to a self-avoiding path, meaning a path with no repeated vertices. Previously, Durr, Heiligman, Hoyer and Mhalla [1] described an algorithm for connectivity in the adjacency matrix model that uses \(O(n^{3/2})\) queries for an \(n\)-vertex graph. Their algorithm works by keeping track of known connected components, and then uses a quantum search to look for any edge that connects any two components previously not known to be connected. While the authors use this algorithm to decide connectivity, we note that after \(O(n^{3/2})\) queries, the algorithm will produce (with high probability) a list of the connected components of the graph, as well as a set of edges for each component that is a witness to that component's connectivity (a spanning tree). 
This data can then be used to find a path from \(s\) to \(t\), if \(s\) and \(t\) are in the same component. This algorithm uses \(O(\log n)\) qubits and \(O(n\log n)\) classical bits, and applies to both directed and undirected graphs. However, the algorithm of Durr et al. does not take advantage of any structure in the graph. This is in contrast to an undirected path _detection_ quantum algorithm of Belovs and Reichardt [1], further analyzed and refined in [1, 23], which, for example, can detect a path between vertices \(s\) and \(t\) with \(\widetilde{O}(\sqrt{L}n)\) adjacency matrix queries when there is an \(st\)-path of length \(L\), and even better in the case of multiple short paths, or in the case of certain promises when there is no path. In fact, there are even sufficiently structured promises on the input for which this algorithm performs superpolynomially better than the best possible classical algorithm [23]. While this path detection algorithm runs faster than \(O(n^{3/2})\) in many cases, the algorithm does not output any information about the \(st\)-path - it simply determines whether a path exists. Our contribution is an algorithm that reproduces the query complexity of the Belovs-Reichardt undirected path detection algorithm, even for structured inputs - for example, our algorithm uses \(\widetilde{O}(\sqrt{L}n)\) queries when there is a path of length \(L\) - but now returns _some_ information about edges on an \(st\)-path: namely, a path edge.2 Specifically, our algorithm outputs an \(st\)-path edge sampled with probability that depends on the optimal \(st\)-flow between \(s\) and \(t\). This is how electrons would flow in an electrical network if edges in the graph were replaced by wires with resistors and a battery were connected between \(s\) and \(t\). For intuition, an edge is more likely to be sampled if it is on _more_ or _shorter_ paths. Thus, in the case of a single path between \(s\) and \(t\), our algorithm samples each edge in the path with equal probability (up to some error in total variation distance). When there are disjoint paths of different lengths, our algorithm is more likely to sample an edge on a short path than a long path - the probability of sampling from a particular path of length \(\ell\) is proportional to \(1/\ell\). (This means, unfortunately, that if there are many long paths, we might still be more likely to sample an edge on some long path than an edge on a short path). We prove that finding an \(st\)-path edge classically requires \(\Omega(n^{2})\) queries in the worst case, even if promised that there is a path of length \(L\), as long as \(L\geq 3\). Footnote 2: As we hinted at with our statement of advantages for the Belovs-Reichardt algorithm in the case of shorter and/or multiple paths, the Belovs-Reichardt algorithm for \(st\)-path detection actually has a complexity that depends on the structure of the graph in a more subtle way, replacing \(L\) with an upper bound on the _effective resistance_ between \(s\) and \(t\), which is _at most_ the length of the shortest path between \(s\) and \(t\). This more subtle analysis also applies to our edge finding algorithm. With the ability to quickly find edges on short paths, we can create an improved algorithm for _finding_\(st\)-paths in undirected graphs with a unique, short \(st\)-path. 
Given an adjacency matrix for an \(n\)-vertex graph, if there is a unique \(st\)-path, whose (possibly unknown) length is \(L\), we can find all of the edges in the path in \(\widetilde{O}(L^{1+o(1)}n)\) expected queries. When \(L=o(\sqrt{n})\), this is an improvement over the Durr et al. algorithm. In the general case that there is more than one \(st\)-path, we prove that we can find all edges in a single path in \(\widetilde{O}(L^{3/2}n)\) queries when \(L\) is the (possibly unknown) length of the _longest_ path (although our approach in this case does not use the edge sampling algorithm as a subroutine). When \(L=o(n^{1/3})\), this is an improvement over the Durr et al. algorithm. We additionally use our sampling algorithm to find \(st\)-cut sets, in the case that \(s\) and \(t\) are each part of a highly connected component, and there are only a few edges connecting those components. Because these few connecting edges are bottlenecks in the flow, there will be a lot of flow over those connecting edges, and so a high probability of sampling them, and hence finding an \(st\)-cut set. We describe a particular family of \(n\)-vertex graphs where we can find such a cut set in \(\widetilde{O}(n)\) queries, where any classical algorithm would require \(\Omega(n^{2})\) queries. Our edge sampling algorithm is a special case of a new span-program-based algorithm (Section 3) for generating quantum states called _span program witness states_ (or simply _witness states_). One of the key elements of the analysis of span program algorithms for deciding Boolean functions [10] is the positive witness (see Definition 2), which is a vector that witnesses that the function evaluates a particular input to \(1\). While in the usual span program algorithm, the output on input \(x\) is \(f(x)\), in our case, we output a quantum state proportional to the positive witness for input \(x\). In the case of the Belovs-Reichardt span program for \(st\)-connectivity [1], a positive witness is a linear combination of edges that are on paths between \(s\) and \(t\), where the amplitudes depend on the optimal \(st\)-flow (see Definition 4). Generating and then measuring such a state allows us to sample \(st\)-path edges. Our results more generally hold for the case where the input \(x\) defines a subgraph \(G(x)\) of some arbitrary graph \(G\), that is not necessarily a complete graph. Although we do not attempt to analyze time complexity in this work, we suspect that our query algorithms on graphs are also time efficient when there is an efficient way to perform a quantum walk on the underlying graph \(G\), as in [11]. For example, when \(G\) is the \(n\)-vertex complete graph (i.e. the oracle allows you to query elements of the full adjacency matrix for an \(n\)-vertex graph, as we have been assuming throughout this introduction), there is an efficient way to do this walk, and so in this case the time complexity of our algorithms is likely the same as the query complexity, up to log factors. ### Future Directions A natural future direction is to try to use our edge finding technique for path finding in more general settings than the ones we consider. One surprising aspect of our algorithm is that it does not necessarily find edges in the order in which they appear in the path, and instead often finds edges in the middle of a path with high probability.
The form of our algorithm thus seems to circumvent a recent lower bound on path-finding in glued trees graphs that applies to algorithms that always maintain a path from the starting node to any vertex in the algorithm's state [12]. However, one reason to be pessimistic for this particular application is that in the glued trees graph, _all_ edges connected to the starting vertex are in some \(st\)-path. Still, we are hopeful that for some graphs, finding an edge in the middle of some \(st\)-path opens up the possibility of new divide-and-conquer approaches for path finding. We are only able to take advantage of the fact that we sample edges according to the optimal \(st\)-flow for very specific graphs, like those with a single path, or with bottleneck flows, but we hope that this edge sampling distribution will prove useful in additional applications. In recent independent work, Apers and Piddock [1] develop a similar edge sampling algorithm in the adjacency list model, which they use to analyze connections between electric flows and quantum walks, and they prove that walks that proceed via their edge sampling algorithm need only logarithmically many rounds before they have a high probability of reaching a target vertex, on trees. We believe that such edge sampling methods will likely find further applications. We have only applied our span program witness state generation algorithm to the span program for path detection. Span program algorithms exist for a wide range of graph problems, from bipartiteness [13] and cycle detection [13, 14], to triangle [15] and other subgraph detection [16], to other combinatorial search problems [1, 2]. Perhaps the span program witness states for these problems would be useful for certain applications. Beyond span program algorithms, dual adversary algorithms (which are equivalent to span programs for decision problems, but generalize to state conversion problems [10]) and multidimensional quantum walks [11, 12] all have a similar notion of witnesses in their design and analysis. Similar techniques might yield witness generation algorithms for these more general algorithm design paradigms. We suspect our path finding algorithms are not optimal, as for graphs with longest paths of length \(\Omega(n^{1/3})\), our algorithms do not outperform Durr et al.'s algorithm. We wonder whether it is possible to find paths using \(o(n^{3/2})\) queries whenever the longest path has length \(o(n)\), or to prove that this is not possible, perhaps by expanding on techniques for lower bounding path-finding on welded trees [12]. Finally, all of our algorithms apply only to undirected graphs, while the algorithm of [1] applies equally well to directed or undirected graphs. While there are span program algorithms for problems on directed graphs (see e.g. [1]), they do not exhibit the same speedups with short or many paths that the undirected span program algorithms possess. It would be interesting to better understand whether there are ways to obtain similar improvements in query complexity for directed graphs. Organization. In Section 3 we present our main technical result: an algorithm for generating a state proportional to a span program witness for \(x\). In Section 4, we show how to apply this to finding a path edge (Section 4.1), and give an example of a particular family of graphs in which the classical complexity of finding a path edge is quadratically worse than our quantum algorithm (Theorem 20).
In Section 4.2, we show how our edge finding algorithm can be applied to efficiently find an \(st\)-cut set in a particular family of graphs, and in Section 4.3 we show how it can be applied to find an \(st\)-path in \(\widetilde{O}(nL^{1+o(1)})\) queries when there is a _unique_ \(st\)-path of length \(L\) (Theorem 25); and also give an algorithm for finding an \(st\)-path in general graphs in \(\widetilde{O}(nL^{3/2})\) queries when \(L\) is the length of the _longest_ \(st\)-path (Theorem 26). ## 2 Preliminaries We first introduce some basic notation. We let \(\|\cdot\|\) denote the \(l_{2}\) norm, \([m]\coloneqq\{1,2,3,\ldots,m\}\), and let \(\mathcal{L}(H,V)\) denote the set of linear operators from the vector space \(H\) to the vector space \(V\). ### Span Programs Span programs are a linear algebraic model of computation, introduced in [11], that have proven extremely useful for analyzing query [14, 15], space [16], and time complexity [1, 10, 15] in quantum algorithms. We follow Ref. [10] closely in our definitions. **Definition 1** (Span Program).: _For a finite set \(R\), a span program on \(R^{m}\) is a tuple \(\mathcal{P}=(H,\mathcal{V},|\tau\rangle,A)\) where_ 1. \(H\) _is a direct sum of finite-dimensional inner product spaces:_ \(H=H_{1}\oplus H_{2}\oplus\cdots\oplus H_{m}\oplus H_{\text{true}}\oplus H_{\text{false}}\)_, and for_ \(j\in[m]\) _and_ \(a\in R\)_, we have_ \(H_{j,a}\subseteq H_{j}\)_, such that_ \(\sum_{a\in R}H_{j,a}=H_{j}\)_;_ 2. \(\mathcal{V}\) _is a vector space;_ 3. \(|\tau\rangle\in\mathcal{V}\) _is a target vector; and_ 4. \(A\in\mathcal{L}(H,\mathcal{V})\)_._ _Given a string \(x\in R^{m}\), we use \(H(x)\) to denote the subspace \(H_{1,x_{1}}\oplus\cdots\oplus H_{m,x_{m}}\oplus H_{\text{true}}\), and we denote by \(\Pi_{H(x)}\) the orthogonal projector onto the space \(H(x)\)._ An important concept in the analysis of span programs and quantum query complexity is that of _witnesses_: **Definition 2** (Positive Witness).: _Given a span program \(\mathcal{P}=(H,\mathcal{V},|\tau\rangle,A)\) on \(R^{m}\) and \(x\in R^{m}\), \(|w\rangle\in H(x)\) is a positive witness for \(x\) in \(\mathcal{P}\) if \(A|w\rangle=|\tau\rangle\). If a positive witness exists for \(x\), we define the witness size of \(x\) in \(\mathcal{P}\) as_ \[w_{+}(x)=w_{+}(\mathcal{P},x)\coloneqq\min\left\{\||w\rangle\|^{2}:|w\rangle\in H(x)\text{ and }A|w\rangle=|\tau\rangle\right\}. \tag{1}\] _We say that \(|w\rangle\in H(x)\) is the optimal positive witness for \(x\) if \(\||w\rangle\|^{2}=w_{+}(\mathcal{P},x)\) and \(A|w\rangle=|\tau\rangle\)._ Our main algorithm produces a normalized version of this unique optimal positive witness, \(|w\rangle/\||w\rangle\|\). (To see that the optimal positive witness is unique, suppose for contradiction that there are two distinct optimal positive witnesses - then their average is also a positive witness, and by the parallelogram law it has strictly smaller witness size than either.) A span program \(\mathcal{P}\) encodes a function \(f:X\to\{0,1\}\) in the following way. We say \(f(x)=1\) if \(x\) has a positive witness, and \(f(x)=0\) if \(x\) does not have a positive witness. We say such a \(\mathcal{P}\) decides the function \(f\). We will also need the concept of an approximate negative witness.
**Definition 3** (Negative Error, Approximate Negative Witness).: _Given a span program \(\mathcal{P}=(H,\mathcal{V},|\tau\rangle,A)\) on \(R^{m}\) and \(x\in R^{m}\), we define the negative error of \(x\) in \(\mathcal{P}\) as_ \[e_{-}(x,\mathcal{P})\coloneqq\min\left\{\|\langle\widetilde{\omega}|A\Pi_{H(x)}\|^{2}:\langle\widetilde{\omega}|\in\mathcal{L}(\mathcal{V},\mathbb{R}),\langle\widetilde{\omega}|\tau\rangle=1\right\}. \tag{2}\] _Note that \(e_{-}(x,\mathcal{P})=0\) if and only if \(\mathcal{P}\) decides a function \(f\) with \(f(x)=0\). Any \(\langle\widetilde{\omega}|\) such that \(\|\langle\widetilde{\omega}|A\Pi_{H(x)}\|^{2}=e_{-}(x,\mathcal{P})\) is called an approximate negative witness for \(x\) in \(\mathcal{P}\). We define the approximate negative witness size of \(x\) as:_ \[\widetilde{w}_{-}(x,\mathcal{P})\coloneqq\min\left\{\|\langle\widetilde{\omega}|A\|^{2}:\langle\widetilde{\omega}|\in\mathcal{L}(\mathcal{V},\mathbb{R}),\langle\widetilde{\omega}|\tau\rangle=1,\|\langle\widetilde{\omega}|A\Pi_{H(x)}\|^{2}=e_{-}(x,\mathcal{P})\right\}. \tag{3}\] _We call an approximate negative witness \(\langle\widetilde{\omega}|\) that also minimizes \(\|\langle\widetilde{\omega}|A\|^{2}\) an optimal approximate negative witness._ We use the following notation for maximum positive and approximate negative witness sizes: \[W_{+}(\mathcal{P},f)=W_{+}\coloneqq\max_{x\in f^{-1}(1)}w_{+}(\mathcal{P},x),\qquad\widetilde{W}_{-}(\mathcal{P},f)=\widetilde{W}_{-}\coloneqq\max_{x\in f^{-1}(1)}\widetilde{w}_{-}(x,\mathcal{P}). \tag{4}\] Note that we are restricting to \(1\)-inputs of \(f\). That is because our witness generation algorithm will assume that \(x\) is a \(1\)-input, unlike previous span-program-based algorithms that _decide_ \(f\). ### Quantum Query Algorithms The algorithms we develop are query algorithms, where we can access a unitary oracle \(O_{x}\) for some \(x\in X\subseteq R^{m}\) such that \(O_{x}\) acts on the space \(\mathbb{C}^{m}\otimes\mathbb{C}^{q}\) as \(O_{x}|i\rangle|a\rangle=|i\rangle|x_{i}+a\mod q\rangle\), where \(q=|R|\), \(x_{i}\) is the value of the \(i^{\text{th}}\) element of \(x\), and \(|i\rangle\in\mathbb{C}^{m}\) and \(|a\rangle\in\mathbb{C}^{q}\) are standard basis states. The query complexity of an algorithm is the number of times \(O_{x}\) must be used, in the worst case over \(x\in X\). In our case, we will also consider the expected query complexity on input \(x\), which is the average number of times \(O_{x}\) must be used when given a particular input \(x\), where the randomness is due to random events in the course of the algorithm. ### Graph Theory and Connection to Span Programs Let \(G=(V,E)\) be an undirected graph.3 We will particularly consider graphs with specially labeled vertices \(s,t\in V\), such that there is a path from \(s\) to \(t\) in \(G\). Let \(\overrightarrow{E}=\{(u,v):\{u,v\}\in E\}\); that is, \(\overrightarrow{E}\) is the set of directed edges corresponding to the edges of \(G\). Given a graph \(G=(V,E)\), for \(u\in V\), we denote by \(G_{u}^{-}\) the subgraph of \(G\) on the vertices \(V\setminus\{u\}\), and with overloading of notation, for \(S\subseteq E\), we denote by \(G_{S}^{-}\) the subgraph of \(G\) with edges \(S\) removed. (It will be clear from context whether we are removing edges or vertices from the graph.) Footnote 3: Our results easily extend to multigraphs, see [11], but for simplicity, we will not consider multigraphs here.
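To build intuition for Definitions 2 and 3, note that for small instances the optimal positive witness is just the minimum-norm solution of \(A|w\rangle=|\tau\rangle\) restricted to \(H(x)\), and can be computed classically with a pseudoinverse. The following numpy sketch is our own illustration of this linear algebra (the function name is hypothetical), not the quantum algorithm of Section 3:

```python
import numpy as np

def optimal_positive_witness(A, tau, proj_Hx):
    """Minimum-norm w with A w = tau and w in H(x) (cf. Definition 2).

    `A` is the span program matrix, `tau` the target vector, and `proj_Hx`
    the orthogonal projector onto H(x).  Returns (w, w_plus), or (None, inf)
    when x is a 0-input, i.e. no positive witness exists.
    """
    Ax = A @ proj_Hx                  # restrict A to the available subspace
    w = np.linalg.pinv(Ax) @ tau      # least-norm solution
    if not np.allclose(Ax @ w, tau):  # tau outside the span: no witness
        return None, np.inf
    return w, float(w @ w)            # w_+(x) = ||w||^2
```

(The least-norm solution returned by the pseudoinverse automatically lies in the row space of \(A\Pi_{H(x)}\), which is contained in \(H(x)\), so no separate projection step is needed.)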
On a graph \(G\) with \(s\) and \(t\) connected we will consider a _unit \(st\)-flow_, which is a linear combination of cycles and \(st\)-paths, formally defined as a function on \(\overrightarrow{E}\) with the following properties. **Definition 4** (Unit \(st\)-flow).: _Let \(G=(V,E)\) be an undirected graph with \(s,t\in V(G)\), and \(s\) and \(t\) connected. Then a unit \(st\)-flow on \(G\) is a function \(\theta:\overrightarrow{E}\rightarrow\mathbb{R}\) such that:_ 1. _For all_ \((u,v)\in\overrightarrow{E}\)_,_ \(\theta(u,v)=-\theta(v,u)\)_;_ 2. \(\sum_{v:(s,v)\in\overrightarrow{E}}\theta(s,v)=\sum_{v:(v,t)\in\overrightarrow{E}}\theta(v,t)=1\)_; and_ 3. _for all_ \(u\in V\setminus\{s,t\}\)_,_ \(\sum_{v:(u,v)\in\overrightarrow{E}}\theta(u,v)=0\)_._ **Definition 5** (Unit Flow Energy).: _Given a graph \(G=(V,E)\) and a unit \(st\)-flow \(\theta\) on \(G\), the unit flow energy of \(\theta\) is \(J(\theta)=\frac{1}{2}\sum_{e\in\overrightarrow{E}}\theta(e)^{2}\)._ **Definition 6** (Effective resistance).: _Let \(G=(V,E)\) be a graph with \(s,t\in V\). If \(s\) and \(t\) are connected in \(G\), the effective resistance of \(G\) between \(s\) and \(t\) is \(R_{s,t}(G)=\min_{\theta}J(\theta)\), where \(\theta\) runs over all unit \(st\)-flows of \(G\). If \(s\) and \(t\) are not connected in \(G\), \(R_{s,t}(G)=\infty\)._ Interpretation of the optimal flow. The \(st\)-flow with minimum energy is unique, and describes, for each edge, the electric current going through that edge if the graph represents a network of unit resistors and we put a potential difference between \(s\) and \(t\). The minimum energy flow has several other interpretations and connections to other graph properties. For reference, and for those who would like to build their intuition for this object, we have collected some of these relationships in Appendix A. Graph access. We turn graph problems into oracle problems by letting a string \(x\in\{0,1\}^{m}\) specify a subgraph \(G(x)\) of \(G\). In particular, we associate each edge \(e\in E\) with a number in \([m]\). Then, given a string \(x\in\{0,1\}^{m}\), let \(G(x)=(V,E(x))\) be the subgraph of \(G\) that contains an edge \(e\in E\) if \(e\) is associated with the integer \(i\in[m]\) and \(x_{i}=1\), where \(x_{i}\) is the \(i\)th bit of \(x\). In this oracle problem, one is given access to an oracle \(O_{x}\) for \(x\) (or classically, given the ability to query the values of the bits of \(x\) one at a time), and a description of the parent graph \(G\) along with the association between bits of \(x\) and edges of \(G\), and the goal is to determine something about the graph \(G(x)\) using as few queries as possible. Let \(E_{i}\subset E\) be the set of edges associated with the \(i\)th bit of \(x\). When not specified otherwise, one should assume that \(m=|E|\), and then associate each edge of \(G\) uniquely with a bit of the input string. In this case, when \(G\) is the complete graph, \(O_{x}\) is equivalent to query access to the adjacency matrix of a graph. When we consider subgraphs of the original graph (like \(G_{u}^{-}\)), we assume that the edges are associated with the same indices as in the original graph, unless otherwise specified. Most of the applications in this paper are related to the problem of detecting a path between \(s\) and \(t\) - more commonly called \(st\)-connectivity. We define \(st\)-conn\({}_{G}(x)\coloneqq 1\) if \(s\) and \(t\) are connected in \(G(x)\), and \(0\) otherwise.
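As a concrete illustration of Definitions 4-6 (and, anticipating Eq. (6) and Eq. (7) below, of the distribution our algorithm samples from), the minimum-energy unit \(st\)-flow can be computed classically from the graph Laplacian. The following numpy sketch is our own illustration (the helper name is hypothetical) and plays no role in the quantum algorithm:

```python
import numpy as np

def min_energy_st_flow(n, edges, s, t):
    """Minimum-energy unit st-flow and effective resistance R_{s,t}.

    `edges` lists each undirected edge {u, v} once, as a pair (u, v) fixing
    an orientation; vertices are 0, ..., n-1, and every edge is a unit
    resistor.  Returns (theta, R) where theta[k] is the flow on edges[k].
    """
    B = np.zeros((len(edges), n))          # signed edge-vertex incidence
    for k, (u, v) in enumerate(edges):
        B[k, u], B[k, v] = 1.0, -1.0
    L = B.T @ B                            # graph Laplacian
    demand = np.zeros(n)
    demand[s], demand[t] = 1.0, -1.0       # one unit in at s, out at t
    phi = np.linalg.pinv(L) @ demand       # vertex potentials
    theta = B @ phi                        # current theta(u,v) = phi_u - phi_v
    return theta, float(theta @ theta)     # energy J(theta) = R_{s,t}

# Two disjoint st-paths of lengths 1 and 2: the flow splits 2/3 vs 1/3,
# and R_{s,t} = 2/3.  Squaring and normalizing theta gives the edge-sampling
# distribution q_{G(x),s,t} of Eq. (7).
theta, R = min_energy_st_flow(3, [(0, 1), (0, 2), (2, 1)], s=0, t=1)
print(np.round(theta, 3), round(R, 3))    # [0.667 0.333 0.333] 0.667
```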
The following span program, which we denote by \(\mathcal{P}_{G_{st}}\), first introduced in Ref. [10] and used in the quantum setting in Ref. [1], decides \(st\)-conn\({}_{G}(x)\): for a graph \(G=(V,E)\), where \(m=|E|\), define the span program \(\mathcal{P}_{G_{st}}\) as: \[\begin{split}&\forall i\in[m],H_{i,1}=\text{span}\{|(u,v)\rangle:\{u,v\}\in E_{i}\},H_{i,0}=\{0\}\\ &\mathcal{V}=\text{span}\{|v\rangle:v\in V(G)\}\\ &|\tau\rangle=|s\rangle-|t\rangle\\ &\forall(u,v)\in\overrightarrow{E}:\ A|u,v\rangle=|u\rangle-|v\rangle.\end{split} \tag{5}\] For \(\mathcal{P}_{G_{st}}\), the approximate negative witness size is bounded by \(\widetilde{W}_{-}=O(n^{2})\) [11]. If \(s\) and \(t\) are connected in \(G(x)\), the optimal positive witness of \(x\) in \(\mathcal{P}_{G_{st}}\) is [1, 1] \[|\theta^{*}\rangle=\frac{1}{2}\sum_{e\in\overrightarrow{E}}\theta^{*}(e)|e\rangle, \tag{6}\] where \(\theta^{*}\) is the \(st\)-unit flow with minimal energy, so by Definitions 2 and 6, \(w_{+}(\mathcal{P}_{G_{st}},x)=\frac{1}{2}R_{s,t}(G(x))\). One of our main applications is to apply our witness state generation algorithm to the span program \(\mathcal{P}_{G_{st}}\), in which case we produce a quantum state close to \(|\theta^{*}\rangle/\||\theta^{*}\rangle\|\), where \(\theta^{*}\) is the optimal unit \(st\)-flow on \(G(x)\). If we were to create \(|\theta^{*}\rangle/\||\theta^{*}\rangle\|\) exactly, and then measure in the standard basis, the probability that we obtain the edge \(e\) is \(\theta^{*}(e)^{2}/(2R_{s,t}(G(x)))\). Let \(q_{G(x),s,t}\) denote the distribution such that for all \(e\in\overrightarrow{E}\), \[q_{G(x),s,t}(e)=\theta^{*}(e)^{2}/(2R_{s,t}(G(x))). \tag{7}\] Additionally, this optimal flow \(\theta^{*}\) is a convex combination of (self-avoiding) \(st\)-paths, as we prove in Appendix A: **Lemma 7**.: _An \(st\)-path in \(G(x)\) is a sequence of distinct vertices \(\vec{u}=(u_{0},\ldots,u_{\ell})\) such that \(s=u_{0}\), \(t=u_{\ell}\), and for all \(i\in[\ell]\), \((u_{i-1},u_{i})\in\overrightarrow{E}(G(x))\). From \(\vec{u}\), we define_ \[|\rho_{\vec{u}}\rangle=\frac{1}{\sqrt{2}}\sum_{i=0}^{\ell-1}(|u_{i},u_{i+1}\rangle-|u_{i+1},u_{i}\rangle) \tag{8}\] _and refer to all such states as \(st\)-path states of \(G(x)\). Then if \(|\theta^{*}\rangle\) is the optimal positive witness for \(x\) in \(\mathcal{P}_{G_{s,t}}\), it is a linear combination of \(st\)-path states in \(G(x)\)._ A final pair of tools we use consists of a quantum algorithm that decides \(st\)-conn\({}_{G}(x)\) with fewer queries in the case of small effective resistance, without knowing the effective resistance ahead of time, and a quantum algorithm for estimating the effective resistance: **Lemma 8** ([13]).: _Fix \(\delta>0\) and a family of \(n\)-vertex graphs \(G\) with vertices \(s\) and \(t\). Then there is a quantum algorithm \(\mathtt{PathDetection}(O_{x},G,s,t,\delta)\) such that,_ 1. _The algorithm returns_ \(st\)_-conn_\({}_{G}(x)\) _with probability_ \(1-O(\delta)\)_._ 2. _On input_ \(x\)_, the algorithm uses_ \(O\left(n\sqrt{R_{s,t}(G(x))}\log\left(\frac{n}{R_{s,t}(G(x))\delta}\right)\right)\) _expected queries if_ \(st\)_-conn_\({}_{G}(x)=1\)_, and_ \(O\left(n^{3/2}\log 1/\delta\right)\) _expected queries if_ \(st\)_-conn_\({}_{G}(x)=0\)_._ **Lemma 9** ([14]).: _Fix \(\delta>0\) and a family of \(n\)-vertex graphs \(G\) with vertices \(s\) and \(t\)._
Then there is a quantum algorithm \(\mathtt{WitnessSizeEst}(O_{x},G,s,t,\epsilon,\delta)\) that, on input \(x\) such that \(st\)-conn\({}_{G}(x)=1\), with probability \(1-\delta\), outputs an estimate \(\hat{R}\) for \(R_{s,t}(G(x))\) such that_ \[\left|\hat{R}-R_{s,t}(G(x))\right|\leq\epsilon R_{s,t}(G(x)), \tag{9}\] _using \(\widetilde{O}\left(\sqrt{\frac{R_{s,t}(G(x))n^{2}}{\epsilon^{3}}}\log(1/\delta)\right)\) expected queries; and on input \(x\) such that \(st\)-conn\({}_{G}(x)=0\), uses at most \(\widetilde{O}\left((n/\epsilon)^{3/2}\log(1/\delta)\right)\) queries._ Lemma 9 is a special case of [14, Theorem 3.8], which gives an algorithm for estimating the quantity \(w_{+}(x)\) from _any_ span program. If we apply this construction with the span program \(\mathcal{P}_{G_{s,t}}\), we can estimate its positive witness sizes, which are precisely \(\frac{1}{2}R_{s,t}(G(x))\). The algorithm described in [14, Theorem 3.8] assumes that the input is a \(1\)-input to \(st\)-conn\({}_{G}(x)\), but can easily be modified to always stop after at most \(\widetilde{O}\left((n/\epsilon)^{3/2}\log(1/\delta)\right)\) steps, regardless of the input, since \(R_{s,t}(G(x))\leq n\). The algorithm as stated also only works with bounded error, but the success probability can be amplified to \(1-\delta\) by repeating \(\log(1/\delta)\) times and taking the median estimate. ## 3 Witness Generation Our main technical result, on generating span program witness states, is the following: **Theorem 10**.: _Given a span program \(\mathcal{P}\) that decides a function \(f\), and constants \(\epsilon,\delta\), there is an algorithm (Algorithm 1) that, given as input an oracle \(O_{x}\) such that \(f(x)=1\) with optimal positive witness \(|w\rangle\), outputs a state \(|\hat{w}\rangle/\||\hat{w}\rangle\|\) such that \(\left\||w\rangle/\sqrt{w_{+}(x)}-|\hat{w}\rangle/\||\hat{w}\rangle\|\right\|^{2}\leq O(\epsilon)\) with probability \(1-O(\delta)\), and uses \(\widetilde{O}\left(\sqrt{\frac{w_{+}(x)\widetilde{W}_{-}}{\epsilon}}\log\left(\frac{1}{\delta}\right)\right)\) expected queries to \(O_{x}\)._ For comparison, a span program algorithm can _decide_ \(f\) with bounded error in expected query complexity \(\widetilde{O}\left(\sqrt{w_{+}(x)\widetilde{W}_{-}}\right)\), so Theorem 10 gives a matching complexity for generating a witness state. As we will see in Section 4.1, in the case of the span program \(\mathcal{P}_{G_{s,t}}\) for \(st\)-connectivity on subgraphs of \(G\), this implies that we can sample an \(st\)-path edge in the same complexity used by the span program algorithm to decide if an \(st\)-path exists. A key subroutine for our witness state generation algorithm will be quantum phase estimation. In quantum phase estimation, one implements a controlled version of a unitary \(U\) acting on a Hilbert space \(\mathcal{H}_{A}\) on an input state \(|\psi\rangle\in\mathcal{H}_{A}\). The state \(|\psi\rangle\) can be decomposed into its eigenbasis with respect to \(U\) as \(|\psi\rangle=\sum_{i}\alpha_{i}|\lambda_{i}\rangle\), where \(U|\lambda_{i}\rangle=e^{i\phi_{i}\pi}|\lambda_{i}\rangle\) and we say \(\phi_{i}\) is the phase of the state \(|\lambda_{i}\rangle\).
Then when phase estimation is performed with precision \(\Theta\), the probability that you measure a phase of \(0\) after the phase estimation procedure is approximately given by \(\sum_{i:|\phi_{i}|\leq\Theta}|\alpha_{i}|^{2}\), and the non-normalized state that results after measuring a phase of \(0\) is approximately \(\sum_{i:|\phi_{i}|\leq\Theta}\alpha_{i}|\lambda_{i}\rangle\). In other words, phase estimation can be used to project into the low phase space (with phase less than \(\Theta\)) with probability that depends on the amount of amplitude the original state had on low-phase eigenstates. For an accuracy parameter \(\epsilon\), the number of uses of \(U\) in phase estimation scales as \(O\left(\frac{1}{\Theta}\log\frac{1}{\epsilon}\right)\). A more rigorous description of the guarantees of phase estimation is given below in Lemma 11. The basic idea of the algorithm that we use to prove Theorem 10 is to apply phase estimation with a unitary \(U(\mathcal{P},x,\alpha)\) (which can be implemented with access to an oracle \(O_{x}\), and depends on a span program \(\mathcal{P}\) and a positive real parameter \(\alpha\)) on a state \(|\hat{0}\rangle\). We show that, relative to \(U(\mathcal{P},x,\alpha)\), the state \(|\hat{0}\rangle\) decomposes into two components: \(|\hat{0}\rangle\oplus\frac{1}{\alpha}|w\rangle\), which is a \(0\)-phase eigenstate of \(U(\mathcal{P},x,\alpha)\), and \(|\psi_{x,+}\rangle\), which has small overlap with the low-phase space of \(U(\mathcal{P},x,\alpha)\). If we do phase estimation with \(U(\mathcal{P},x,\alpha)\) on \(|\hat{0}\rangle\) with sufficiently small precision and then measure a phase of \(0\), then, as discussed above, we will approximately project into the state \(|\hat{0}\rangle\oplus\frac{1}{\alpha}|w\rangle\). From there, if we make the measurement \(\{|\hat{0}\rangle\!\langle\hat{0}|,I-|\hat{0}\rangle\!\langle\hat{0}|\}\) and obtain outcome \(I-|\hat{0}\rangle\!\langle\hat{0}|\), the state will project into \(|w\rangle\), as desired. Next, there comes a balancing act for our choice of \(\alpha\). When \(\alpha\) is too small, \(|\hat{0}\rangle\) has small overlap with the span of \(|\hat{0}\rangle\oplus\frac{1}{\alpha}|w\rangle\), so we are not very likely to measure a phase of \(0\) when we do phase estimation with \(U(\mathcal{P},x,\alpha)\) on \(|\hat{0}\rangle\). However, when \(\alpha\) gets too large, while it becomes very likely to measure a phase of \(0\) and thus obtain the state \(|\hat{0}\rangle\oplus\frac{1}{\alpha}|w\rangle\), we will be unlikely to subsequently measure outcome \(I-|\hat{0}\rangle\!\langle\hat{0}|\). The sweet spot is when \(\alpha\approx\sqrt{w_{+}(x)}\), in which case both measurement outcomes we require have a reasonable probability of occurring. Since we don't know \(w_{+}(x)\) ahead of time, we must first estimate an appropriate value of \(\alpha\) to use, which we do by iteratively testing larger and larger values of \(\alpha\).4 Our test involves estimating the probability of measuring a phase of \(0\) when phase estimation with \(U(\mathcal{P},x,\alpha)\) is performed on \(|\hat{0}\rangle\), which we show provides an estimate of \(\alpha/\sqrt{w_{+}(x)}\). Footnote 4: There is a similar algorithm in [19] that estimates \(w_{+}(x)\), but it is more precise than we require. ### 3.1 Proof of Theorem 10 Before introducing the algorithm we use to prove Theorem 10, we introduce some key concepts, lemmas, and theorems that will be used in the analysis.
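To quantify this balancing act, one can use the decomposition of \(|\hat{0}\rangle\) in Eq. (13) below: writing \(r=w_{+}(x)/\alpha^{2}\), the probability of measuring a phase of \(0\) is roughly \(1/(1+r)\), and the probability of then obtaining outcome \(I-|\hat{0}\rangle\!\langle\hat{0}|\) is roughly \(r/(1+r)\), so the joint success probability \(r/(1+r)^{2}\) is maximized at \(r=1\), i.e. \(\alpha=\sqrt{w_{+}(x)}\), where it equals \(1/4\). The following back-of-envelope Python sketch (our own illustration, not part of Algorithm 1) tabulates this trade-off:

```python
import numpy as np

# Heuristic model of the two measurement probabilities, with
# r = w_plus / alpha**2 (cf. Eq. (13)):
#   Pr[phase 0]                  ~ 1 / (1 + r)
#   Pr[I - |0><0| given phase 0] ~ r / (1 + r)
# Joint success ~ r / (1 + r)**2, maximized at alpha = sqrt(w_plus).
w_plus = 7.0  # hypothetical witness size, for illustration only
for alpha in [0.3, 1.0, np.sqrt(w_plus), 10.0, 30.0]:
    r = w_plus / alpha**2
    p_phase0, p_flag = 1 / (1 + r), r / (1 + r)
    print(f"alpha={alpha:7.3f}  Pr[phase 0]={p_phase0:.3f}  "
          f"joint success={p_phase0 * p_flag:.3f}")
```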
Let \(\tilde{H}=H\oplus\operatorname{span}\{|\hat{0}\rangle\}\), and \(\tilde{H}(x)=H(x)\oplus\operatorname{span}\{|\hat{0}\rangle\}\), where \(|\hat{0}\rangle\) is orthogonal to \(H\). Then we define \(\tilde{A}^{\alpha}\in\mathcal{L}(\tilde{H},\mathcal{V})\) as \[\tilde{A}^{\alpha}=\frac{1}{\alpha}|\tau\rangle\!\langle\hat{0}|-A. \tag{10}\] Let \(\Lambda^{\alpha}\in\mathcal{L}(\tilde{H},\tilde{H})\) be the orthogonal projection onto the kernel of \(\tilde{A}^{\alpha}\), and let \(\Pi_{x}\in\mathcal{L}(\tilde{H},\tilde{H})\) be the orthogonal projector onto \(\tilde{H}(x)\). Finally, let \(U(\mathcal{P},x,\alpha)=(2\Pi_{x}-I)(2\Lambda^{\alpha}-I)\). Note that \(2\Pi_{x}-I\) can be implemented with two applications of \(O_{x}\) [19, Lemma 3.1], and \(2\Lambda^{\alpha}-I\) can be implemented without any applications of \(O_{x}\).

We will use parallelized phase estimation, as described in Ref. [14], which provides improved error bounds over standard phase estimation. In particular, given a unitary \(U\) acting on a Hilbert space \(\mathcal{H}\), a precision \(\Theta>0\), and an accuracy \(\epsilon>0\), we can create a circuit \(D(U)\) that implements \(O(\log\frac{1}{\epsilon})\) parallel copies of the phase estimation circuit on \(U\), each to precision \(O(\Theta)\), each of which estimates the phase of a single copy of a state \(|\psi\rangle\). That is, \(D(U)\) acts on the space \(\mathcal{H}_{A}\otimes((\mathbb{C}^{2})^{\otimes b})_{B}\) where \(b=O\left(\log\frac{1}{\Theta}\log\frac{1}{\epsilon}\right)\), and \(A\) labels the input state register, and \(B\) labels the registers that store the results of the parallel phase estimations. We use the circuit \(D(U)\) to check if an input state has high overlap with the low-valued eigenphase-space of \(U\) [17, 18, 19]. To characterize the low phase space of a unitary \(U\), let \(P_{\Theta}(U)\) (or just \(P_{\Theta}\) when \(U\) is clear from context) be the projection onto \(\operatorname{span}\{|u\rangle:U|u\rangle=e^{i\theta}|u\rangle\text{ with }|\theta|\leq\Theta\}\) (the eigenspace of \(U\) with eigenphases of magnitude at most \(\Theta\)). Then the following lemma provides key properties of the parallel phase estimation circuit \(D(U)\):

**Lemma 11** ([17, 18, 19]).: _Let \(U\) be a unitary on a Hilbert space \(\mathcal{H}_{A}\), and let \(\Theta,\epsilon>0\). We call \(\Theta\) the precision and \(\epsilon\) the accuracy. Then there is a circuit \(D(U)\) that acts on the space \(\mathcal{H}_{A}\otimes((\mathbb{C}^{2})^{\otimes b})_{B}\) for \(b=O\left(\log\frac{1}{\Theta}\log\frac{1}{\epsilon}\right)\), and that uses \(O\left(\frac{1}{\Theta}\log\frac{1}{\epsilon}\right)\) controlled calls to \(U\), such that for any state \(|\psi\rangle\in\mathcal{H}_{A}\):_

1. \(D(U)(P_{0}|\psi\rangle)_{A}|0\rangle_{B}=(P_{0}|\psi\rangle)_{A}|0\rangle_{B}\)
2. \(\|P_{0}|\psi\rangle\|^{2}\leq\|(I_{A}\otimes|0\rangle\!\langle 0|_{B})D(U)(|\psi\rangle_{A}|0\rangle_{B})\|^{2}\leq\|P_{\Theta}|\psi\rangle\|^{2}+\epsilon\)_._

Iterative Quantum Amplitude Estimation is a robust version of amplitude estimation, which uses repeated applications of amplitude estimation to achieve improved error bounds:

**Lemma 12** (Iterative Quantum Amplitude Estimation [16]).: _Let \(\delta,p>0\) and let \(\mathcal{A}\) be a unitary quantum circuit such that \(\mathcal{A}|0\rangle=\alpha_{0}|0\rangle|\psi_{0}\rangle+\alpha_{1}|1\rangle|\psi_{1}\rangle\).
Then there is an algorithm that estimates \(|\alpha_{0}|^{2}\) to additive error \(\delta\) with success probability at least \(1-p\) using \(O\left(\frac{1}{\delta}\log\left(\frac{1}{p}\log\frac{1}{\delta}\right)\right)\) calls to \(\mathcal{A}\) and \(\mathcal{A}^{\dagger}\)._

A key mathematical tool in analyzing span program algorithms is the Effective Spectral Gap Lemma:

**Lemma 13** (Effective Spectral Gap Lemma, [19]).: _Let \(\Pi\) and \(\Lambda\) be projections, and let \(U=(2\Pi-I)(2\Lambda-I)\) be the unitary that is the product of their associated reflections. If \(\Lambda|w\rangle=0\), then \(\|P_{\Theta}(U)\Pi|w\rangle\|\leq\frac{\Theta}{2}\||w\rangle\|\)._

We will need the following relationship between optimal positive witnesses and optimal negative approximate witnesses:

**Theorem 14**.: _[19, Theorem 2.11] Given a span program \(\mathcal{P}=(H,\mathcal{V},|\tau\rangle,A)\) on \(R^{m}\) and \(x\in R^{m}\), if \(|w\rangle\) is the optimal positive witness for \(x\) and \(\langle\widetilde{\omega}|\) is an optimal negative approximate witness for \(x\), then_ \[|w\rangle=w_{+}(x)\Pi_{H(x)}(\langle\widetilde{\omega}|A)^{\dagger}. \tag{11}\]

As discussed following Theorem 10, we decompose the state \(|\hat{0}\rangle\) into a linear combination of two orthogonal states. They are \[|\psi_{x,0}\rangle=|\hat{0}\rangle+\frac{1}{\alpha}|w\rangle,\qquad|\psi_{x,+}\rangle=|\hat{0}\rangle-\frac{\alpha}{w_{+}(x)}|w\rangle, \tag{12}\] so we can write \(|\hat{0}\rangle\) as \[|\hat{0}\rangle=a_{0}|\psi_{x,0}\rangle+a_{+}|\psi_{x,+}\rangle,\quad\text{where}\quad a_{0}=\frac{1}{1+\frac{w_{+}(x)}{\alpha^{2}}},\qquad a_{+}=\frac{1}{1+\frac{\alpha^{2}}{w_{+}(x)}}. \tag{13}\]

We first show that \(|\psi_{x,0}\rangle\) is a \(0\)-phase eigenvector of \(U(\mathcal{P},x,\alpha)\). Note that \(\tilde{A}^{\alpha}|\psi_{x,0}\rangle=\frac{1}{\alpha}(|\tau\rangle-|\tau\rangle)=0\) (see Eq. (10)), so recalling that \(\Lambda^{\alpha}\) is the orthogonal projector onto the kernel of \(\tilde{A}^{\alpha}\), we have \(\Lambda^{\alpha}|\psi_{x,0}\rangle=|\psi_{x,0}\rangle\). Furthermore, since \(\Pi_{x}\) is the orthogonal projector onto \(\tilde{H}(x)=H(x)\oplus\text{span}\{|\hat{0}\rangle\}\), it follows that \(\Pi_{x}|\psi_{x,0}\rangle=|\psi_{x,0}\rangle\), where we use that \(|w\rangle\) is a positive witness, so \(|w\rangle\in H(x)\). Thus \(U(\mathcal{P},x,\alpha)|\psi_{x,0}\rangle=|\psi_{x,0}\rangle\).

On the other hand, \(|\psi_{x,+}\rangle\) has low overlap with \(P_{\Theta}(U(\mathcal{P},x,\alpha))\) for small enough \(\Theta\) and \(\alpha\), as the following lemma shows.

**Lemma 15**.: _If \(\alpha^{2}\geq 1/\widetilde{W}_{-}\), then \(\|P_{\Theta}(U(\mathcal{P},x,\alpha))|\psi_{x,+}\rangle\|\leq\Theta\alpha\sqrt{\widetilde{W}_{-}}\)._

Proof.: Let \(\langle\widetilde{\omega}|\) be an optimal negative approximate witness for \(x\) (see Definition 3), and let \[|v\rangle=|\hat{0}\rangle-\alpha(\langle\widetilde{\omega}|A)^{\dagger}. \tag{14}\] Using Theorem 14 and the fact that \(\Pi_{x}|\hat{0}\rangle=|\hat{0}\rangle\), we have that \[\Pi_{x}|v\rangle=|\hat{0}\rangle-\alpha\Pi_{H(x)}(\langle\widetilde{\omega}|A)^{\dagger}=|\hat{0}\rangle-\alpha\frac{|w\rangle}{w_{+}(x)}=|\psi_{x,+}\rangle. \tag{15}\] Now we will show \(\Lambda^{\alpha}|v\rangle=0\). Let \(|k\rangle\) be in the kernel of \(\tilde{A}^{\alpha}\), so \(\tilde{A}^{\alpha}|k\rangle=0\). Using Eq. (10) and rearranging, \[A|k\rangle=\frac{1}{\alpha}|\tau\rangle\langle\hat{0}|k\rangle. \tag{16}\]
Then \[\langle v|k\rangle=\langle\hat{0}|k\rangle-\alpha\langle\widetilde{\omega}|A|k\rangle=\langle\hat{0}|k\rangle-\langle\hat{0}|k\rangle\langle\widetilde{\omega}|\tau\rangle=0, \tag{17}\] where we have used Eqs. (14) and (16) and the properties of optimal negative approximate witnesses (in particular, \(\langle\widetilde{\omega}|\tau\rangle=1\)). Thus \(|v\rangle\) is orthogonal to any element of the kernel of \(\tilde{A}^{\alpha}\), so \(\Lambda^{\alpha}|v\rangle=0\). Now we can apply Lemma 13 to \(|v\rangle\) to get: \[\begin{split}\|P_{\Theta}(U(\mathcal{P},x,\alpha))|\psi_{x,+}\rangle\|&=\|P_{\Theta}(U(\mathcal{P},x,\alpha))\Pi_{x}|v\rangle\|\\ &\leq\frac{\Theta}{2}\||v\rangle\|\\ &=\frac{\Theta}{2}\sqrt{1+\alpha^{2}\widetilde{w}_{-}(x,\mathcal{P})}\\ &\leq\Theta\alpha\sqrt{\widetilde{W}_{-}},\end{split} \tag{18}\] where in the first line we have used Eq. (15), and in the last, our assumption that \(\alpha^{2}\widetilde{W}_{-}\geq 1\).

**Corollary 16**.: \(\|P_{0}(U(\mathcal{P},x,\alpha))|\psi_{x,+}\rangle\|=0\)_._

Proof.: Apply Lemma 15 with \(\Theta\) set to \(0\).

To prove Theorem 10, we analyze the following algorithm:

```
Input : Error tolerance \(\delta\), accuracy \(\epsilon\), span program \(\mathcal{P}\) that decides a function \(f\), oracle \(O_{x}\)
Output : A quantum state \(|\hat{w}\rangle/\||\hat{w}\rangle\|\) such that for the optimal positive witness \(|w\rangle\) for \(x\),
         \(\||w\rangle/\sqrt{w_{+}(x)}-|\hat{w}\rangle/\||\hat{w}\rangle\|\|^{2}\leq O(\epsilon)\) with probability \(1-O(\delta)\)
1   \(\epsilon^{\prime}\leftarrow\min\{\epsilon,1/96\}\);  \(T\leftarrow\lceil\log\sqrt{W_{+}\widetilde{W}_{-}}\rceil\);  \(p\leftarrow\min\{\delta/\log(W_{+}\widetilde{W}_{-}),1/\sqrt{W_{+}\widetilde{W}_{-}}\}\)
    // Probing Stage
2   for \(i=0\) to \(T\) do
3       \(\alpha\gets 2^{i}/\sqrt{\widetilde{W}_{-}}\)
4       \(\hat{a}\leftarrow\) Iterative Amplitude Estimation (Lemma 12) estimate (with probability of failure \(p\) and additive error \(1/48\)) of the probability of outcome \(|0\rangle_{B}\) in register \(B\) when \(D(U(\mathcal{P},x,\alpha))\) (see Lemma 11) acts on \(|\hat{0}\rangle_{A}|0\rangle_{B}\) with error \(\epsilon^{\prime}\) and precision \(\sqrt{\epsilon^{\prime}/(\alpha^{2}\widetilde{W}_{-})}\)
5       if \(\frac{15}{48}\leq\hat{a}\leq\frac{35}{48}\) then break
    // State Generation Stage
6   for \(j=1\) to \(\log(1/\delta)\) do
7       Apply \(D(U(\mathcal{P},x,\alpha))\) to \(|\hat{0}\rangle_{A}|0\rangle_{B}\) with error \(\epsilon^{\prime}\) and precision \(\sqrt{\epsilon^{\prime}/(\alpha^{2}\widetilde{W}_{-})}\)
8       Make a measurement with outcome \(M=\{(I-|\hat{0}\rangle\!\langle\hat{0}|)_{A}\otimes|0\rangle\!\langle 0|_{B}\}\) on the resultant state
9       if outcome \(M\) is measured then return the resultant state
10  return "failure"
```
**Algorithm 1** WitnessGeneration\((\mathcal{P},O_{x},\delta,\epsilon)\)

To analyze Algorithm 1, we will need the following lemma and corollary. In Algorithm 1, we estimate the probability of measuring the outcome \(|0\rangle\) in the \(B\) register after doing phase estimation. In the following lemma, we prove this probability is closely related to \(a_{0}\) from Eq. (13).
**Lemma 17**.: _Applying \(D(U(\mathcal{P},x,\alpha))\) with error \(\epsilon\) and precision \(\sqrt{\frac{\epsilon}{\alpha^{2}\widetilde{W}_{-}}}\) (see Lemma 11) to input state \(|\hat{0}\rangle_{A}|0\rangle_{B}\), for \(\alpha\geq 1/\sqrt{\widetilde{W}_{-}}\), results in the outcome \(|0\rangle\) in the \(B\) register with probability in the range \([a_{0},a_{0}+2\epsilon]\)._

Proof.: Throughout the proof, let \(U=U(\mathcal{P},x,\alpha)\). The probability that we measure \(|0\rangle\) in register \(B\) after we apply \(D(U)\) with error \(\epsilon\) and precision \(\Theta\) to \(|\hat{0}\rangle_{A}|0\rangle_{B}\) is, by Lemma 11 Item 2, at most \[\|P_{\Theta}(U)|\hat{0}\rangle\|^{2}+\epsilon=\|a_{0}P_{\Theta}(U)|\psi_{x,0}\rangle+a_{+}P_{\Theta}(U)|\psi_{x,+}\rangle\|^{2}+\epsilon, \tag{19}\] by Eq. (13). Now \(P_{\Theta}(U)|\psi_{x,0}\rangle\) and \(P_{\Theta}(U)|\psi_{x,+}\rangle\) are orthogonal, since \[\langle\psi_{x,0}|P_{\Theta}(U)P_{\Theta}(U)|\psi_{x,+}\rangle=\langle\psi_{x,0}|\psi_{x,+}\rangle=0, \tag{20}\] where we've used that \(P_{\Theta}(U)|\psi_{x,0}\rangle=|\psi_{x,0}\rangle\) and that \(|\psi_{x,0}\rangle\) and \(|\psi_{x,+}\rangle\) are orthogonal. Continuing from Eq. (19) and using the orthogonality condition, we have, using \(\Theta=\sqrt{\frac{\epsilon}{\alpha^{2}\widetilde{W}_{-}}}\), \[\begin{split}\|P_{\Theta}(U)|\hat{0}\rangle\|^{2}+\epsilon&=a_{0}^{2}\|P_{\Theta}(U)|\psi_{x,0}\rangle\|^{2}+a_{+}^{2}\|P_{\Theta}(U)|\psi_{x,+}\rangle\|^{2}+\epsilon\\ &\leq a_{0}^{2}\||\psi_{x,0}\rangle\|^{2}+a_{+}^{2}\Theta^{2}\alpha^{2}\widetilde{W}_{-}+\epsilon\qquad\text{by Lemma 15, since }\alpha^{2}\widetilde{W}_{-}\geq 1\\ &\leq a_{0}+a_{+}^{2}\epsilon+\epsilon\\ &\leq a_{0}+2\epsilon,\end{split} \tag{21}\] where we have used that \(\||\psi_{x,0}\rangle\|^{2}=1/a_{0}\), and \(a_{+}\leq 1\) (see Eq. (13)). By Lemma 11 Item 2, the probability that we measure \(|0\rangle\) in register \(B\) after applying \(D(U(\mathcal{P},x,\alpha))\) on \(|\hat{0}\rangle_{A}|0\rangle_{B}\) with error \(\epsilon\) and any precision is at least \[\|P_{0}(U)|\hat{0}\rangle\|^{2}=\|a_{0}P_{0}(U)|\psi_{x,0}\rangle+a_{+}P_{0}(U)|\psi_{x,+}\rangle\|^{2}=a_{0}^{2}\||\psi_{x,0}\rangle\|^{2}=a_{0}, \tag{22}\] where we have used Corollary 16.

**Corollary 18**.: _In Algorithm 1, if in an iteration of the Probing Stage, Iterative Amplitude Estimation does not fail at Line 4 and subsequently causes a break at Line 5, then_ \[a_{0}\in\left[\frac{1}{4},\frac{3}{4}\right],\qquad\frac{a_{0}^{2}w_{+}(x)}{\alpha^{2}}\in\left[\frac{3}{16},\frac{1}{4}\right]. \tag{23}\]

Proof.: If Iterative Amplitude Estimation does not fail at Line 4 and causes a break at Line 5, then we have an estimate \(\hat{a}\) that is in the range \([\frac{15}{48},\frac{35}{48}]\). Thus, because of the additive error of \(1/48\) in Iterative Amplitude Estimation, the probability of measuring outcome \(|0\rangle_{B}\) is in the range \([\frac{14}{48},\frac{36}{48}]\). By Lemma 17, this same probability is in the range \([a_{0},a_{0}+2\epsilon^{\prime}]\), so in particular these two ranges overlap. Thus, since we choose \(2\epsilon^{\prime}\) to be at most \(1/48\), we have that \[a_{0}\in\left[\frac{13}{48},\frac{36}{48}\right]\subset\left[\frac{1}{4},\frac{3}{4}\right]. \tag{24}\] Using \(a_{0}=(1+\frac{w_{+}(x)}{\alpha^{2}})^{-1}\) (see Eq. (13)), this implies the stated ranges for \(\frac{a_{0}^{2}w_{+}(x)}{\alpha^{2}}=a_{0}(1-a_{0})\).
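As a sanity check on these quantities, the following numpy snippet (our own toy construction, not taken from the paper) instantiates the smallest possible span program, \(H=\operatorname{span}\{|h\rangle\}\), \(\mathcal{V}=\mathbb{C}\), \(A|h\rangle=|\tau\rangle=1\), and \(H(x)=H\), so that \(|w\rangle=|h\rangle\) and \(w_{+}(x)=1\), and verifies that \(|\psi_{x,0}\rangle\) is a \(0\)-phase eigenvector of \(U(\mathcal{P},x,\alpha)\), that the coefficients of Eq. (13) recombine to \(|\hat{0}\rangle\), and that \(\|P_{0}(U)|\hat{0}\rangle\|^{2}=a_{0}\), as in Eq. (22).

```
import numpy as np

# Smallest toy span program (our own construction): basis order for
# H~ = span{|0^>} + H is (|0^>, |h>), with A|h> = |tau> = 1 and w_+(x) = 1.
alpha = 0.7
w_plus = 1.0

# A~^alpha maps |0^> -> (1/alpha)|tau> and |h> -> -A|h> = -1  (Eq. (10)).
A_tilde = np.array([[1.0 / alpha, -1.0]])

# Lambda^alpha: orthogonal projector onto ker(A~^alpha), which is spanned
# by |0^> + (1/alpha)|w>.
k = np.array([1.0, 1.0 / alpha])
k /= np.linalg.norm(k)
Lam = np.outer(k, k)
Pi_x = np.eye(2)                           # here H~(x) = H~, so Pi_x = I
U = (2 * Pi_x - np.eye(2)) @ (2 * Lam - np.eye(2))

psi_0 = np.array([1.0, 1.0 / alpha])       # |psi_{x,0}> = |0^> + (1/alpha)|w>
psi_p = np.array([1.0, -alpha / w_plus])   # |psi_{x,+}>

print(np.allclose(U @ psi_0, psi_0))       # True: 0-phase eigenvector
print(np.allclose(U @ psi_p, -psi_p))      # True: in this toy it sits at phase pi

# |0^> = a_0 |psi_{x,0}> + a_+ |psi_{x,+}>, with a_0, a_+ as in Eq. (13).
a0 = 1.0 / (1.0 + w_plus / alpha**2)
ap = 1.0 / (1.0 + alpha**2 / w_plus)
zero_hat = np.array([1.0, 0.0])
print(np.allclose(a0 * psi_0 + ap * psi_p, zero_hat))  # True

# ||P_0(U)|0^>||^2 = a_0^2 |||psi_{x,0}>||^2 = a_0, matching Eq. (22);
# here P_0(U) coincides with the projector Lam.
print(np.isclose(np.linalg.norm(Lam @ zero_hat) ** 2, a0))  # True
```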
Now we prove the main performance guarantees of Algorithm 1, bounding the success probability and the expected query complexity, thus proving Theorem 10.

Proof of Theorem 10.: Letting \(U=U(\mathcal{P},x,\alpha)\), we analyze Algorithm 1. We first show that the algorithm will produce the desired state if both the Probing Stage and the State Generation Stage are successful. Then we will analyze the probability of this occurring, in order to bound the success probability of the algorithm.

We say the Probing Stage is successful if, in some iteration, Iterative Amplitude Estimation (having not failed in any earlier iteration) does not fail and then triggers a break at Line 5, in which case we can apply Corollary 18. Under these assumptions, we consider the outcome of a successful State Generation Stage, when we achieve the measurement outcome \(M=(I-|\hat{0}\rangle\!\langle\hat{0}|)_{A}\otimes|0\rangle\!\langle 0|_{B}\). The non-normalized state \(|\hat{w}\rangle\) that is produced upon measurement outcome \(M\) is \[\begin{split}|\hat{w}\rangle&=(I-|\hat{0}\rangle\!\langle\hat{0}|)_{A}\otimes|0\rangle\!\langle 0|_{B}D(U)|\hat{0}\rangle_{A}|0\rangle_{B}\\ &=a_{0}(I-|\hat{0}\rangle\!\langle\hat{0}|)_{A}\otimes|0\rangle\!\langle 0|_{B}D(U)|\psi_{x,0}\rangle_{A}|0\rangle_{B}+a_{+}(I-|\hat{0}\rangle\!\langle\hat{0}|)_{A}\otimes|0\rangle\!\langle 0|_{B}D(U)|\psi_{x,+}\rangle_{A}|0\rangle_{B}\\ &=a_{0}(I-|\hat{0}\rangle\!\langle\hat{0}|)_{A}|\psi_{x,0}\rangle_{A}|0\rangle_{B}+a_{+}(I-|\hat{0}\rangle\!\langle\hat{0}|)_{A}\otimes|0\rangle\!\langle 0|_{B}D(U)|\psi_{x,+}\rangle_{A}|0\rangle_{B}\\ &=\frac{a_{0}}{\alpha}|w\rangle_{A}|0\rangle_{B}+\underbrace{a_{+}(I-|\hat{0}\rangle\!\langle\hat{0}|)_{A}\otimes|0\rangle\!\langle 0|_{B}D(U)|\psi_{x,+}\rangle_{A}|0\rangle_{B}}_{=|\xi\rangle},\end{split} \tag{25}\] where in the third equality we used Lemma 11 Item 1, since \(P_{0}(U)|\psi_{x,0}\rangle=|\psi_{x,0}\rangle\), and in the final equality that \((I-|\hat{0}\rangle\!\langle\hat{0}|)|\psi_{x,0}\rangle=\frac{1}{\alpha}|w\rangle\).

We would like to bound \(\Delta\), where \[\begin{split}\Delta&\coloneqq\left\|\frac{|\hat{w}\rangle}{\||\hat{w}\rangle\|}-\frac{|w\rangle_{A}|0\rangle_{B}}{\sqrt{w_{+}(x)}}\right\|=\left\|\frac{\frac{a_{0}}{\alpha}|w\rangle_{A}|0\rangle_{B}+|\xi\rangle}{\||\hat{w}\rangle\|}-\frac{|w\rangle_{A}|0\rangle_{B}}{\sqrt{w_{+}(x)}}\right\|\\ &\leq\left|\frac{a_{0}}{\alpha\||\hat{w}\rangle\|}-\frac{1}{\sqrt{w_{+}(x)}}\right|\,\||w\rangle\|+\frac{\||\xi\rangle\|}{\||\hat{w}\rangle\|}\qquad\text{by the triangle inequality}\\ &=\left|\frac{a_{0}\sqrt{w_{+}(x)}}{\alpha\||\hat{w}\rangle\|}-1\right|+\frac{\||\xi\rangle\|}{\||\hat{w}\rangle\|},\end{split} \tag{26}\] using \(\||w\rangle\|=\sqrt{w_{+}(x)}\).

To bound \(\||\xi\rangle\|\), we have \[\||\xi\rangle\|^{2}=a_{+}^{2}\left\|(I-|\hat{0}\rangle\!\langle\hat{0}|)_{A}\otimes|0\rangle\!\langle 0|_{B}D(U)|\psi_{x,+}\rangle_{A}|0\rangle_{B}\right\|^{2}\leq\left\|(I_{A}\otimes|0\rangle\!\langle 0|_{B})D(U)|\psi_{x,+}\rangle_{A}|0\rangle_{B}\right\|^{2}\leq\|P_{\Theta}|\psi_{x,+}\rangle\|^{2}+\epsilon^{\prime}\leq\Theta^{2}\alpha^{2}\widetilde{W}_{-}+\epsilon^{\prime}\leq 2\epsilon^{\prime}, \tag{27}\] where the first inequality is because a projection can only decrease the norm of a vector, and \(a_{+}\leq 1\); the second inequality is by Lemma 11 Item 2; and the third inequality comes from Lemma 15 and our choice of \(\Theta\). Next, to bound \(\||\hat{w}\rangle\|\), we use the triangle inequality on the final line of Eq. (25), and Eq.
(27) to get \[\frac{a_{0}\sqrt{w_{+}(x)}}{\alpha}-\sqrt{2\epsilon^{\prime}}\leq\||\hat{w}\rangle\|\leq\frac{a_{0}\sqrt{w_{+}(x)}}{\alpha}+\sqrt{2\epsilon^{\prime}}. \tag{28}\] By our choice of \(\epsilon^{\prime}\), we have \(2\epsilon^{\prime}\leq 1/48\), and so, also applying Corollary 18 to Eq. (28), we have \[\frac{1}{4}<\sqrt{3/16}-\sqrt{1/48}\leq\||\hat{w}\rangle\|\leq\sqrt{1/4}+\sqrt{1/48}<\frac{3}{4}. \tag{29}\] Rearranging Eq. (28) and applying Eq. (29), we have \[\left|\frac{a_{0}\sqrt{w_{+}(x)}}{\alpha\||\hat{w}\rangle\|}-1\right|\leq\frac{\sqrt{2\epsilon^{\prime}}}{\||\hat{w}\rangle\|}. \tag{30}\] Then plugging Eqs. (27), (29) and (30) into Eq. (26) we have: \[\Delta\leq\frac{2\sqrt{2\epsilon^{\prime}}}{\||\hat{w}\rangle\|}<8\sqrt{2\epsilon^{\prime}}=O(\sqrt{\epsilon}), \tag{31}\] and hence \(\Delta^{2}=O(\epsilon)\), as required by Theorem 10.

Now we analyze the probability that both the Probing Stage and State Generation Stage are successful, resulting in the state \(|\hat{w}\rangle/\||\hat{w}\rangle\|\) as in Eq. (26). First note that there is a value of \(\alpha\) (if we iterate in the Probing Stage long enough) that will cause us to break out of the Probing Stage if Iterative Amplitude Estimation does not fail. In particular, when \(w_{+}(x)/\alpha^{2}\in[1/2,2]\), then from Eq. (13), \(a_{0}\in[1/3,2/3]\). Thus, by Lemma 17 and since \(2\epsilon^{\prime}\leq 1/48\), the probability of outcome \(|0\rangle_{B}\) is in \([16/48,33/48]\), which in Line 5 causes us to leave the Probing Stage if Iterative Amplitude Estimation does not fail. Such a value of \(\alpha\) is reached, as we are doubling \(\alpha\) at each iteration of the Probing Stage, causing \(w_{+}(x)/\alpha^{2}\) to decrease, and initially we have \(w_{+}(x)/\alpha^{2}=w_{+}(x)\widetilde{W}_{-}\geq 1\). Thus if no error occurs, the condition of Line 5 will be satisfied after some number \(L\) of rounds such that \(L\in O(\log(w_{+}(x)\widetilde{W}_{-}))=O(\log(W_{+}\widetilde{W}_{-}))\). As the probability of failing a single Iterative Amplitude Estimation round is \(p\leq\delta/\log(W_{+}\widetilde{W}_{-})\) (see Line 1), the probability of leaving the Probing Stage when Line 5 is satisfied (rather than before or after) is at least \[(1-p)^{L}=1-O(\delta). \tag{32}\]

Assuming that we have successfully left the Probing Stage without failure, we next calculate the probability of getting a measurement outcome \(M\) during the at most \(\log(1/\delta)\) iterations of the State Generation Stage. The probability of getting outcome \(M\) is lower bounded by (from Eq. (29)) \[\||\hat{w}\rangle\|^{2}\geq 1/16. \tag{33}\] Thus the probability of success in the State Generation Stage is \[1-(15/16)^{\log(1/\delta)}=1-O(\delta). \tag{34}\] Combining Eqs. (32) and (34), our probability of successfully producing a state \(|\hat{w}\rangle/\||\hat{w}\rangle\|\) as in Eq. (26) is \[(1-O(\delta))(1-O(\delta))=1-O(\delta). \tag{35}\]
To calculate the expected query complexity, we first note that if we terminate in round \(t\in\{0,\ldots,\lceil\log\sqrt{W_{+}\widetilde{W}_{-}}\rceil\}\) of the Probing Stage, we use \[\sum_{i=0}^{t}O\left(\frac{2^{i}}{\sqrt{\epsilon}}\log\left(\frac{1}{\epsilon}\right)\log\left(\frac{1}{p}\right)\right)+O\left(\log\left(\frac{1}{\delta}\right)\frac{2^{t}}{\sqrt{\epsilon}}\log\left(\frac{1}{\epsilon}\right)\right)=O\left(\frac{2^{t}}{\sqrt{\epsilon}}\log\left(\frac{1}{\epsilon}\right)\log\left(\frac{1}{p\delta}\right)\right) \tag{36}\] queries, which comes from the cost of Iterative Amplitude Estimation (Lemma 12) applied to phase estimation (Lemma 11) in each round of the Probing Stage up to the \(t^{\text{th}}\) round, plus the cost of phase estimation in the State Generation Stage.

The probability that, in a given round \(t\), Iterative Amplitude Estimation fails, so that we might terminate with a misleading estimate \(\hat{a}\), is at most \(p\). Using Eq. (36), the total contribution to the expected query complexity from terminating in such a round, over all rounds, is at most \[\sum_{t=0}^{\lceil\log\sqrt{W_{+}\widetilde{W}_{-}}\rceil}O\left(p\frac{2^{t}}{\sqrt{\epsilon}}\log\left(\frac{1}{\epsilon}\right)\log\left(\frac{1}{p\delta}\right)\right)=O\left(p\sqrt{\frac{W_{+}\widetilde{W}_{-}}{\epsilon}}\log\left(\frac{1}{\epsilon}\right)\log\left(\frac{1}{p\delta}\right)\right), \tag{37}\] where in the sum we have actually included all rounds, not just those in which Iterative Amplitude Estimation fails; this is acceptable since we are deriving an upper bound on the expected query complexity.

If we terminate at a round \(t^{*}\) with \(\hat{a}\) in the range \([\frac{15}{48},\frac{35}{48}]\), which happens when Iterative Amplitude Estimation does not fail at Line 4 and then causes a break at Line 5, from Eq. (23) we have \(\frac{w_{+}(x)}{\alpha^{2}}\in\left[\frac{1}{3},4\right]\), and \(2^{t^{*}}=\alpha\sqrt{\widetilde{W}_{-}}\), so \(\sqrt{w_{+}(x)\widetilde{W}_{-}}/2\leq 2^{t^{*}}\leq\sqrt{3w_{+}(x)\widetilde{W}_{-}}\). Because we double \(\alpha\) at each iteration, there are only a constant number of rounds where we will find \(\hat{a}\) in the appropriate range, and we trivially upper bound the probability of terminating at any such round by 1. Using Eq. (36), these rounds add \[O\left(\sqrt{\frac{w_{+}(x)\widetilde{W}_{-}}{\epsilon}}\log\left(\frac{1}{\epsilon}\right)\log\left(\frac{1}{p\delta}\right)\right) \tag{38}\] to the total expected query complexity. Combining Eqs. (37) and (38), and using that we set \(p\) to be \(O\left(1/\sqrt{W_{+}\widetilde{W}_{-}}\right)\) (Line 1), we find the expected query complexity is \[O\left(\sqrt{\frac{w_{+}(x)\widetilde{W}_{-}}{\epsilon}}\log\left(\frac{1}{\epsilon}\right)\log\left(\frac{1}{p\delta}\right)\right)=\widetilde{O}\left(\sqrt{\frac{w_{+}(x)\widetilde{W}_{-}}{\epsilon}}\log\left(\frac{1}{\delta}\right)\right). \tag{39}\]

## 4 Graph Applications

### 4.1 Finding an Edge on a Path

In this section, we consider the problem of finding an edge on an \(st\)-path in \(G(x)\), which we denote \(st\)-edge\({}_{G}(x)\).
That is, given query access to a string \(x\) that determines a subgraph \(G(x)=(V,E(x))\) of an \(n\)-vertex graph \(G\), as described in Section 2.3 (if \(G\) is a complete graph, \(x\) is just the adjacency matrix of \(G(x)\)), with \(s,t\in V\) such that there is at least one path from \(s\) to \(t\) in \(G(x)\), output an edge \(e\in E(x)\) that is on a (self-avoiding) path from \(s\) to \(t\).

Classically, it is hard to imagine that this problem is much easier than finding a path, and indeed, the classical lower bound in Theorem 20 forces the algorithm to learn a complete path before it can find any edge on the path. However, we find that quantumly, when there are short or multiple paths, this problem is easier than any known path-finding algorithm. This opens up the possibility of improved quantum algorithms for cases where it is not necessary to know the complete path, like the \(st\)-cut set algorithm of Section 4.2.

**Theorem 19**.: _Fix \(p>0\), and a family of \(n\)-vertex graphs \(G\) with vertices \(s\) and \(t\). There is a quantum algorithm (Algorithm 2) that solves \(st\)-edge\({}_{G}(x)\) with probability \(1-O(p)\) and uses \(\widetilde{O}\left(\frac{n\sqrt{R_{s,t}(G(x))}}{p}\right)\) expected queries on input \(x\). More precisely, with probability \(1-O(p)\), the algorithm samples from a distribution \(\hat{q}\) such that the total variation distance between \(\hat{q}\) and \(q_{G(x),s,t}\) is \(O(\sqrt{p})\), where \(q_{G(x),s,t}(u,v)\) (defined in Eq. (7)) is proportional to \(\theta^{*}(u,v)^{2}\), and \(\theta^{*}\) is the optimal unit \(st\)-flow on \(G(x)\)._

To obtain this result, we run our witness state generation algorithm (Algorithm 1) using the span program for \(st\)-connectivity, \(\mathcal{P}_{G_{st}}\), and an oracle \(O_{x}\) that defines a graph \(G(x)\) with a path between \(s\) and \(t\). When successful, the output will be a quantum state that is approximately proportional to the optimal flow state, Eq. (6), which itself is a superposition of edges on paths by Lemma 7. Then from Eq. (7), when we then measure in the standard basis, the probability of obtaining an edge \(e\) should be close to \(q_{G(x),s,t}(e)\), and with high probability, we will measure some edge on a path.

Proof of Theorem 19.: We analyze Algorithm 2. If \(\texttt{WitnessGeneration}(\mathcal{P}_{G_{st}},O_{x},\delta,\epsilon)\) (see Algorithm 1) does not fail, which happens with probability \(1-O(\delta)=1-O(p)\), then by Theorem 10, \[|\hat{\theta}\rangle=|\theta^{*}\rangle/\||\theta^{*}\rangle\|+|\eta\rangle \tag{40}\] for some \(|\eta\rangle\) such that \(\||\eta\rangle\|^{2}=O(\epsilon)\), and from Eq. (6), \(|\theta^{*}\rangle=\frac{1}{2}\sum_{e\in\overrightarrow{E}}\theta^{*}(e)|e\rangle\), where \(\theta^{*}\) is the optimal unit \(st\)-flow in \(G(x)\), so \(\||\theta^{*}\rangle\|=\sqrt{R_{s,t}(G(x))}\).

Let \(P_{E(x),s,t}\) be the projection onto the set of edges in \(\overrightarrow{E}(x)\) that are on (self-avoiding) paths from \(s\) to \(t\). The probability that we measure such an edge when we measure \(|\hat{\theta}\rangle\) in the standard basis is the square of \[\left\|P_{E(x),s,t}|\hat{\theta}\rangle\right\|\geq\left\|P_{E(x),s,t}|\theta^{*}\rangle/\||\theta^{*}\rangle\|\right\|-\left\|P_{E(x),s,t}|\eta\rangle\right\|=1-O(\sqrt{\epsilon}), \tag{41}\] where we have used the triangle inequality, and the fact that \(P_{E(x),s,t}|\theta^{*}\rangle=|\theta^{*}\rangle\), by Lemma 7.
Continuing, we have probability \[\left\|P_{E(x),s,t}|\hat{\theta}\rangle\right\|^{2}\geq\left(1-O(\sqrt{\epsilon})\right)^{2}=1-O(\sqrt{\epsilon}). \tag{42}\] Thus our total probability of success of measuring an edge on a path is \((1-O(\delta))(1-O(\sqrt{\epsilon}))\). Since we are setting \(\epsilon\) to \(p^{2}\) and \(\delta\) to \(p\), our total probability of success is \(1-O(p)\).

Let \(\hat{q}\) be the output distribution of Algorithm 2. By the relationship between total variation distance and trace distance, we have that \(d(\hat{q},q_{G(x),s,t})\), the total variation distance between \(\hat{q}\) and \(q_{G(x),s,t}\), is at most the trace distance between \(|\hat{\theta}\rangle\) and \(|\theta^{*}\rangle/\||\theta^{*}\rangle\|\) (see e.g. [10]), so \[d(\hat{q},q_{G(x),s,t})\leq\sqrt{1-\left|\langle\hat{\theta}|\theta^{*}\rangle/\||\theta^{*}\rangle\|\right|^{2}}=\sqrt{1-\left|\langle\hat{\theta}|\hat{\theta}\rangle-\langle\hat{\theta}|\eta\rangle\right|^{2}}\leq\sqrt{1-\left(1-\||\eta\rangle\|\right)^{2}}\leq\sqrt{2\||\eta\rangle\|}=O(\epsilon^{1/4})=O(\sqrt{p}). \tag{43}\]

By Theorem 10, the expected query complexity of WitnessGeneration, and thus of Algorithm 2, is \[\widetilde{O}\left(\sqrt{\frac{w_{+}(x)\widetilde{W}_{-}}{\epsilon}}\log\left(\frac{1}{\delta}\right)\right)=\widetilde{O}\left(\frac{\sqrt{R_{s,t}(G(x))}n}{p}\right), \tag{44}\] where we have used the fact that, for \(\mathcal{P}_{G_{st}}\), \(w_{+}(x)=R_{s,t}(G(x))\) and \(\widetilde{W}_{-}=O(n^{2})\) [1, 19], and set \(\epsilon\) to \(p^{2}\) and \(\delta\) to \(p\), as in Algorithm 2.

We can use Theorem 19 to prove the following separation between the quantum and classical query complexity of finding an edge on a path:

**Theorem 20**.: _Let \(G=(V,E)\) with \(s,t\in V\) be an \(n\)-vertex complete graph, and suppose we are promised that \(G(x)\) has a path of length \(L\) for \(L\in[3,n/4]\) between \(s\) and \(t\) (\(L\) may depend on \(x\) and need not be known ahead of time). Then \(st\)-edge\({}_{G}(x)\) can be solved in \(\widetilde{O}(n\sqrt{L})\) expected quantum queries on input \(x\), while any classical algorithm has query complexity \(\Omega(n^{2})\)._

Proof.: For the quantum algorithm, we apply Theorem 19 with bounded probability of error \(p=\Omega(1)\), and use the fact that \(R_{s,t}(G)=O(L)\).

For the classical lower bound, we reduce the following problem to path-edge finding: Given a string \(x\) of \(N=2^{\ell}\) bits, \((x_{\sigma})_{\sigma\in\{0,1\}^{\ell}}\), such that there is a unique \(\sigma^{*}\) with \(x_{\sigma^{*}}=1\), output \(\sigma_{1}^{*}\), the first bit of the index of the unique 1-valued bit of \(x\). By an adversary argument similar to a standard OR lower bound, the bounded-error randomized query complexity of this problem is \(\Omega(N)\). We will show how to solve this problem with an algorithm for finding a path edge on a graph like the one depicted in Figure 1.

For \(x\in\{0,1\}^{N}\), let \(G(x)\) be a graph on \(n=\Theta(2^{\ell/2})\) vertices in which there is a unique \(st\)-path of length \(L\), for some odd \(L\), as shown in Figure 1. The vertex \(s\) is connected by a path of length \((L-3)/2\) to a vertex that is additionally connected to a set of \(2^{(\ell-1)/2}\) vertices, \(S_{s}^{(0)}=\{u_{0,\sigma}:\sigma\in\{0,1\}^{(\ell-1)/2}\}\).
In a symmetric manner, \(s\) is also connected by another disjoint path of length \((L-3)/2\) to a vertex that is additionally connected to a set of \(2^{(\ell-1)/2}\) vertices, \(S_{s}^{(1)}=\{u_{1,\sigma}:\sigma\in\{0,1\}^{(\ell-1)/2}\}\). In the same way, \(t\) is connected by a pair of disjoint paths of length \((L-3)/2\) to a pair of vertices, additionally connected to \(S_{t}^{(0)}=\{v_{0,\sigma}:\sigma\in\{0,1\}^{(\ell-1)/2}\}\) and \(S_{t}^{(1)}=\{v_{1,\sigma}:\sigma\in\{0,1\}^{(\ell-1)/2}\}\) respectively. All edges described so far (the black edges in Figure 1) are always present in \(G(x)\) (we simulate querying the associated input bits by just outputting 1). We now describe edges whose presence in \(G(x)\) is determined by \(x\). For \(b\in\{0,1\}\), there is a potential edge between every pair of vertices \(u_{b,\sigma}\in S_{s}^{(b)}\) and \(v_{b,\sigma^{\prime}}\in S_{t}^{(b)}\), with the label \(x_{b\sigma\sigma^{\prime}}\), meaning exactly one of these is present in \(G(x)\): the one with \(\sigma^{*}=b\sigma\sigma^{\prime}\). All remaining possible edges are never present in \(G(x)\) (we simulate querying their associated input bits by just outputting 0).

We can find the first bit of \(\sigma^{*}\) by running the edge finding algorithm on \(G(x)\). Assuming the output is correct, there are the following possibilities:

1. If the algorithm outputs an edge from the middle part of the graph, then it must be the one labelled by \(x_{\sigma^{*}}\), so \(\sigma^{*}\) is learned entirely.
2. If the algorithm outputs an edge from the left-hand side of the graph, it is on a path between \(s\) and \(S_{s}^{(b)}\) for some \(b\in\{0,1\}\), and we know that \(\sigma_{1}^{*}=b\).
3. If the algorithm outputs an edge from the right-hand side of the graph, it is on a path between \(t\) and \(S_{t}^{(b)}\) for some \(b\in\{0,1\}\), and we know that \(\sigma_{1}^{*}=b\).

In all cases, we have learned \(\sigma_{1}^{*}\). This gives a lower bound on path-edge finding of \(\Omega(N)=\Omega(2^{\ell})=\Omega(n^{2})\).

Figure 1: The solid black lines show the edges that are present in \(G(x)\) for any \(x\). In addition, \(G(x)\) contains a single edge between a vertex in \(S_{s}^{(b)}\) and a vertex in \(S_{t}^{(b)}\), where \(b=\sigma_{1}^{*}\), as in the dashed red edge, resulting in a single path of length \(L\).

### 4.2 Finding an \(st\)-cut set

Given a graph \(G(x)\) containing a path from \(s\) to \(t\), an \(st\)-cut set is a set of edges in \(G(x)\) such that when those edges are removed from \(G(x)\), there is no longer a path from \(s\) to \(t\). The \(st\)-cut set problem is that of finding an \(st\)-cut set. This problem has applications to detecting weak points in networks, in order to figure out how to strengthen a network, or conversely, to sabotage one. We first note that for graphs with a single \(st\)-path, Theorem 19 can immediately be used to find an \(st\)-cut set, since any edge on the path is an \(st\)-cut set.
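Before analyzing more general cut sets, it may help to see how the two quantities that drive these algorithms, the optimal unit \(st\)-flow \(\theta^{*}\) and the effective resistance \(R_{s,t}(G(x))\), can be computed classically. The sketch below (our own helper, using only the standard electrical-network characterization via the Laplacian pseudoinverse; the function name is ours) also prints the distribution \(q_{G(x),s,t}(e)\propto\theta^{*}(e)^{2}\) that the sampler of Theorem 19 approximates.

```
import numpy as np

# Classical helper (for intuition only, not part of any quantum algorithm):
# theta*(u,v) is the current on edge (u,v) when one unit of current is
# injected at s and extracted at t, and R_{s,t} is the resulting potential
# difference between s and t.
def st_flow_and_resistance(n, edges, s, t):
    L = np.zeros((n, n))
    for u, v in edges:                     # unweighted graph Laplacian
        L[u, u] += 1; L[v, v] += 1
        L[u, v] -= 1; L[v, u] -= 1
    Lp = np.linalg.pinv(L)                 # Laplacian pseudoinverse
    b = np.zeros(n); b[s], b[t] = 1.0, -1.0
    pot = Lp @ b                           # vertex potentials
    theta = {(u, v): pot[u] - pot[v] for u, v in edges}
    R_st = pot[s] - pot[t]                 # effective resistance
    return theta, R_st

# Example: two parallel st-paths of length 2; the flow splits evenly,
# so R_{s,t} = 1 and each edge is sampled with probability 1/4.
edges = [(0, 1), (1, 3), (0, 2), (2, 3)]
theta, R = st_flow_and_resistance(4, edges, s=0, t=3)
q = {e: th**2 / sum(x**2 for x in theta.values()) for e, th in theta.items()}
print(R)   # 1.0
print(q)   # q(e) = 1/4 for each of the four edges
```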
We can also analyze more complex situations, such as the following, in which we have an upper bound on the effective resistance of the graph, and a lower bound on the optimal unit \(st\)-flow going through every edge of some \(st\)-cut set:

**Theorem 21**.: _For functions \(R,g:\mathbb{N}\rightarrow\mathbb{R}_{>0}\), let \(G=(V,E)\) with \(s,t\in V\) be a family of \(n\)-vertex graphs, and suppose we are additionally promised that \(R_{s,t}(G(x))\leq R(n)\), and there exists an \(st\)-cut set \(C\subseteq E(x)\) such that for each \(\{u,v\}\in C\), \(\theta^{*}(u,v)^{2}\geq g(n)\), where \(\theta^{*}\) is the optimal unit \(st\)-flow in \(G(x)\). Then there is a quantum algorithm that outputs a set \(C^{\prime}\) such that \(C\subseteq C^{\prime}\) with bounded error, and has worst-case query complexity \(\widetilde{O}\left(\frac{R(n)^{2}n}{g(n)^{3/2}}\right)\)._

We can assume without loss of generality that the \(C\) in Theorem 21 is a minimal \(st\)-cut. While we are not guaranteed that the set \(C^{\prime}\) output by the algorithm referred to in Theorem 21 is minimal, it is still an \(st\)-cut as long as it contains \(C\), since its removal will disconnect \(s\) and \(t\).

To prove Theorem 21, we will use the following variation of the well-known "coupon collector" problem.

**Lemma 22**.: _Consider repeatedly sampling a random variable \(Z\) on a finite set \(\mathcal{S}\). Let \(C\subseteq\mathcal{S}\) be such that for each \(e\in C\), \(\Pr[Z=e]\geq B\). Let \(T\) be the number of samples of \(Z\) before we have sampled each element of \(C\) at least once. Then \(\mathbb{E}[T]=O\left(\frac{\log|C|}{B}\right)\)._

Proof.: For \(i\in\{1,\ldots,|C|\}\), the probability that \(Z\) is a new element of \(C\), after \(i-1\) elements have already been collected, is \(p_{i}\geq(|C|-(i-1))B\). Let \(T_{i}\) be the number of samples of \(Z\) after sampling \(i-1\) elements of \(C\), until we sample \(i\) elements of \(C\), so \(T_{i}\) is a geometric random variable with \[\mathbb{E}[T_{i}]=1/p_{i}\leq((|C|-(i-1))B)^{-1}. \tag{45}\] From this we can compute \[\mathbb{E}[T]=\sum_{i=1}^{|C|}\mathbb{E}[T_{i}]\leq\sum_{i=1}^{|C|}\frac{1}{(|C|-(i-1))B}=\frac{1}{B}\sum_{j=1}^{|C|}\frac{1}{j}=\Theta\left(\frac{\log|C|}{B}\right). \tag{46}\]

Proof of Theorem 21.: We use parameters \(T^{\prime}\) and \(\epsilon\), to be defined shortly, and \(\delta=1/4\). Our strategy is to repeatedly run \(\texttt{WitnessGeneration}(\mathcal{P}_{G_{st}},O_{x},\delta,\epsilon)\) (Algorithm 1) to produce an approximate witness state, and then measure the resultant state in the standard basis to get an edge \(e\), which we add to \(C^{\prime}\). We repeat this \(T^{\prime}\) times, before outputting \(C^{\prime}\).

Let \(Z\) be the random variable on \(E\cup\{\texttt{Failure}\}\) representing the measured output of one call to Algorithm 1. We set \(\epsilon=\Theta\left(\frac{g(n)}{R(n)}\right)\) small enough so that if the algorithm does not fail, we produce a state \(|\theta^{*}\rangle/\||\theta^{*}\rangle\|+|\eta\rangle\) where \(\||\eta\rangle\|^{2}\leq g(n)/R(n)\) (see Eq. (40) and the following discussion). Then the probability that we sample an edge \(e^{\prime}\in C\) when we measure in the standard basis is \[\left|\langle e^{\prime}|\left(|\theta^{*}\rangle/\||\theta^{*}\rangle\|+|\eta\rangle\right)\right|^{2}=\left|2\theta^{*}(e^{\prime})/\sqrt{R_{s,t}(G(x))}-\langle e^{\prime}|\eta\rangle\right|^{2}\geq\left(2\sqrt{g(n)/R(n)}-\sqrt{g(n)/R(n)}\right)^{2}=\Omega(g(n)/R(n)). \tag{47}\]
Since the probability of one call to Algorithm 1 not failing is \(1-\delta=\Omega(1)\), for every \(e^{\prime}\in C\) we have \(\Pr[Z=e^{\prime}]\geq B\) for some \(B=\Omega(g(n)/R(n))\). Thus, by Lemma 22, the expected number of calls to Algorithm 1 before \(C\subseteq C^{\prime}\) is at most: \[\mathbb{E}[T]=O\left(\frac{R(n)}{g(n)}\log|C|\right)=O\left(\frac{R(n)}{g(n)}\log n\right). \tag{48}\] By Markov's inequality, if we set \(T^{\prime}=100\,\mathbb{E}[T]\), the algorithm will succeed with bounded error. By Theorem 10, each call to Algorithm 1 has expected query complexity \[\widetilde{O}\left(\sqrt{\frac{R_{s,t}(G(x))n^{2}}{\epsilon}}\right)=\widetilde{O}\left(n\sqrt{\frac{R(n)}{g(n)/R(n)}}\right)=\widetilde{O}\left(\frac{nR(n)}{\sqrt{g(n)}}\right), \tag{49}\] so the total expected query complexity is \[\widetilde{O}\left(T^{\prime}\frac{nR(n)}{\sqrt{g(n)}}\right)=\widetilde{O}\left(\frac{R(n)}{g(n)}\frac{nR(n)}{\sqrt{g(n)}}\right)=\widetilde{O}\left(\frac{nR(n)^{2}}{g(n)^{3/2}}\right). \tag{50}\]

We can get a worst-case algorithm by stopping after \(100\) times the expected number of steps, if the algorithm is still running, and outputting the current \(C^{\prime}\). We have no guarantee on the correctness of \(C^{\prime}\) in that case, but by Markov's inequality, this only happens with probability \(1/100\).

We can use Theorem 21 to prove the following result for finding an \(st\)-cut set in a particular family of graphs with expander subgraphs and a single \(st\)-cut edge.

**Corollary 23**.: _Let \(G=(V,E)\) with \(s,t\in V\) be a family of \(n\)-vertex graphs, and suppose we are additionally promised that \(G(x)\) consists of two disjoint, \(d\)-regular (for \(d\geq 3\)), constant expansion subgraphs, each on \(n/2\) vertices, where \(s\) and \(t\) are always put in separate subgraphs, plus a single additional edge connecting the two subgraphs. Then there is a quantum algorithm that finds the \(st\)-cut edge with bounded error in worst-case \(\widetilde{O}(n)\) queries, while any classical algorithm has query complexity \(\Omega(n^{2})\)._

Proof.: For a classical algorithm, even if the algorithm had complete knowledge of the two subgraphs, there would be \(\Omega(n^{2})\) possible locations for the connecting edge, reducing the problem to search, requiring \(\Omega(n^{2})\) queries.

For the quantum algorithm, note that the maximum effective resistance between any two points in a \(d\)-regular (for \(d\geq 3\)), constant expansion graph on \(n\) vertices is \(O(1/d)\) [10]. Thus \(R_{s,t}(G(x))=O(1)\). Additionally, since there is only one edge \(e^{\prime}\) connecting the two subgraphs, the optimal unit \(st\)-flow on \(e^{\prime}\), \(\theta^{*}(e^{\prime})\), must be equal to \(1\). Applying Theorem 21 with \(R(n)=O(1)\) and \(g(n)=\Omega(1)\), we get a worst-case bounded-error quantum query complexity of \(\widetilde{O}(n)\).

### 4.3 Path Finding

In this section, we consider the problem of finding an \(st\)-path in \(G(x)\), which we denote \(st\)-\(\textsc{path}_{G}(x)\). That is, given query access to a string \(x\) that determines a subgraph \(G(x)=(V,E(x))\) of an \(n\)-vertex graph \(G\), as described in Section 2.3 (if \(G\) is a complete graph, \(x\) is just the adjacency matrix of \(G(x)\)), with \(s,t\in V\) such that there is at least one path from \(s\) to \(t\) in \(G(x)\), output a path from \(s\) to \(t\).
A path is a sequence of _distinct_ vertices \(\vec{u}=(u_{0},\ldots,u_{\ell})\) such that \(s=u_{0}\), \(t=u_{\ell}\), and for all \(i\in[\ell]\), \((u_{i-1},u_{i})\in\overrightarrow{E}(G(x))\).

To solve \(st\)-\(\textsc{path}_{G}\), one might expect that we could simply apply Algorithm 2 multiple times, storing each found edge and identifying its endpoints to reduce the size of the graph, until a path is found. However, such an algorithm could run into challenges that produce slow running times. For example, in a graph where there are many \(st\)-paths, the algorithm could spend too much time sampling edges from different paths, rather than focusing on completing a single path. In the case of a single \(st\)-path, such a strategy would not take advantage of the fact that once one edge on the path is found, the problem reduces to two connectivity subproblems (from \(s\) to the found edge, and from \(t\) to the found edge) that each typically have significantly smaller query complexities than the original problem. Thus we develop two algorithms that allow us to prove tighter expected query complexity bounds than Ref. [5] for the case of short longest \(st\)-paths: one for the case of a single \(st\)-path, and one for generic graphs.

Before getting into quantum algorithms for path detection, we note the following corollary of Theorem 20, via a reduction from path-edge finding to path finding, that characterizes the classical query complexity of path finding in the case of short longest \(st\)-paths:

**Corollary 24**.: _Let \(G=(V,E)\) with \(s,t\in V\) be an \(n\)-vertex complete graph and suppose we are promised that \(G(x)\) has a path of length \(L\) for \(L\in[3,n/4]\) between \(s\) and \(t\). Then \(st\)-\(\textsc{path}_{G}(x)\) has randomized query complexity \(\Omega(n^{2})\)._

#### 4.3.1 Graph with a Single Path

When the graph \(G(x)\) is known to have a single \(st\)-path, we use a divide-and-conquer algorithm to find the path. To show that the divide-and-conquer approach is useful, we first consider the simpler algorithm (as described above) that uses Theorem 19 to find an edge \(\{u,v\}\) on the path; once that edge is found, the algorithm is run on a new graph where vertices \(u\) and \(v\) are identified. This process is continued until the edge \(\{s,t\}\) is found. Thus if the length of the path is initially \(L\), after an edge is found, the path length will be \(L-1\), and then \(L-2\) in the next iteration, etc. Ignoring error, and assuming the algorithm finds an edge in each round, by Theorem 19, the query complexity at the \(i\)th round will be \(\widetilde{O}(n\sqrt{L-i})\). Over the course of the \(L\) rounds, the total query complexity will be \[\sum_{i=0}^{L-1}\widetilde{O}(n\sqrt{L-i})=\widetilde{O}\left(nL^{3/2}\right). \tag{51}\]

For \(L\geq n^{2/3}\), this algorithm does not even outperform the best classical algorithm, and for \(L\geq n^{1/3}\) it does not outperform the quantum algorithm of Ref. [5]. We instead consider the following divide-and-conquer approach, described in detail in Algorithm 3. We use Algorithm 2 to find a set of edges, some of which are very likely to be on the path. Then we use Lemma 8 to verify which of those edges is actually on the path, and Lemma 9 to ensure we choose an edge near the center of the path, so we are left with two subproblems of approximately half the size. Finally, we make two recursive calls to find the unique path from \(s\) to the found edge, and the unique path from \(t\) to the found edge.
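To see quantitatively why halving helps, here is a back-of-the-envelope Python comparison (our own illustration; it ignores log factors, error handling, and the \((1/2+3\sqrt{\varepsilon_{1}})\)-splitting correction analyzed in the proof of Theorem 25 below) contrasting the naive cost of Eq. (51) with an idealized halving recurrence \(T(L)=n\sqrt{L}+2T(L/2)\):

```
import math

# Our own rough comparison: naive edge-by-edge contraction (Eq. (51))
# versus an idealized halving recurrence, both normalized by n*L.
# The naive ratio grows like sqrt(L), reflecting n*L^{3/2}; the
# divide-and-conquer ratio stays bounded, reflecting roughly n*L.

def naive_cost(n, L):
    return sum(n * math.sqrt(L - i) for i in range(L))

def dnc_cost(n, L):
    if L <= 1:
        return n
    return n * math.sqrt(L) + 2 * dnc_cost(n, L // 2)

for L in [16, 256, 4096]:
    n = 10 * L  # arbitrary; both costs scale linearly in n
    print(L, naive_cost(n, L) / (n * L), dnc_cost(n, L) / (n * L))
```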
**Theorem 25**.: _Let \(p>0\), and let \(G=(V,E)\) with \(s,t\in V\) be a family of \(n\)-vertex graphs, and suppose we are promised that \(G(x)\) contains a single \(st\)-path of some length \(L\) (\(L\) may depend on \(x\) and need not be known ahead of time). Then there is a quantum algorithm (Algorithm 3) that with probability \(1-O(p)\) solves \(st\)-\(\textsc{path}_{G}(x)\) and uses \(\widetilde{O}(nL^{1+o(1)}\log^{2}(1/p))\) expected queries on input \(x\)._

Proof.: We first analyze the probability of error, then we prove the correctness of Algorithm 3, assuming that no errors are made, and finally, we analyze the query complexity.

We will stop the algorithm after \(O(n)\) recursive calls. Since each recursive call returns an edge, and any path has length at most \(n\), this termination will not affect the success probability. We then bound our probability of error by \(O(p/n^{4})=O(p)\), by showing that the failure probability in each recursive call is \(O(p/n^{5})\).

```
Input : Failure tolerance \(p>0\), oracle \(O_{x}\) for the graph \(G(x)=(V,E(x))\), \(s,t\in V\) such that there is a unique path from \(s\) to \(t\)
Output : With probability \(1-O(p)\), a set of edges whose vertices form a path from \(s\) to \(t\) in \(G(x)\)
    // Base Cases
1   if \(s=t\) then return \(\emptyset\)
2   if \(\{s,t\}\in E(x)\) then return \(\{\{s,t\}\}\)
    // Finding Possible Edges on Path
3   \(\varepsilon_{1}\leftarrow\frac{1}{\log n}\)   // any \(\varepsilon_{1}=o(1)\) that is inverse polylog\((n)\) would suffice
4   \(S\leftarrow\emptyset\)
5   \(\ell\leftarrow\frac{2\log(n^{5}/p)}{\varepsilon_{1}}\)
6   for \(i=1\) to \(\ell\) do
7       \(e\leftarrow\texttt{EdgeFinder}(O_{x},\varepsilon_{1},G,s,t)\) (Algorithm 2)
8       if \(e\neq\)"Failure" and \(e=(u,v)\in\overrightarrow{E}(x)\) then \(S\gets S\cup\{(u,v),(v,u)\}\)
    // Finding a possible edge that is actually on a path
9   \(\delta\gets p/(\ell n^{5})\)
10  for \((u,v)\in S\) do
11      Initialize \(\texttt{PathDetection}(O_{x},G^{-}_{\{u,v\}},s,u,\delta)\) and \(\texttt{PathDetection}(O_{x},G^{-}_{\{u,v\}},v,t,\delta)\) (Lemma 8)
12  \(flag\leftarrow\) True
13  while \(flag\) do
14      Run in parallel each PathDetection algorithm initialized in Line 11, until each algorithm applies \(O_{x}\) once or terminates (or do nothing for those algorithms that have terminated previously)
15      for \((u,v)\in S\) do
16          if \(\texttt{PathDetection}(O_{x},G^{-}_{\{u,v\}},s,u,\delta)\) and \(\texttt{PathDetection}(O_{x},G^{-}_{\{u,v\}},v,t,\delta)\) have both terminated in this iteration of the while loop and both detected paths then
17              \(\varepsilon_{2}\leftarrow\sqrt{\varepsilon_{1}}\)
18              \(\varepsilon_{3}\gets 2\sqrt{\varepsilon_{1}}\)
19              \(\tilde{k}\leftarrow\texttt{WitnessSizeEst}(O_{x},G,s,u,\varepsilon_{2},\delta)\) (Lemma 9)   // estimate of the distance from \(s\) to \(u\)
20              if \(|\tilde{k}-L/2|\leq\varepsilon_{3}L\) then
21                  \((u^{*},v^{*})\leftarrow(u,v)\)
22                  \(flag\leftarrow\) False
    // Recursive call
23  return \(\{(u^{*},v^{*})\}\cup\texttt{SinglePathFinder}(O_{x},p,G,s,u^{*})\cup\texttt{SinglePathFinder}(O_{x},p,G,v^{*},t)\)
```
**Algorithm 3** SinglePathFinder\((O_{x},p,G,s,t)\)

We say a failure occurs (in some recursive call) if any of the following happens:

1. Any one of the at most \(4\ell\) PathDetection algorithms errs. This has probability \(O(\ell\delta)=O(p/n^{5})\), by our choice of \(\delta=p/(\ell n^{5})\).
2. One of the at most \(O(\ell)\) calls to WitnessSizeEst produces an estimate that is not within the desired relative error. This has probability \(O(\ell\delta)=O(p/n^{5})\).
3. None of the \(\ell\) iterations of EdgeFinder produces an edge that is on the \(st\)-path, and moreover, that is within \((\varepsilon_{3}-\varepsilon_{2})L=\sqrt{\varepsilon_{1}}L\) of the middle of the path. The absence of this type of failure is sufficient to guarantee that the condition on Line 20 will be satisfied, as long as WitnessSizeEst is also successful.

We analyze the probability of the last event, assuming the first two do not occur. Let \(e_{0},\ldots,e_{L-1}\) denote the path edges, in order, in the unique \(st\)-path in \(G(x)\). For each of the \(\ell\) runs of EdgeFinder, the probability that it does not output "Failure" is at least \(1-\varepsilon_{1}\). Conditioned on the output of EdgeFinder not being "Failure," by Theorem 19, we sample from a distribution \(\hat{q}\) that is \(\sqrt{\varepsilon_{1}}\)-close in total variation distance to the uniform distribution over edges on the \(st\)-path. Thus, the probability that we sample an edge in the set \[R=\{e_{k}:k\in[L/2-(\varepsilon_{3}-\varepsilon_{2})L,L/2+(\varepsilon_{3}-\varepsilon_{2})L]\}, \tag{52}\] where \(e_{k}\) is the \(k^{\text{th}}\) path edge, is: \[\hat{q}(R)\geq\frac{|R|}{L}-\sqrt{\varepsilon_{1}}=2(\varepsilon_{3}-\varepsilon_{2})-\sqrt{\varepsilon_{1}}=2(2\sqrt{\varepsilon_{1}}-\sqrt{\varepsilon_{1}})-\sqrt{\varepsilon_{1}}=\sqrt{\varepsilon_{1}}. \tag{53}\] Thus, using \(\varepsilon_{1}\leq 1/2\), each of the \(\ell\) samples has probability at least \((1-\varepsilon_{1})\sqrt{\varepsilon_{1}}\geq\sqrt{\varepsilon_{1}}/2\) of being a path edge in the correct range, \(R\). Using Hoeffding's bound, the probability that none of them is a path edge in the correct range is thus at most: \[e^{-2\ell(\sqrt{\varepsilon_{1}}/2)^{2}}=e^{-\ell\varepsilon_{1}/2}=e^{-\log(n^{5}/p)}=O(n^{-5}p) \tag{54}\] by our choice of \(\ell=2\log(n^{5}/p)/\varepsilon_{1}\). The total probability of failure in one round is thus at most \(O(p/n^{5})\).

We prove correctness using induction on \(L\), the length of the path, assuming no failure occurs. For the base case, if \(L=0\) or \(L=1\), we will correctly return the path in Lines 1 and 2. For the inductive case, let \(L^{\prime}\geq 1\). We assume SinglePathFinder works correctly for all lengths \(L\) such that \(0\leq L\leq L^{\prime}\). Now consider a graph with \(L=L^{\prime}+1\). Then assuming no failure, we will sample at least one edge \((u,v)\) in the set \(R=\{e_{k}:k\in[L/2-(\varepsilon_{3}-\varepsilon_{2})L,L/2+(\varepsilon_{3}-\varepsilon_{2})L]\}\) (not doing so is a failure of the type specified by Item 3 in the list above). Then if there are no errors in the PathDetection algorithms, the condition on Line 16 will be satisfied when \((u,v)\) corresponds to an edge in the path where \(u\) is closer to \(s\) and \(v\) is closer to \(t\). This is because we have removed \(\{u,v\}\) from the graph when we are running PathDetection, and since there is a unique \(st\)-path, there will only be a path from \(s\) to \(u\) and not from \(s\) to \(v\), and likewise for \(t\). Then for every edge \((u,v)\) that we have correctly found using PathDetection to be on a path, we apply WitnessSizeEst (see Lemma 9) to estimate \(R_{s,u}(G(x))\).
If \((u,v)=e_{k}\), then \(e_{0},\ldots,e_{k-1}\) is the unique \(su\)-path in \(G\), and it has length \(k\), and so \(R_{s,u}(G(x))=k\), and thus WitnessSizeEst is actually estimating \(k\). Assuming \((u,v)\in R\) (and we know this holds for at least one such edge), we have \(|k-L/2|\leq(\varepsilon_{3}-\varepsilon_{2})L\). Then since we assume WitnessSizeEst does not fail, it outputs an estimate \(\tilde{k}\) of \(k\), such that \(|\tilde{k}-k|\leq\varepsilon_{2}k\leq\varepsilon_{2}L\). Together, these conditions imply \(|\tilde{k}-L/2|\leq\varepsilon_{3}L\), which will trigger the **while** loop to halt. It is possible that we will break out of the loop for an edge not in \(R\), but at the least we know that if no failure occurs, we will certainly break out of the **while** loop with an edge \((u^{*},v^{*})\) on the path.

Now that we have the edge \((u^{*},v^{*})\), to find the rest of the path, we just need to find the rest of the path from \(s\) to \(u^{*}\) and from \(v^{*}\) to \(t\). But both of these problems will have path lengths between \(0\) and \(L^{\prime}\), so by the inductive assumption, the recursive calls in Line 23 will be correct, and will return the edges on the paths.

Turning to our analysis of the expected query complexity, we first bound the contribution to the expected query complexity in the case of a failure. As just discussed, a failure occurs with probability \(O(p/n^{4})\). Even in case of failure, each of our \(O(n\log(n/p))=O(n^{2}\log(1/p))\) calls to EdgeFinder, PathDetection, and WitnessSizeEst still has expected query complexity at most \(\widetilde{O}(n^{1.5}(1/\varepsilon_{1}+1/\varepsilon_{2}^{3/2})\log(1/\delta))=O(n^{2}\log(1/p))\) (for _any_ \(x\)), for a total query cost of \(O(n^{4}\log^{2}(1/p))\). Thus, the error case contributes an additive \(O(p\log^{2}(1/p))=O(1)\) to the expected query complexity.

Next, we create a recurrence relation for the expected query complexity, assuming no failure occurs. Let \(\mathbb{E}[T_{L}]\) be the expected query complexity of Algorithm 3 on a graph with \(n\) vertices, when there is a single path, and that path has length \(L\). For \(k\in\{0,\ldots,L-1\}\), let \(\tilde{q}_{L}(k)\) be the probability that the path edge that we find, \((u^{*},v^{*})\), is \(e_{k}\). Because we assume no subroutine call fails, we can assume that \(\tilde{k}\) is an estimate of \(k\) with relative error \(\varepsilon_{2}\), so \(|\tilde{k}-k|\leq\varepsilon_{2}k\leq\varepsilon_{2}L\). From the conditional statement in Line 20, we also have \(|\tilde{k}-L/2|\leq\varepsilon_{3}L\). Taken together, these imply: \[|k-L/2|\leq(\varepsilon_{2}+\varepsilon_{3})L=(\sqrt{\varepsilon_{1}}+2\sqrt{\varepsilon_{1}})L=3\sqrt{\varepsilon_{1}}L. \tag{55}\]
Thus with certainty (assuming no failure occurs), we will exit the **while** loop with \((u^{*},v^{*})=e_{k}\) for \(k\in[(1/2-3\sqrt{\varepsilon_{1}})L,(1/2+3\sqrt{\varepsilon_{1}})L]\), so: \[\mathbb{E}[T_{L}]=\widetilde{O}(\ell n\sqrt{L}/\varepsilon_{1})+\widetilde{O}(\ell n\sqrt{L}\log(1/\delta))+\widetilde{O}\left(\ell\frac{n\sqrt{L}}{\varepsilon_{2}^{3/2}}\log(1/\delta)\right)+\sum_{k=\lceil(1/2-3\sqrt{\varepsilon_{1}})L\rceil}^{\lfloor(1/2+3\sqrt{\varepsilon_{1}})L\rfloor}\tilde{q}_{L}(k)\left(\mathbb{E}[T_{k}]+\mathbb{E}[T_{L-k-1}]\right), \tag{56}\] where the first three terms come from: (1) running EdgeFinder (Algorithm 2, Theorem 19) \(\ell\) times; (2) at most \(O(\ell)\) parallel PathDetection (Lemma 8) algorithms; and (3) running WitnessSizeEst (Lemma 9) \(O(\ell)\) times; and the final term comes from the two recursive calls.

To get a function that is strictly increasing in \(L\), let \(T^{\prime}_{L}\coloneqq\max_{k\leq L}\mathbb{E}[T_{k}]\), so in particular \(\mathbb{E}[T_{L}]\leq T^{\prime}_{L}\), and \(T^{\prime}_{L}\) also satisfies the recursion in Eq. (56) (with \(=\) replaced by \(\leq\)). Then we have, for any \(k\in[(1/2-3\sqrt{\varepsilon_{1}})L,(1/2+3\sqrt{\varepsilon_{1}})L]\), \[\mathbb{E}[T_{k}]+\mathbb{E}[T_{L-k-1}]\leq 2T^{\prime}_{(1/2+3\sqrt{\varepsilon_{1}})L}. \tag{57}\] Thus, continuing from Eq. (56), and also using \(1/\varepsilon_{1}=\log n\) and \(1/\varepsilon_{2}=1/\sqrt{\varepsilon_{1}}=\sqrt{\log n}\), \(\ell=2\log(n^{5}/p)/\varepsilon_{1}=O(\log(1/p)\log^{2}n)\), and \(\log(1/\delta)=O(\log(\ell n/p))=\log(1/p)\text{polylog}(n,\log(1/p))\), we get \[\mathbb{E}[T_{L}]\leq T^{\prime}_{L}\leq\widetilde{O}\left(n\sqrt{L}\log^{2}(1/p)\right)+2T^{\prime}_{(1/2+3\sqrt{\varepsilon_{1}})L}. \tag{58}\] To analyze this recurrence, we add up the number of queries made in every recursive call. At the \(i^{\text{th}}\) level of recursion, there are \(2^{i}\) recursive calls, and each one makes \(\widetilde{O}\left(n\sqrt{L/b^{i}}\log^{2}(1/p)\right)\) queries itself, where \(b=(1/2+3\sqrt{\varepsilon_{1}})^{-1}\), before recursing further. Thus \[\begin{split}\mathbb{E}[T_{L}]&\leq\widetilde{O}\left(n\sqrt{L}\log^{2}(1/p)\right)+\sum_{i=1}^{\log_{b}L}2^{i}\sqrt{\frac{L}{b^{i}}}\cdot\widetilde{O}\left(n\log^{2}(1/p)\right)\\ &\leq\widetilde{O}\left(n\sqrt{L}\log^{2}(1/p)\right)+\widetilde{O}\left(n\sqrt{L}\log^{2}(1/p)\right)\left(2/\sqrt{b}\right)^{\log_{b}L}.\end{split} \tag{59}\] Letting \(\eta\coloneqq\frac{1}{1+\frac{1}{6\sqrt{\varepsilon_{1}}}}\), we have \(\eta=O(\sqrt{\varepsilon_{1}})=O(1/\sqrt{\log n})\) since \(\varepsilon_{1}=1/\log n\), and \(b=2(1-\eta)\), so that: \[\begin{split}\log\left(2/\sqrt{b}\right)^{\log_{b}L}&=\left(1-\frac{1}{2}\log b\right)\frac{\log L}{\log b}=\left(\frac{1}{\log b}-\frac{1}{2}\right)\log L\\ &=\left(\frac{1}{1-\log\frac{1}{1-\eta}}-\frac{1}{2}\right)\log L=\left(\frac{1}{2}+\frac{\log\frac{1}{1-\eta}}{1-\log\frac{1}{1-\eta}}\right)\log L,\end{split} \tag{60}\] so \(\left(2/\sqrt{b}\right)^{\log_{b}L}=L^{\frac{1}{2}+o(1)}\), where we used \(\frac{\log\frac{1}{1-\eta}}{1-\log\frac{1}{1-\eta}}=o(1)\), since \(\log\frac{1}{1-\eta}=o(1)\), which follows from \(\eta=o(1)\). Thus, continuing from Eq. (59), we have: \[\mathbb{E}[T_{L}]=\widetilde{O}\left(n\sqrt{L}\log^{2}(1/p)\right)L^{\frac{1}{2}+o(1)}=\widetilde{O}\left(nL^{1+o(1)}\log^{2}(1/p)\right). \tag{61}\]

We note that while our approach in Theorem 25 outperforms the simpler, non-divide-and-conquer algorithm analyzed in Eq. (51), it performs worse than the algorithm of Ref. [13] for graphs with \(L=\Omega(n^{1/2-o(1)})\).
Thus, one could run Algorithm 3 until \(O\left(n^{3/2}\right)\) queries had been made, and if a path had not yet been found, switch to the algorithm of Ref. [13].

#### 4.3.2 Path Finding in Arbitrary Graphs

For the more general case of \(st\)-\(\textsc{path}_{G}(x)\), when \(G(x)\) is not known to only have one \(st\)-path, while it is possible that an algorithm similar to Algorithm 3 would work, we have not been able to bound its running time effectively. This is because in the case of a single path, once an intermediate edge on the path is found, the longest paths from \(s\) and \(t\) to that edge must be shorter than the longest path from \(s\) to \(t\). This ensures that subproblems take less time than the original problem. With multiple paths, we no longer have that guarantee. However, we provide an alternative approach that, while not as fast as Algorithm 3, still provides an improvement over the algorithm of [13] for graphs in which all (self-avoiding) paths from \(s\) to \(t\) are short. Our approach does not make use of our path-edge sampling algorithm as a subroutine, and instead uses the path detection algorithm of Lemma 8 to decide whether there are paths through various subgraphs, and then uses that information to find each edge in a path in order from \(s\) to \(t\). In this way, we avoid the problem of subproblems being larger than the original problem, since if the longest path from \(s\) to \(t\) has length \(L\), and the first edge we find on the path is \((s,u)\), then the longest path from \(u\) to \(t\) that doesn't go through \(s\) must have length at most \(L-1\). However, we lose the advantage of a divide-and-conquer approach.

To find the first edge on a path, we use a group testing approach. We divide the neighbors of \(s\) in \(G\) into two sets, \(S_{1}\) and \(S_{2}\), and run path detection algorithms in parallel on two subgraphs of \(G(x)\): one with the edges from \(s\) removed, except those to vertices in \(S_{1}\) (that is, \(G^{-}_{\{(s,u)\in E\,:\,u\notin S_{1}\}}\)), and one with the edges from \(s\) removed, except those to vertices in \(S_{2}\). We will detect which of these subgraphs contains a path, and we will know there is a path whose first edge goes from \(s\) to a vertex in the corresponding set (\(S_{1}\) or \(S_{2}\)). Then we divide that set in half again, and repeat, until we have narrowed down our set to one vertex \(u\), which must be the first vertex after \(s\) on a path from \(s\) to \(t\). At this point we have learned the first edge on a path from \(s\) to \(t\). We then consider \(G^{-}_{s}\), which is \(G\) with vertex \(s\) removed, and recursively iterate this procedure to learn the first edge on a path from \(u\) to \(t\).

```
Input : Failure tolerance \(p>0\), oracle \(O_{x}\) for the graph \(G(x)=(V,E(x))\), \(s,t\in V\) such that there is a path from \(s\) to \(t\)
Output : With probability \(1-O(p)\), a sequence of edges whose vertices form a path from \(s\) to \(t\) in \(G(x)\)
1 \(\delta\gets p/(n^{4}\log n)\)
2 if \(s=t\) then Return \(\emptyset\) // Base case
// Finding the first edge in a path from \(s\) to \(t\)
3 \(S\leftarrow\{v:\{s,v\}\in E\}\), \(E_{s}\leftarrow\{\{s,v\}:\{s,v\}\in E\}\)
4 while \(|S|>1\) do
5 Divide \(S\) into two approximately equal, disjoint sets, \(S_{1}\) and \(S_{2}\)
6 Run in parallel the following two algorithms such that the queries implemented by each algorithm stay within 1 of each other at all times, until one outputs 1: \(\texttt{PathDetection}(O_{x},G^{-}_{\{(s,u)\in E:u\notin S_{1}\}},s,t,\delta)\) (Lemma 8) and \(\texttt{PathDetection}(O_{x},G^{-}_{\{(s,u)\in E:u\notin S_{2}\}},s,t,\delta)\) (Lemma 8); if the first algorithm outputs 1 then \(S\gets S_{1}\), else \(S\gets S_{2}\); if neither PathDetection call outputs 1 then return "Failure"
7 Return \(((s,u))\frown\texttt{GeneralPathFinder}(O_{x},G^{-}_{s},u,t,p)\), where \(u\) is the unique element of \(S\) // \(\frown\) indicates concatenation of sequences ``` **Algorithm 4**\(\texttt{GeneralPathFinder}(O_{x},G,s,t,p)\) **Theorem 26**.: _Let \(p\geq 0\), and \(G=(V,E)\) with \(s,t\in V\) be a family of \(n\)-vertex graphs, and suppose we are promised that there is a path from \(s\) to \(t\) in \(G(x)\). On input \(x\), if the longest \(st\)-path in \(G(x)\) has length \(L\) (\(L\) need not be known ahead of time), there is a quantum algorithm (Algorithm 4) that returns the edges on a path with probability \(1-O(p)\) and uses \(\widetilde{O}(nL^{3/2}\log(1/p))\) expected queries._ We note that Algorithm 4 performs worse than the algorithm of Ref. [13] for graphs with \(L>n^{1/3}\). Thus, one could run this algorithm until \(O\left(n^{3/2}\right)\) queries have been made, and if a path has not yet been found, switch to the algorithm of Ref. [13]. Proof.: We first analyze the probability of error in Algorithm 4. Over the course of the algorithm, there will be \(O(n)\) recursive calls (since each recursive call returns an edge). We bound our overall probability of error by \(O(p/n^{3})=O(p)\), by showing that the failure probability in each recursive call is \(O(p/n^{4})\). We consider a recursive call to have an error if any of the \(O(\log n)\) calls to PathDetection fails. Because of our choice of \(\delta=p/(n^{4}\log n)\), each call fails with probability \(O(p/(n^{4}\log n))\), so the probability that all calls within a given recursive call succeed is \[(1-O(p/(n^{4}\log n)))^{O(\log n)}=1-O(p/n^{4}), \tag{62}\] so the probability that at least one call in a given recursive call fails is \(O(p/n^{4})\), and hence the probability that any of the \(O(n)\) recursive calls fails is \(O(p/n^{3})\). Even in the case of a failure, the expected query complexity of the algorithm is at most \(O(n^{3}\log(1/p))\), since at most \(O(n\log n)\) calls to PathDetection are made over the course of the algorithm, each of which has expected query complexity \(O(n^{3/2}\log(n/p))\) (for _any_\(x\)). Thus, the overall contribution to the expected query complexity of Algorithm 4 in the error case is at most \(O((p/n^{3})n^{3}\log(1/p))=O(1)\). Thus, we can analyze the expected query complexity of Algorithm 4 assuming no errors occur. When the longest path length between \(s\) and \(t\) is \(L\), at least one of the pair of PathDetection subroutines that are run in parallel will have expected query complexity \(\widetilde{O}(n\sqrt{L}\log(1/\delta))\).
This is because, as long as there is not an error, the first edge in a path with length at most \(L\) must be contained in either \(S_{1}\) or \(S_{2}\), so there will be a path in one of the two parallel subroutines, and that subroutine will halt after \(\widetilde{O}(n\sqrt{L}\log(1/\delta))\) expected queries, since, for any \(G^{\prime}\), \(R_{s,t}(G^{\prime})\) is upper bounded by the length of any \(st\)-path in \(G^{\prime}\). Let \(\mathbb{E}[T_{L}]\) be the expected query complexity of Algorithm 4 when all \(st\)-paths in \(G(x)\) have length at most \(L\). Then a recurrence relation for the expected query complexity is \[\mathbb{E}[T_{L}]=\widetilde{O}(n\sqrt{L}\log(1/p))+\mathbb{E}[T_{L-1}],\qquad T_{0}=O(1). \tag{63}\] The \(\widetilde{O}(n\sqrt{L}\log(1/p))\) comes from the \(O(\log n)\) iterations of PathDetection, each of which has expected query complexity at most \(\widetilde{O}(n\sqrt{L}\log(1/\delta))=\widetilde{O}(n\sqrt{L}\log(1/p))\). Solving this recurrence, we find that \[\mathbb{E}[T_{L}]=\widetilde{O}(nL^{3/2}\log(1/p)). \tag{64}\] Finally, we prove the correctness of Algorithm 4 using induction on the length of the longest path from \(s\) to \(t\), assuming that no errors are made. For the base case, if \(L=0\) we will correctly return the path in Line 2. For the inductive case, let \(k\geq 0\), and assume GeneralPathFinder works correctly for all graphs whose longest path length from \(s\) to \(t\) is \(L\), where \(0\leq L\leq k\). Now consider a graph with \(L=k+1\). Then as long as none of the \(2\lceil\log n\rceil\) iterations of PathDetection in Line 6 fail, we will find an edge \(\{s,u\}\) on a path from \(s\) to \(t\). This is because at each iteration of Line 6, we find a set of vertices that we know contains the second vertex (first vertex after \(s\)) in a path from \(s\) to \(t\). At each iteration, the number of vertices in the set for which we have this knowledge decreases by a factor of 2, until we have a set with just one vertex, which must be the next vertex in our path after \(s\). Once we have found the first edge \(\{s,u\}\) of the path, we have a new problem of finding a \(ut\)-path on a graph with \(s\) removed. But because the longest path from \(s\) to \(t\) was at most \(L\), the longest path from \(u\) to \(t\) that does not go through \(s\) must be at most \(L-1\), so by our inductive assumption, the recursive call to GeneralPathFinder in Line 7, which finds a \(ut\)-path on the graph with vertex \(s\) removed, will be correct. Acknowledgements. We thank Jana Sotakova and Mehrdad Tahmasbi for insightful discussions about path finding via edge sampling. SK and SJ were sponsored by the Army Research Office and this work was accomplished under Grant Number W911NF-20-1-0327. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Office or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein. SJ is supported by NWO Klein project number OCENW.Klein.061, and the European Union (ERC, ASC-Q, 101040624). Views and opinions expressed are however those of the authors only and do not necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them. SJ is a CIFAR Fellow in the Quantum Information Science Program.
2307.06796
Defeating Proactive Jammers Using Deep Reinforcement Learning for Resource-Constrained IoT Networks
Traditional anti-jamming techniques like spread spectrum, adaptive power/rate control, and cognitive radio have demonstrated effectiveness in mitigating jamming attacks. However, their robustness against the growing complexity of internet-of-things (IoT) networks and diverse jamming attacks is still limited. To address these challenges, machine learning (ML)-based techniques have emerged as promising solutions. By offering adaptive and intelligent anti-jamming capabilities, ML-based approaches can effectively adapt to dynamic attack scenarios and overcome the limitations of traditional methods. In this paper, we propose a deep reinforcement learning (DRL)-based approach that utilizes state input from realistic wireless network interface cards. We train five different variants of deep Q-network (DQN) agents to mitigate the effects of jamming with the aim of identifying the most sample-efficient, lightweight, robust, and least complex agent that is tailored for power-constrained devices. The simulation results demonstrate the effectiveness of the proposed DRL-based anti-jamming approach against proactive jammers, regardless of their jamming strategy, which eliminates the need for a pattern recognition or jamming strategy detection step. Our findings present a promising solution for securing IoT networks against jamming attacks and highlight substantial opportunities for continued investigation and advancement within this field.
Abubakar Sani Ali, Shimaa Naser, Sami Muhaidat
2023-07-13T15:05:37Z
http://arxiv.org/abs/2307.06796v1
# Defeating Proactive Jammers Using Deep Reinforcement Learning for Resource-Constrained IoT Networks ###### Abstract Traditional anti-jamming techniques like spread spectrum, adaptive power/rate control, and cognitive radio have demonstrated effectiveness in mitigating jamming attacks. However, their robustness against the growing complexity of internet-of-things (IoT) networks and diverse jamming attacks is still limited. To address these challenges, machine learning (ML)-based techniques have emerged as promising solutions. By offering adaptive and intelligent anti-jamming capabilities, ML-based approaches can effectively adapt to dynamic attack scenarios and overcome the limitations of traditional methods. In this paper, we propose a deep reinforcement learning (DRL)-based approach that utilizes state input from realistic wireless network interface cards. We train five different variants of deep Q-network (DQN) agents to mitigate the effects of jamming with the aim of identifying the most sample-efficient, lightweight, robust, and least complex agent that is tailored for power-constrained devices. The simulation results demonstrate the effectiveness of the proposed DRL-based anti-jamming approach against proactive jammers, regardless of their jamming strategy, which eliminates the need for a pattern recognition or jamming strategy detection step. Our findings present a promising solution for securing IoT networks against jamming attacks and highlight substantial opportunities for continued investigation and advancement within this field. Jamming, anti-jamming, cognitive radio, deep reinforcement learning ## I Introduction Cognitive radio networks (CRNs) have emerged as a revolutionary paradigm in wireless communication, offering intelligent means to optimize the available spectrum resources through dynamic channel identification [1]. Nevertheless, the open nature of wireless communication channels exposes CRNs to potential security breaches, particularly jamming attacks, which can degrade network performance and significantly reduce the throughput [2]. Traditional jamming countermeasures, such as frequency hopping or direct sequence spread spectrum (DSSS), have inherent limitations, especially when confronted with advanced jammers that are capable of detecting and disrupting these techniques [3]. Although game-theoretical strategies have been explored to address this issue, such techniques assume impractical preconditions like a priori knowledge of the perturbation pattern and can falter when faced with rapidly changing jamming strategies [4, 5, 6]. Deep reinforcement learning (DRL), a blend of reinforcement learning and deep learning, has been spotlighted due to its adaptability to dynamic environments and ability to learn from raw data, without the need for pre-existing knowledge. In the context of anti-jamming systems, DRL has been employed in various ways in multiple works. For instance, the authors of [7] proposed a deep anti-jamming reinforcement learning algorithm (DARLA) that used raw spectrum data as the environmental state, addressing the anti-jamming problem in a dynamic environment. Similarly, the work in [8] proposed a sequential deep reinforcement learning algorithm (SDRLA) to improve anti-jamming performance. Other research has introduced wideband autonomous cognitive radios [9], transformer encoder-like Q-networks [10], and unmanned aerial vehicle (UAV) jammers modeled as partially observable Markov decision processes [11].
Some studies have also used the signal-to-interference-plus-noise ratio (SINR) to enhance anti-jamming techniques [12, 13]. However, the aforementioned studies relied on supplementary equipment or data, such as raw spectrum data or SINR, which can be energy-inefficient and difficult to acquire, rendering them unsuitable for resource-constrained internet-of-things (IoT) networks. In our prior study [14], we introduced a novel approach that uses a single vector of clear channel assessment (CCA) information as the state input. This simplifies the environmental state representation, thereby reducing the computational complexity of the neural network. Our previous work was also a departure from the approach presented in [8], as it involved a generic DRL agent capable of effectively operating within dynamic jamming pattern environments without requiring a preliminary pattern recognition process. However, despite these capabilities, the CCA-based method faces some challenges, particularly related to the information extraction from WLAN network interface cards (NICs) and its efficacy against random channel hopping jamming. In this paper, we strive to overcome these challenges by proposing an improved anti-jamming scheme. Specifically, we exploit a novel radio frequency (RF)-jamming detection testbed [15], utilize the spectrum sensing capabilities of WLAN NICs, and apply ML algorithms to detect and avoid jamming attacks. Additionally, we conduct a comprehensive investigation of different agent alternatives to optimize the anti-jamming performance in dynamic pattern jamming scenarios. ## II System Model and Formulation In this section, we describe the system, jammer, and signal models under jamming attack as illustrated in Fig. 1. We consider the UNII-1 band of the 5 GHz radio spectrum and assume that the radio environment consists of one user (a transmitter-receiver pair) against one jammer. A novel aspect of our model is the presence of an agent at the transmission end, which formulates real-time anti-jamming strategies. These strategies are then shared with the receiver through a reliable control link. We also assume that the transmitter possesses broad-band spectrum sensing capabilities [14]. For ease of analysis, we segment the continuous time into discrete time slots, assuming that both the user and the jammer operate within the same time slot. In each time slot \(t\), the user selects a frequency \(f_{T,t}\) from the range \([f_{L},f_{U}]\) for data transmission to the receiver, using power \(P_{T,t}\). Concurrently, the jammer attempts to interrupt this transmission by selecting a frequency \(f_{J,t}\) and power \(P_{J,t}\) according to a predefined jamming pattern. ### _Jammer Model_ To investigate proactive jamming attack mitigation, we adopt a range of jamming strategies to effectively counter such threats. Specifically, we employ four distinct approaches: constant, sweeping, random, and dynamic jamming techniques. In this model, we assume that the jammer jams a single frequency \(f_{J,t}\) with a varying distance \(d_{JT}\) between the jammer and transmitter and varying jamming powers \(P_{J,t}\). Given the proactive nature of the jammer, it is assumed to be unaware of the current state of the channel. In the case of the constant jamming strategy, at the beginning of a transmission session, the jammer picks one of the available channels of the RF spectrum to jam consistently.
Operating in a manner similar to the constant jammer, the combined jammer possesses the ability to disrupt multiple channels. However, it should be noted that not all channels can be jammed simultaneously by this particular type of jammer. On the other hand, in the sweeping jamming strategy, the jammer sweeps the RF spectrum starting from \(f_{L}\) (i.e. \(f_{J,t}=f_{L}\)) and gradually increases its jamming frequency until it reaches \(f_{U}\) (i.e. \(f_{J,t}=f_{U}\)). The change from one frequency to the adjacent one occurs at the beginning of each time slot. In contrast, in the random jamming strategy, the jammer randomly selects a frequency \(f_{J,t}\) from the set of the available frequencies \(\{f_{L},\cdots,f_{U}\}\) and jams at the beginning of every time slot. Finally, in the dynamic pattern jamming strategy, the jammer has the capability of selecting one of the three aforementioned jamming strategies (i.e. constant, sweeping, or random) at the beginning of each transmission session. ### _Signal Model_ The received discrete baseband signal \(r[n]\) at the receiver after matched filtering and sampling at the symbol intervals can be expressed as follows \[r\left[n\right]=\sqrt{P_{T}^{rx}}\;x\left[n\right]+\sqrt{P_{J}^{rx}}\;j\left[n\right]+w\left[n\right], \tag{1}\] where \(x[n]\) and \(j[n]\) represent the discrete-time baseband signals transmitted by the transmitter and the jammer, respectively. Furthermore, \(w[n]\) denotes the zero-mean additive white Gaussian noise (AWGN) with variance \(\sigma^{2}\). Finally, \(P_{T}^{rx}\) and \(P_{J}^{rx}\) represent the average received power from the transmit and the jamming signals, respectively, which can be written as follows \[P_{J}^{rx}=\phi^{JR}P_{J,t}, \tag{2}\] and \[P_{T}^{rx}=\phi^{TR}P_{T,t}, \tag{3}\] where \(\phi^{JR}=\gamma_{0}d_{JR}^{-\epsilon}\) and \(\phi^{TR}=\gamma_{0}d_{TR}^{-\epsilon}\) are the channel power gains of the jammer-receiver and transmitter-receiver links, respectively. Also, \(\gamma_{0}\) represents the channel power gain at a reference distance of 1m. \(d_{JR}\) and \(d_{TR}\) are the distances of the jammer-receiver and transmitter-receiver links, respectively. Finally, \(\epsilon\geq 2\) denotes the path loss exponent. ### _Problem Formulation_ The received SINR can be therefore expressed as follows \[\Theta=\frac{P^{R}}{P_{J}^{rx}+\sigma^{2}}, \tag{4}\] where \(P^{R}\) is the power received from the transmitted signal at the receiver (that is, \(P^{R}=P_{T}^{rx}\)). Consider \(\Theta_{th}\) as the SINR threshold required for successful transmission. The objective at time slot \(t\) is to maximize the normalized throughput, defined as \(\mathcal{U}(f_{T,t})=\delta(\Theta\geq\Theta_{th})\), where \(\delta(x)\) is a function that equals 1 if \(x\) is true, and 0 otherwise.

Fig. 1: System topology is composed of the transmitter, receiver, and jammer. The transmitter tries to communicate with the receiver in the presence of a jamming attack.

## III Proposed DRL-Based Approach In this section, we introduce a DRL-based anti-jamming scheme that obtains its state information by scanning the entire spectrum. ### _MDP Formulation_ We utilize the received power feature from the generated dataset to represent the state vector \(\mathbf{P_{t}}\). Specifically, the state vector is represented as \(\mathbf{P_{t}}=[p_{t,1},p_{t,2},\cdots,p_{t,N_{c}}]\), where \(p_{t,i}\) is the received power at time \(t\) for frequency \(i\). The size of the state space is \(|\mathcal{S}|=N_{c}\).
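To make the signal model and the spectral-scan state concrete, here is a minimal Python sketch of Eqs. (2)-(4) and of the state vector \(\mathbf{P_{t}}\) just described. It is illustrative only: the gain constant, distances, powers, and the quietest-channel heuristic at the end are our assumptions rather than values or logic from the paper (whose agent instead learns the channel choice), and we gate the jamming term on a channel collision, an assumption consistent with the per-channel model:

```python
import math

N_C = 8                       # number of channels: |S| = N_c
GAMMA0, EPS = 1e-3, 2.5       # reference gain gamma_0 and path-loss exponent (illustrative)

def gain(d):
    """Channel power gain gamma_0 * d^(-eps), as in Eqs. (2)-(3)."""
    return GAMMA0 * d ** (-EPS)

def state_vector(jam_channel, p_jam, d_jt, noise=1e-9):
    """Spectral-scan state P_t: received power sensed on each channel;
    only the jammed channel sees the jammer's power above the noise floor."""
    return [gain(d_jt) * p_jam + noise if ch == jam_channel else noise
            for ch in range(N_C)]

def sinr(user_channel, jam_channel, p_tx=0.1, d_tr=5.0, p_jam=0.01, d_jr=0.2,
         noise=1e-9):
    """SINR of Eq. (4); the jamming term enters only on a collision."""
    p_j_rx = gain(d_jr) * p_jam if user_channel == jam_channel else 0.0
    return gain(d_tr) * p_tx / (p_j_rx + noise)

THETA_TH = 10.0               # SINR threshold for successful transmission
P_t = state_vector(jam_channel=0, p_jam=0.01, d_jt=0.2)
best = min(range(N_C), key=lambda ch: P_t[ch])    # naive: pick the quietest channel
print(sinr(best, 0) >= THETA_TH)                  # True: the collision is avoided
```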
In our formulation, the action \(a_{t}\in\{f_{1},f_{2},\cdots,f_{N_{c}}\}\) represents the frequency selected at time slot \(t\). Similarly, the action space size is \(|\mathcal{A}|=N_{c}\). The transmitter-receiver pair aims to achieve successful transmission with a low channel switching cost \(\Gamma\). Therefore, the reward at time slot \(t\) can be expressed as \[r_{t}=\begin{cases}\mathcal{U}(f_{T,t})-\Gamma\delta(a_{t}\neq a_{t-1})&\text{if }f_{T,t}\neq f_{J,t}\\ 0&\text{if }f_{T,t}=f_{J,t}.\end{cases} \tag{5}\] The reward function presented in (5) takes into account the throughput factor and ignores the energy consumption factor. This is due to the fact that in the current anti-jamming strategy, the transmit power is fixed. Furthermore, the normalization of the reward values to 1 and 0 is valid since the considered jammer is proactive. Based on this, upon obtaining the reward \(r_{t}\), the environment transitions to the next state \(s_{t+1}\) based on a transition probability \(p(s_{t+1}|s_{t},a_{t})\). This probability represents the likelihood of transitioning from state \(s_{t}\) to state \(s_{t+1}\) given the action \(a_{t}\). The initial state is denoted by \(s_{0}\), and the terminal state, at which the agent ceases decision-making, is denoted by \(s_{T}\). The goal of the agent is to find the optimal policy, \(\pi(s)=\arg\max_{a}Q(s,a)\), that maps the state to the best action. The optimal policy is found by learning the optimal action-value function, \(Q^{*}(s,a)\), using an RL algorithm such as DRL. ### _Agent Design_ We train five different agents to determine the most suitable strategy for power-constrained devices. These agents include DQN, DQN with fixed targets, DDQN, Dueling DQN, and DDQN with prioritized replay. Each agent has a unique combination of neural network architecture, experience replay mechanism, and target network update frequency. By training and evaluating the performance of these agents, we aim to identify the most appropriate approach for power-constrained devices in effectively countering proactive jamming attacks. #### III-B1 DQN The DQN algorithm is a model-free, online, off-policy RL method in which a value-based RL agent is employed to train a Q-network that estimates and returns future rewards [16, 17]. The selection of this type of agent is motivated by the fact that our observation space is continuous, and our action space is discrete. Our DQN algorithm implementation is presented in Algorithm 1. The implemented DQN agent uses a function approximator in the form of a neural network, whose weights \(\theta_{Q}\) are updated with every iteration. The Q-network is used to determine the Q-value of the action. The Q-network comprises two hidden layers, as illustrated in Fig. 2, and a ReLU activation function \(f(x)=\max(0,x)\) is chosen [18]. The experience replay buffer \(\mathcal{D}\) stores the agent's experience, which is the transition tuple at time-step \(t\), defined as \((s_{t},a_{t},r_{t},s_{t+1})\). The stochastic gradient descent (SGD) algorithm [19] is used during training to update the weights \(\theta_{t}\) at every time-step \(t\). #### III-B2 DQN with Fixed Targets This variant of DQN updates the target network less frequently, reducing the risk of oscillations and instability during learning. The algorithm is otherwise similar to DQN; the less frequent updates are achieved by increasing the value of \(C\) (the number of steps between target network updates). The neural network architecture and other components remain unchanged from the DQN architecture.
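To illustrate the two variants just described, the following PyTorch-style sketch shows a Q-network with two hidden ReLU layers trained by SGD against a fixed target network refreshed every \(C\) steps. It is a minimal sketch under assumed sizes: the discount factor and batch size follow Table II, while the hidden width, learning rate, and \(C\) are our placeholders (the paper's actual procedure is its Algorithm 1):

```python
import random
import torch
import torch.nn as nn

N_C = 8        # number of channels: |S| = |A| = N_c
C = 100        # steps between target-network refreshes (placeholder)

class QNet(nn.Module):
    """Q-network with two hidden ReLU layers, as described above."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_C, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, N_C),
        )

    def forward(self, s):
        return self.net(s)

q, q_target = QNet(), QNet()
q_target.load_state_dict(q.state_dict())
opt = torch.optim.SGD(q.parameters(), lr=1e-3)
gamma = 0.95   # discount factor, per Table II
replay = []    # transition tuples (s, a, r, s2); actions as 0-dim long tensors

def train_step(step, batch_size=32):
    if len(replay) < batch_size:
        return
    s, a, r, s2 = map(torch.stack, zip(*random.sample(replay, batch_size)))
    with torch.no_grad():
        # fixed-targets variant: bootstrap from the slowly updated copy
        y = r + gamma * q_target(s2).max(dim=1).values
    pred = q(s).gather(1, a.unsqueeze(1)).squeeze(1)
    loss = nn.functional.mse_loss(pred, y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % C == 0:  # periodic refresh; plain DQN has no such separate copy
        q_target.load_state_dict(q.state_dict())
```

A DDQN variant would instead select the arg-max action with the online network \(q\) and evaluate it with the target copy, which is precisely the decoupling discussed next.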
#### III-B3 DDQN The Double Deep Q-Network (DDQN) is an improvement over DQN that reduces the overestimation of Q-values by using two separate networks to estimate the current and target Q-values. The neural network architecture and other components remain unchanged from the DQN architecture. #### III-B4 Dueling DQN This algorithm is similar to the DQN, but with a different neural network architecture that decouples the estimation of state values and action advantages, potentially leading to better performance and stability. To implement this, the architecture of the Q-network in Fig. 2 is modified to include two separate streams for state values and action advantages, and then these streams are combined to obtain the final Q-values. The other components remain unchanged from the DQN architecture. #### III-B5 DDQN with Prioritized Replay This approach combines DDQN with prioritized experience replay, which samples more important experiences more frequently during learning, potentially improving learning efficiency. To implement this, the uniform sampling of experiences from the replay buffer is replaced with prioritized sampling based on the absolute TD-error of each experience. Additionally, the loss function \(L_{t}(\theta_{t})\) is updated to include importance-sampling weights to correct for the bias introduced by the prioritized sampling. The neural network architecture remains unchanged from the DQN architecture.

Fig. 2: Architecture of the proposed DDQN Q-network.

Fig. 3: Overview of the training and deployment phases of the proposed DRL-based anti-jamming approach.

### _Training and Deployment of the Agent_ In this section, we detail the training and deployment of our proposed DRL-based anti-jamming approach, which aims to mitigate jamming attacks in power-constrained devices. Fig. 3 presents an overview of the training and deployment phases of the proposed DRL-based anti-jamming approach. The training phase involves the setup of the system, loading the corresponding data from the spectral scan dataset, obtaining the _received power (dBm)_ feature of each channel, and training the agents based on the reward value obtained from the selected channel. At the beginning, a system setup is made to specify the type of jammer (i.e., sweeping, random, constant, or dynamic pattern jammers), the jamming power, and the distance. Based on this setup, the corresponding data is loaded from the spectral scan dataset. Depending on the type of jammer, the _received power (dBm)_ feature of each channel is obtained. For instance, if the jammer is constant, and the jamming frequency is 5180 MHz at 20 cm with a jamming power of 10 dBm, then the dataset with the corresponding **filename** will be loaded. This ensures that the 5180 MHz frequency will have the highest received power compared to the other frequencies. Based on this state information, the agent will select a channel and receive a reward value based on the selected channel, as defined in (5). Using this reward value, the agent's network is trained and then the environment transitions to the next state. It is worth noting that this process repeats until convergence or a terminal state is reached. During the deployment phase, the trained agent is implemented within the environment it was originally trained on. However, in this phase, the agent does not undergo further training as it exploits the knowledge gained from the training phase.
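The deployment logic described here, and elaborated in the next paragraph, reduces to greedy action selection with a channel switch announcement (CSA) on every change. The sketch below is schematic under our own naming (deploy_step and the CSA hook are hypothetical stand-ins, not the authors' code):

```python
import torch

def announce_channel_switch(ch):
    # stand-in for the CSA shared with the receiver over the control link
    print(f"CSA -> switching to channel index {ch}")

def deploy_step(q_net, state, current_channel):
    """One deployment-phase step: no exploration and no training,
    just exploit the learned Q-values on the sensed spectrum state."""
    with torch.no_grad():
        next_channel = int(q_net(state).argmax())
    if next_channel != current_channel:
        announce_channel_switch(next_channel)  # switch only when the choice changes
    return next_channel
```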
Given a system setup and the current channel \(f_{T,t}\), the agent takes in the state vector, which describes the whole spectrum, as input and selects the best channel \(f_{T,t+1}\) to switch to. If the selected channel \(f_{T,t+1}\) is the same as the current channel \(f_{T,t}\), then transmission continues on \(f_{T,t}\). If \(f_{T,t+1}\neq f_{T,t}\), a channel switch announcement (CSA) is carried out, and the subsequent transmission switches to \(f_{T,t+1}\). This process keeps repeating until all data is transmitted or the terminal state is reached. ## IV Results and Discussions To evaluate the proposed DRL-based anti-jamming solution, we aim to investigate its performance under dynamic pattern jamming, where the jammer randomly selects one of three jamming patterns, namely sweep, random, and combined, at the beginning of each transmission session. This evaluation is important as our primary objective is to develop a generic anti-jamming agent capable of mitigating various jamming patterns. We perform the simulations using a custom simulator built on the dataset collected in [15]. Also, unless otherwise stated, the simulation parameters used in our study are presented in Table I. Furthermore, we tune the hyper-parameters of the proposed DRL-based anti-jamming scheme during training to achieve a good policy for the agent, as shown in Table II. Finally, we investigate the effects of the \(\Gamma\) parameter on the total throughput of the proposed framework, and we compare the results obtained by using different values of \(\Gamma\). \begin{table} \begin{tabular}{c c} \hline \hline **Parameter** & **Value** \\ \hline RF spectrum band & 5 GHz UNII-1 \\ Bandwidth of communication signal & 20 MHz \\ Bandwidth of jamming signal & 20 MHz \\ Number of channels \(N_{c}\) & 8 \\ Initial channel center frequency \(f_{T,0}\) & 5.180 GHz \\ Distance between channel frequencies & 20 MHz \\ Distance between jammer and transmitter \(d_{JT}\) & 20 cm \\ Jamming power \(P_{J,t}\) & 10 dBm \\ \hline \hline \end{tabular} \end{table} TABLE I: Simulation Parameters Fig. 4 depicts the learning performance of the DRL-based anti-jamming agents under dynamic pattern jamming, with different values of \(\Gamma\). We observe that DQN with fixed Q-targets, DDQN, and DDQN with prioritized replay achieve a mean reward of approximately 100, while Dueling DQN achieves a mean reward of around 95. However, the DQN agent only manages to obtain a mean reward of approximately 86, and this failure persists for all values of \(\Gamma\). Unlike in our prior work [14], in this work all the agents were able to learn the dynamics of the system and evade the jammer. Importantly, we note that all the trained DRL agents, except for DQN, can learn a policy to escape the dynamic pattern jamming. Moreover, we observe that for all types of jammers, the DRL agents can make intelligent channel selection decisions to evade jamming. Interestingly, the DDQN with prioritized replay achieved the most stable learning convergence across all values of \(\Gamma\). In Fig. 5, we present the normalized mean throughput of the legitimate user under various jamming patterns. We observe that, for all values of \(\Gamma\), all the evaluated agents, except DQN, have the ability to completely evade dynamic pattern jamming. Moreover, for all agents, we observe a reduction in throughput as the value of \(\Gamma\) increases, with a greater reduction for higher values of \(\Gamma\).
As seen in the case of the learning performance, the DDQN with prioritized replay achieved a consistently high throughput over all values of \(\Gamma\). The impact of \(\Gamma\) on the channel switching behavior of the agents is demonstrated in Fig. 6. It is observed that the agents switch channels 100% of the time, regardless of the values of \(\Gamma\). This indicates that in order to evade dynamic pattern jamming, the agents develop a policy that maps the states to the optimal action and ignores the jamming pattern. This leads to continuous channel switching even under values of \(\Gamma>0\). In other words, the agents choose to be penalized by the channel switching cost and experience a reduction in overall throughput instead of remaining on a single channel and losing 1/8 of their total throughput. Finally, we study the convergence times and inference speeds of the five DRL agents as shown in Table III. During training, the DQN agent demonstrated the fastest convergence speed among all the agents, with an average convergence time of 388.28 seconds. The speed of convergence and inference in DRL agents is determined by the complexity of the learning algorithm and the efficiency of the exploration strategy. DQN, with its simpler learning algorithm and efficient exploration, converges faster. On the other hand, DDQN with prioritized replay memory involves more complex computations and a more sophisticated memory management system, which slows down both the convergence and the inference speed. Overall, all the algorithms investigated showed good performance in jamming detection and avoidance. \begin{table} \begin{tabular}{c c} \hline \hline **Parameter** & **Value** \\ \hline Number of training episodes \(|\mathcal{E}|\) & 100 \\ Number of time-steps \(|\mathcal{T}|\) & 100 \\ Discount factor \(\gamma\) & 0.95 \\ Initial exploration rate \(\zeta\) & 1 \\ Exploration decay \(\delta\) & 0.005 \\ Minimum exploration rate \(\zeta_{\text{min}}\) & 0.01 \\ Experience buffer size \(\mathcal{D}\) & 10000 \\ Minimum batch size \(K\) & 32 \\ Averaging window size & 10 \\ Early termination criterion & Average reward = 90 \\ Channel switching cost \(\Gamma\) & [0, 0.05, 0.1, 0.15] \\ \hline \hline \end{tabular} \end{table} TABLE II: DRL Hyper-parameters.

Fig. 4: Learning performance of the investigated DRL-based anti-jamming agents under dynamic pattern jamming with \(\Gamma=0,0.05,0.1,0.15\).

Fig. 5: Normalized throughput performance of the DRL-based anti-jamming agent under dynamic pattern jamming.

Fig. 6: Impact of channel switching cost (\(\Gamma\)) on the DRL-based anti-jamming agent under dynamic pattern jamming.

The inference speed of the algorithms varied, with DQN being the fastest during training. Among all DRL-based approaches, DDQN with prioritized replay memory offers the best trade-off between throughput and speed. ## V Conclusions This paper investigates the intelligent anti-jamming problem within a dynamic jamming environment. In our endeavor to construct a more practical scheme, we incorporated a jamming detection testbed and jamming data acquired from actual WLAN network interface cards. Utilizing this dataset, we developed a custom simulation and introduced a DRL agent with a fully connected neural network architecture to navigate the intricate decision-making problem inherent to anti-jamming.
With our proposed scheme, the agent is capable of learning the most effective anti-jamming strategy through a continuous process of trial and error, testing various actions, and observing their environmental impact. We used simulation results from a variety of environmental settings to corroborate the effectiveness of the proposed DRL-based anti-jamming scheme. It is important to note, however, that a high-power wideband jammer leaves no room for evasion. Consequently, future research will involve creating an anti-jamming technique focused on confronting the jammer at the same frequency, as opposed to evasion or concealment.
2305.01858
De Rham cohomology for supervarieties
We study the de Rham cohomology and the Hodge to de Rham spectral sequence for supervarieties.
Alexander Polishchuk
2023-05-03T02:03:31Z
http://arxiv.org/abs/2305.01858v2
# De Rham cohomology for supervarieties ###### Abstract. We study the de Rham cohomology and the Hodge to de Rham spectral sequence for supervarieties. Partially supported by the NSF grant DMS-2001224, and within the framework of the HSE University Basic Research Program and by the Russian Academic Excellence Project '5-100'. In Sec. 2.1 we consider the split case and the case of characteristic \(2\), in Sec. 2.2--supervarieties of dimension \(n|2\), and in Sec. 2.3--an example showing that in dimension \(1|3\), the differential \(d:(\mathcal{O}_{X})_{-}\to(\Omega^{1}_{X})_{-}\) does not always induce an injection on \(H^{1}\). _Acknowledgments._ I am grateful to Vera Serganova and to Johan de Jong for useful discussions. Part of this work was done while the author visited the Simons Center for Geometry and Physics. I'd like to thank this institution for hospitality and excellent working conditions. _Conventions_. For a \(\mathbb{Z}/2\)-graded object \(X\) we denote by \(X_{+}\) and \(X_{-}\) its even and odd parts. We use the parity of differential forms such that the de Rham differential is even, so the de Rham complex can be viewed as a complex of \(\mathbb{Z}/2\)-graded sheaves. ### De Rham cohomology of a supervariety Let \(X\) be a smooth superscheme over a field \(k\). Then we have the de Rham complex \(\Omega^{\bullet}_{X}=(\Omega^{\bullet}_{X},d)\), which we view as a bounded below complex of \(k\)-modules in the Zariski topology. Therefore, we can define the de Rham cohomology of \(X\) as its hypercohomology: \[H^{\bullet}_{dR}(X):=H^{\bullet}(X,\Omega^{\bullet}_{X}).\] Note that as soon as \(X\) has nonzero odd dimension, the complex \(\Omega^{\bullet}_{X}\) is not bounded. Let \(\mathcal{N}\subset\mathcal{O}_{X}\) denote the ideal generated by odd functions, and let \(X_{0}\) denote the bosonization of \(X\), so \(\mathcal{O}_{X_{0}}=\mathcal{O}_{X}/\mathcal{N}\). We have a natural surjective map of complexes of \(k\)-vector spaces \[\Omega^{\bullet}_{X}\to\Omega^{\bullet}_{X_{0}}.\] Let us denote by \(K\) its kernel. **Theorem 1.1**.: _The complex \(K\) is acyclic. Hence, one has a natural isomorphism \(H_{dR}(X)\xrightarrow{\ \sim\ }H_{dR}(X_{0})\)._ Proof.: Let us introduce the decreasing algebra filtration \((F^{\bullet}_{\mathcal{N}})\) on \(\Omega^{\bullet}_{X}\) by letting \(F^{i}_{\mathcal{N}}\) be the ideal (with respect to the algebra structure on \(\Omega^{\bullet}_{X}\)) generated by \[(\mathcal{N}^{i},\mathcal{N}^{i-1}d\mathcal{N},\mathcal{N}^{i-2}(d\mathcal{N})^{2},\dots,(d\mathcal{N})^{i}).\] Then \(K^{\bullet}=F^{1}_{\mathcal{N}}\). We claim that for \(i\geq 1\), the complex \(\mathrm{gr}^{i}_{F}:=F^{i}_{\mathcal{N}}/F^{i+1}_{\mathcal{N}}\) is exact. Let us consider the \(\mathcal{O}_{X_{0}}\)-algebra \(\mathcal{A}:=\mathrm{gr}_{F}\,\Omega^{\bullet}_{X}\), equipped with the differential \(d_{\mathcal{A}}\) induced by \(d\). Note that \(\mathrm{gr}^{0}_{F}=\Omega^{\bullet}_{X_{0}}\).
We have an embedding of \(\mathcal{O}_{X_{0}}\)-algebras \[\mathcal{A}_{0}:=\Omega^{\bullet}_{X_{0}}\otimes\bigwedge^{\bullet}(\mathcal{N}/\mathcal{N}^{2})\hookrightarrow\mathcal{A}.\] Let us set \[\mathcal{A}_{n}:=\mathcal{A}_{0}\cdot\mathrm{gr}^{\leq n}_{F}\subset\mathcal{A}.\] Then \((\mathcal{A}_{n})\) is an increasing algebra filtration, and we have an algebra isomorphism \[\overline{\mathcal{A}}:=\bigoplus_{n}\mathcal{A}_{n}/\mathcal{A}_{n-1}\simeq\mathcal{A}_{0}\otimes S^{\bullet}(\mathcal{N}/\mathcal{N}^{2})\simeq\Omega^{\bullet}_{X_{0}}\otimes K(\mathcal{N}/\mathcal{N}^{2}),\] where \(K(\mathcal{N}/\mathcal{N}^{2}):=\bigwedge^{\bullet}(\mathcal{N}/\mathcal{N}^{2})\otimes S^{\bullet}(\mathcal{N}/\mathcal{N}^{2})\). Furthermore, the differential \(d_{\mathcal{A}}\) sends \(\mathcal{A}_{n}\) to \(\mathcal{A}_{n+1}\), and the induced differential \(\mathcal{A}_{n}/\mathcal{A}_{n-1}\to\mathcal{A}_{n+1}/\mathcal{A}_{n}\) on \(\overline{\mathcal{A}}\) is induced by the Koszul differential on \(K(\mathcal{N}/\mathcal{N}^{2})\). Finally, we observe that \(\bigoplus_{n}(\mathcal{A}_{n}\cap\mathrm{gr}^{\geq 1}_{F})/(\mathcal{A}_{n-1}\cap\mathrm{gr}^{\geq 1}_{F})\) gets identified with \(\Omega^{\bullet}_{X_{0}}\otimes K_{\geq 1}(\mathcal{N}/\mathcal{N}^{2})\), where \(K_{\geq 1}(\mathcal{N}/\mathcal{N}^{2})\subset K(\mathcal{N}/\mathcal{N}^{2})\) is the subcomplex of elements of weight \(\geq 1\) (i.e., the augmentation ideal). Now our assertion follows from the exactness of \(K_{\geq 1}(\mathcal{N}/\mathcal{N}^{2})\) (in fact, it is homotopic to zero). _Remark 1.2_.: Note that Theorem 1.1 holds in any characteristic. In the complex analytic context (and hence, in characteristic zero) Theorem 1.1 can also be deduced from the fact that \(\Omega^{\bullet}_{X}\) is a resolution of the constant sheaf on \(X\). ### Relative de Rham cohomology, actions of supergroups and the Gauss-Manin connection More generally, if \(f:X\to S\) is a smooth proper morphism of superschemes then we have the relative de Rham complex \(\Omega^{\bullet}_{X/S}\), and we can form the corresponding relative de Rham cohomology sheaves of \(\mathcal{O}\)-modules on \(S\), \[H^{i}_{dR}(X/S):=\underline{H}^{i}Rf_{*}(\Omega^{\bullet}_{X/S}).\] Assume that \(S\) is Noetherian. Since the sheaves \(R^{q}f_{*}(\Omega^{p}_{X/S})\) are coherent, it follows that the \(H^{i}_{dR}(X/S)\) are coherent sheaves. It is easy to see that the formation of \(H^{i}_{dR}(X/S)\) is compatible with flat base changes \(S^{\prime}\to S\). In particular, if we start with \(X\), proper and smooth over the ground field \(k\), then for any Noetherian commutative \(k\)-superalgebra \(A\), we can consider the base change \(X_{A}\to\operatorname{Spec}(A)\), and we have natural isomorphisms \[H^{i}_{dR}(X)\otimes A\simeq H^{i}_{dR}(X_{A}/A).\] This shows that if an algebraic \(k\)-supergroup \(G\) acts on \(X\) then the \(H^{i}_{dR}(X)\) have a natural structure of \(G\)-representations. Indeed, for \(A\) as above, the group \(G(A)\) acts on \(X_{A}\) and this gives its action on \(H^{i}_{dR}(X_{A}/A)\simeq H^{i}_{dR}(X)\otimes A\). **Proposition 1.3**.: _Assume the characteristic is zero. If the underlying usual algebraic group \(G_{0}\) is connected then \(H^{i}_{dR}(X)\) is trivial as a \(G\)-representation._ Proof.: It suffices to show that the corresponding Lie superalgebra acts trivially.
Thus, it is enough to show that for every global vector field \(v\) (even or odd), the automorphism \(f\mapsto f+v(f)\epsilon\) of \(X_{D}\), where \(D=k[\epsilon]/(\epsilon^{2})\) (\(\epsilon\) is either even or odd), induces the trivial automorphism of \(H^{i}_{dR}(X_{D}/D)\). But this immediately follows from the fact that the corresponding automorphism of the relative de Rham complex \(\Omega^{\bullet}_{X_{D}/D}\), given by \(\eta\mapsto\eta+L_{v}(\eta)\epsilon\) (where \(L_{v}\) is the Lie derivative), is homotopic to the identity via the homotopy \(\eta\mapsto i_{v}(\eta)\) (the contraction by \(v\)). In the case when \(f:X\to S\) is a smooth proper morphism, where \(S\) is smooth over a field \(k\), similarly to the classical case (see e.g., [7, Sec. IV]) one can construct the _Gauss-Manin connection_ on \(H^{i}_{dR}(X/S)\). Namely, consider the filtration \(F^{i}\) on the absolute de Rham complex \(\Omega^{\bullet}_{X}\), where \(F^{i}\) is the image of \(f^{*}\Omega^{i}_{S}\otimes\Omega^{\bullet-i}_{X}\to\Omega^{\bullet}_{X}\). Then we have natural identifications \[\operatorname{gr}^{i}_{F}\Omega^{\bullet}_{X}\simeq f^{*}\Omega^{i}_{S}\otimes\Omega^{\bullet}_{X/S}[-i].\] The connection on \(H^{i}_{dR}(X/S)\) is defined as the connecting homomorphism \[\nabla:H^{i}_{dR}(X/S)\simeq R^{i}f_{*}\operatorname{gr}^{0}_{F}\to R^{i+1}f_{*}\operatorname{gr}^{1}_{F}\simeq\Omega^{1}_{S}\otimes H^{i}_{dR}(X/S).\] Its integrability is proved by considering the entire page of the corresponding spectral sequence. As in the classical case, this implies that for smooth \(S\), the coherent sheaves \(H^{i}_{dR}(X/S)\) on \(S\) are in fact locally free (using [9, Sec. 2.2.1]), and that the formation of the \(H^{i}_{dR}(X/S)\) is compatible with arbitrary base change. Note that the sheaves \(R^{q}f_{*}\Omega^{p}_{X}\) are not necessarily locally free (they are in the even case, in characteristic zero): for example, there exist families of supercurves \(f:X\to S\) such that \(R^{1}f_{*}\mathcal{O}_{X}\) is not locally free (see [5, Sec. 3.3]). ### Spectral sequence: general observations From now on we assume that \(X\) is proper and smooth over the ground field \(k\). As in the even case, we associate with the complex of sheaves \(\Omega^{\bullet}_{X}\) the spectral sequence \[E^{pq}_{1}(X)=H^{q}(X,\Omega^{p}_{X})\implies H^{p+q}_{dR}(X),\] to which we refer as the _Hodge to de Rham_ spectral sequence. By Theorem 1.1, its limit \(H^{\bullet}_{dR}(X)\) is identified with \(H^{\bullet}_{dR}(X_{0})\). In the even case, assuming that the characteristic is zero, the Hodge to de Rham spectral sequence is known to degenerate at the \(E_{1}\) page. There is not much we can say about this spectral sequence for general supervarieties, even in characteristic zero. Already the example of \(X\) of dimension \(0|n\) shows that it does not have to degenerate at \(E_{1}\). We will prove that it degenerates at \(E_{2}\) if \(X\) is split or has dimension \(n|2\) (see Theorems 2.1 and 2.4). We will also show that there exist \(X\) of dimension \(1|3\) such that the Hodge to de Rham spectral sequence of \(X\) does not degenerate at \(E_{2}\) (see Theorem 2.15). At present, we do not know an example of \(X\) in characteristic zero such that \(E_{2}(X)\) is not finite-dimensional. **Proposition 1.4**.: _We always have an isomorphism \(E_{2}^{00}(X)\simeq H^{0}_{dR}(X_{0})\) and an injection_ \[E_{2}^{10}(X)\hookrightarrow H^{0}(\Omega^{1}_{X_{0}}).
\tag{1.1}\] Proof.: The first assertion is clear since the spectral sequence degenerates at \(E_{2}^{00}\). It also degenerates at \(E_{2}^{10}\), so we have an embedding \(E_{2}^{10}(X)\to H^{1}_{dR}(X)\simeq H^{1}_{dR}(X_{0})\). This map factors as the composition \[E_{2}^{10}(X)\to H^{0}(\Omega^{1}_{X_{0}})\to H^{1}_{dR}(X_{0}),\] which implies the claimed injection. For any morphism \(f:X\to Y\) between smooth proper varieties, the pull-back gives a morphism of spectral sequences \[f^{*}:E_{n}^{pq}(Y)\to E_{n}^{pq}(X).\] Applying this to the embedding \(i:X_{0}\to X\), we get a morphism of spectral sequences \[i^{*}:E_{n}^{pq}(X)\to E_{n}^{pq}(X_{0}).\] A curious fact is that although the morphism \(H_{dR}(X)\to H_{dR}(X_{0})\) is always an isomorphism, if \(X\) is not projected, the morphism \(E_{\infty}^{pq}(X)\to E_{\infty}^{pq}(X_{0})\) is not necessarily an isomorphism (for example, this happens for \(X=G(1|1,2|2)\), see Example 2.13 below). The next result shows that the distribution of dimensions of \(E_{\infty}^{pq}(X)\) for constant \(p+q\) is skewed towards smaller \(p\) compared to \(E_{\infty}^{pq}(X_{0})\). **Proposition 1.5**.: _For every \(m\geq 0\), \(p\geq 0\),_ \[\sum_{i\geq p}\dim E_{\infty}^{i,m-i}(X)\leq\sum_{i\geq p}h^{i,m-i}(X_{0})\] _(and this becomes an equality for \(p=0\)). Furthermore, for every \(m\), the natural map_ \[E_{\infty}^{0,m}(X)\to E_{\infty}^{0,m}(X_{0})\] _is surjective._ Proof.: The first inequality follows immediately from the fact that the filtration \((F_{p}(X))\) on \(H^{m}_{dR}(X)\) induced by the Hodge to de Rham spectral sequence satisfies \[F_{p}(X)\subset F_{p}(X_{0}).\] The second assertion also follows, as we can identify this map with \[H^{m}_{dR}(X)/F_{1}H^{m}_{dR}(X)\to H^{m}_{dR}(X_{0})/F_{1}H^{m}_{dR}(X_{0}).\] **Corollary 1.6**.: _Assume the characteristic is zero. Then for every \(i\geq 0\), the composition_ \[E_{\infty}^{0,i}(X)\to H^{i}(X,\mathcal{O}_{X})\to H^{i}(X_{0},\mathcal{O}_{X_{0}})\] _is surjective._ _Remark 1.7_.: Similarly, in characteristic zero, for any \(i\geq 0\) and any \(m\geq 0\), the natural map of hypercohomology of truncated de Rham complexes, \[H^{i}(X,[\mathcal{O}_{X}\to\Omega^{1}_{X}\to\ldots\to\Omega^{m}_{X}])\to H^{i}(X_{0},[\mathcal{O}_{X_{0}}\to\Omega^{1}_{X_{0}}\to\ldots\to\Omega^{m}_{X_{0}}])\] is surjective. **Proposition 1.8**.: _Assume that \(X\) is projected. Then for every \(i\geq 1\), the natural maps \(E_{i}^{pq}(X)\to H^{q}(\Omega_{X_{0}}^{p})\) are surjective, and they induce isomorphisms_ \[E_{\infty}^{pq}(X)\xrightarrow{\ \sim\ }H^{q}(\Omega^{p}_{X_{0}}).\] _In particular, \(E_{2}^{10}(X)\simeq H^{0}(\Omega_{X_{0}}^{1})\)._ Proof.: Let \(i:X_{0}\to X\) denote the embedding and let \(p:X\to X_{0}\) be a projection (so \(p\circ i=\operatorname{id}_{X_{0}}\)). Then we have morphisms of spectral sequences \[E_{n}^{pq}(X_{0})\xrightarrow{p^{*}}E_{n}^{pq}(X)\xrightarrow{i^{*}}E_{n}^{pq}(X_{0})\] with the composition equal to the identity. The surjectivity immediately follows from this. The fact that we get an isomorphism on the infinite page follows from the equality of dimensions \(\dim E_{\infty}(X)=\sum_{pq}h^{q}(\Omega_{X_{0}}^{p})\). **Proposition 1.9**.: _(i) Let \(L\) be an even line bundle on \(X\).
Then there exists a morphism of spectral sequences_ \[c_{1}(L)\cup\cdot:E_{n}^{pq}(X)\to E_{n}^{p+1,q+1}(X),\] _such that the map on \(E_{1}\) is given by the cup product with \(c_{1}(L)\in H^{1}(\Omega_{X}^{1})\), compatible with the cup product by \(c_{1}(L)_{dR}\in H^{2}_{dR}(X)\) on the limit \(H^{\bullet}_{dR}(X)\)._ _(ii) Assume in addition that the characteristic is zero and_ \(X\) _admits an ample line bundle. Let_ \(d=\dim X_{0}\)_. Then for any_ \(i\geq j\geq 1\)_, and any_ \(n\geq 1\)_, we have for_ \(p+q=d-i\)_,_ \[\operatorname{rk}(E_{n}^{pq}\to H^{q}(\Omega_{X_{0}}^{p}))\leq\operatorname{rk}(E_{n}^{p+j,q+j}\to H^{q+j}(\Omega_{X_{0}}^{p+j})).\] Proof.: (i) One can realize \(c_{1}(L)\) by a Cech cocycle \(d\log(f_{ij})\) with values in \(\Omega^{1,cl}\) (the sheaf of closed \(1\)-forms), where \((f_{ij})\) are transition functions for \(L\), and use the multiplication by this cocycle on the Cech complex of \(\Omega^{\bullet}\). (ii) This follows from (i) together with the fact that the multiplication map \[c_{1}(L)^{j}\cup\cdot:H^{q}(\Omega_{X_{0}}^{p})\to H^{q+j}(\Omega_{X_{0}}^{p+j})\] is injective under our assumptions. _Remark 1.10_.: It is natural to ask whether the approach of Deligne-Illusie via the reduction to finite characteristic (see [4]) sheds any light on the Hodge to de Rham spectral sequence in the super case. Namely, for \(X/k\) in characteristic \(p\), the Frobenius kills odd functions, so it can be viewed as a morphism \(F:X\to X_{0}\). Thus, \(F_{*}\Omega_{X}^{\bullet}\) becomes a complex of \(\mathcal{O}_{X_{0}}\)-modules. Now it is easy to see that the analog of the Cartier isomorphism (given by the same formulas) is \[C^{-1}:\Omega_{X_{0}}^{i}\xrightarrow{\ \sim\ }\underline{H}^{i}(F_{*}\Omega_{X}^{\bullet}).\] The same method as in [4] shows that if the characteristic is \(>\dim X_{0}\) and \(X\) lifts to characteristic zero, then the second spectral sequence for the hypercohomology of \(F_{*}\Omega_{X}^{\bullet}\) (starting from \(E_{2}=\bigoplus H^{q}(\underline{H}^{i}(F_{*}\Omega_{X}^{\bullet}))\)) degenerates at \(E_{2}\). This agrees with Theorem 1.1 but does not give any additional information on the Hodge to de Rham spectral sequence for \(X\) (because in the Cartier isomorphism \(\Omega_{X}^{i}\) gets replaced by \(\Omega_{X_{0}}^{i}\)). ### A result on even nilpotent extensions Corollary 1.6 has the following classical counterpart (for usual schemes), which does not seem to be widely known. **Theorem 1.11**.: _Let \(Y\) be a proper scheme over a field \(k\) of characteristic zero, and let \(Y_{0}\) be the corresponding reduced scheme. Assume that \(Y_{0}\) is smooth over \(k\). Then for any \(i\) the natural map \(H^{i}(Y,\mathcal{O}_{Y})\to H^{i}(Y_{0},\mathcal{O}_{Y_{0}})\) is surjective._ Proof.: The map in question fits into a commutative square of natural maps, with top row \(H^{i}(Y_{inf},\mathcal{O}_{Y})\to H^{i}((Y_{0})_{inf},\mathcal{O}_{Y_{0}})\), bottom row \(H^{i}(Y,\mathcal{O}_{Y})\to H^{i}(Y_{0},\mathcal{O}_{Y_{0}})\), and vertical arrows the canonical maps, where in the first line we consider cohomology for the infinitesimal sites (see [6]). It is known that the top horizontal arrow is an isomorphism (see [6, Sec. 5.3]). Also, it is well known that \(H^{i}((Y_{0})_{inf},\mathcal{O}_{Y_{0}})\) is isomorphic to the de Rham cohomology of \(Y_{0}\), so by the Hodge-to-de Rham degeneration for \(Y_{0}\), the right vertical arrow is surjective. This implies that the composed map \(H^{i}(Y_{inf},\mathcal{O}_{Y})\to H^{i}(Y_{0},\mathcal{O}_{Y_{0}})\) is surjective. Since it factors through \(H^{i}(Y,\mathcal{O}_{Y})\), the assertion follows.
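For illustration, here is a minimal worked example (under the simplest possible assumptions, not taken from the paper) of the failure of \(E_{1}\)-degeneration mentioned in Sec. 1.3: take \(X\) of dimension \(0|1\), so \(\mathcal{O}_{X}=k[\theta]\) with \(\theta\) odd. Since \(d\theta\) is even, \(\Omega^{p}_{X}=\mathcal{O}_{X}(d\theta)^{p}\) is \(2\)-dimensional over \(k\), and the de Rham complex is \[k\oplus k\theta\xrightarrow{\ d\ }(k\oplus k\theta)\,d\theta\xrightarrow{\ d\ }(k\oplus k\theta)\,(d\theta)^{2}\xrightarrow{\ d\ }\cdots,\qquad d\big(\theta(d\theta)^{p}\big)=(d\theta)^{p+1}.\] Each differential \(d_{1}:E_{1}^{p,0}=H^{0}(\Omega^{p}_{X})\to H^{0}(\Omega^{p+1}_{X})\) has rank one, so the spectral sequence does not degenerate at \(E_{1}\); on the next page one finds \(E_{2}^{0,0}=k\) and \(E_{2}^{p,0}=\ker d_{1}/\operatorname{im}d_{1}=0\) for \(p\geq 1\), recovering \(H_{dR}(X)\simeq H_{dR}(X_{0})=k\), in accordance with Theorem 1.1.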
## 2. Some computations ### Using a special derivation in the split case and in characteristic \(2\) **Theorem 2.1**.: _Assume \(X\) is split, and the ground field \(k\) has characteristic zero. Then the Hodge to de Rham spectral sequence of \(X\) degenerates at \(E_{2}\) and there are natural isomorphisms \(E_{2}^{pq}(X)\simeq H^{q}(X_{0},\Omega^{p}_{X_{0}})\)._ Proof.: Let \(\mathcal{O}_{X}=\bigwedge^{\bullet}(E)\), where \(E\) is a bundle over \(X_{0}\). Then we have an action of \(\mathbb{G}_{m}\) on \(X\), such that \(\mathcal{O}_{X_{0}}\) is \(\mathbb{G}_{m}\)-invariant, and \(E\) has weight \(1\). Let \(\xi\) be the corresponding global (even) vector field on \(X\). In local coordinates \((x_{\bullet},\theta_{\bullet})\) one has \(\xi=\sum_{i}\theta_{i}\partial_{\theta_{i}}\). Now we claim that for each \(n\geq 0\), one has \[\Omega_{X}^{n}=\bigoplus_{i\geq 0}\Omega_{i}^{n},\] where \(\Omega_{i}^{n}\subset\Omega_{X}^{n}\) is the subsheaf of \(\eta\) such that \(L_{\xi}(\eta)=i\eta\), where \(L_{\xi}\) is the Lie derivative with respect to \(\xi\). Indeed, this is a local statement, which is clear in local coordinates. Furthermore, for each \(i\geq 0\), \(\Omega_{i}^{\bullet}\) is preserved by the de Rham differential (since \(dL_{\xi}=L_{\xi}d\)), and \[\Omega_{0}^{\bullet}\simeq\Omega_{X_{0}}^{\bullet}.\] Finally, the contraction operator \(i_{\xi}\) gives a homotopy from \(i\cdot\mathrm{id}\) to \(0\) on \(\Omega_{i}^{\bullet}\). Hence, each complex \(\Omega_{i}^{\bullet}\) for \(i>0\) is contractible. This implies that the \(E_{1}\) page of the spectral sequence associated with each \(\Omega_{i}^{\bullet}\) for \(i>0\) is also contractible, so the \(E_{2}\) page corresponding to each \(\Omega_{i}^{\bullet}\) with \(i>0\) is zero. This implies our assertion. _Remark 2.2_.: When \(X\) varies in a flat family, there is no semicontinuity for the spaces \(E_{2}^{p,q}(X)\) (since they are obtained by taking cohomology twice), so the split case does not provide any nontrivial conclusions for the behavior of \(E_{2}^{p,q}(X)\) for general \(X\). For example, it is possible that for some supervariety \(X\) one has \(E_{2}^{p,q}(\operatorname{gr}X)=0\) for some \((p,q)\), while \(E_{2}^{p,q}(X)\neq 0\) (where \(\operatorname{gr}X\) is the split supervariety associated with \(X\)), see Example 2.13 below. When the characteristic of the ground field is \(2\), the vector field \(\xi=\sum_{i}\theta_{i}\partial_{\theta_{i}}\) used in Theorem 2.1 does not depend on coordinates. Namely, it corresponds to the derivation \(\xi\) such that \(\xi(f)=0\) for even \(f\) and \(\xi(\eta)=\eta\) for odd \(\eta\). This immediately leads to the following result. **Proposition 2.3**.: _Assume that the ground field \(k\) has characteristic \(2\). Then the odd part of the de Rham complex \((\Omega_{X}^{\bullet})_{-}\) is homotopic to zero. Hence, \(E_{2}(X)_{-}=0\)._ Proof.: The homotopy from \(\mathrm{id}\) to \(0\) on \((\Omega_{X}^{\bullet})_{-}\) is given by the contraction operator \(i_{\xi}\). ### Odd dimension \(2\) In this section we consider the case when \(X\) is of dimension \(n|2\). We assume that the characteristic of \(k\) is zero. For each \(p,q\), let us set \[\delta_{pq}(X):=\dim\operatorname{coker}(H^{q}(\Omega_{X}^{p})\to H^{q}(\Omega_{X_{0}}^{p})).\] Note that by Corollary 1.6, we have \(\delta_{0q}=0\). **Theorem 2.4**.: _Assume that \(X\) has dimension \(n|2\), and the characteristic of \(k\) is zero._
Then (i) one has \((E_{2}^{pq})_{-}=0\); (ii) the Hodge to de Rham spectral sequence of \(X\) degenerates at \(E_{2}\); (iii) one has_ \[\dim E_{2}^{pq}=h^{pq}(X_{0})+\delta_{p+1,q-1}(X)-\delta_{p,q}(X).\] The proof will be based on the following result, where we use the filtration \(F^{\bullet}\Omega_{X}^{\bullet}\) introduced in the proof of Theorem 1.1. **Proposition 2.5**.: _Assume that \(X\) has dimension \(n|2\), and the characteristic of \(k\) is zero. Then the complexes \((\Omega_{X}^{\bullet})_{-}\) and \((F^{2}\Omega_{X}^{\bullet})_{+}\) are homotopic to zero._ **Lemma 2.6**.: _Assume we have a commutative diagram with exact rows in some abelian category,_ _Assume that \(f_{1}\) and \(f_{3}\) are split injections and the second row is split exact, then \(f_{2}\) is also a split injection._ The proof is straightforward and is left to the reader. Below we will write \((\Omega_{X}^{\bullet})_{\pm}=\Omega_{\pm}^{\bullet}\) for brevity. For \(k\geq 2\), let us set \[G^{k}\Omega_{\pm}^{n}:=(\mathcal{N}^{2}F^{k-2}\Omega^{n})_{\pm}+F^{k+2}\Omega_ {\pm}^{n},\] where the parity of \(k\) is the same as the parity of the space. For \(k=1\), we set \(G^{1}\Omega_{-}^{n}=F^{3}\Omega_{-}^{n}\). **Lemma 2.7**.: _(i) For every \(k\geq 1\) and every \(n\), the following sequence is exact:_ \[0\to G^{k}\Omega_{\pm}^{n}/d(F^{k+2}\Omega_{\pm}^{n-1})\to F^{k}\Omega_{\pm}^{ n}/d(F^{k}\Omega_{\pm}^{n-1})\xrightarrow{\delta}\Omega_{X_{0}}^{n-k+1}\otimes S ^{k}(\mathcal{N}/\mathcal{N}^{2})\to 0, \tag{2.1}\] _where we use \(+\) for even \(k\) (resp., \(-\) for odd \(k\)), and \(\delta\) is given by_ \[\delta(\alpha_{n-k+1}\cdot n_{1}dn_{2}\cdot\ldots\cdot dn_{k})=\alpha_{n-k+1} |_{X_{0}}\otimes(\overline{n}_{1}\cdot\overline{n}_{2}\cdot\ldots\cdot \overline{n}_{k}),\] _with \(\alpha_{n-k+1}\in\Omega^{n-k+1}\). (ii) For every \(k\geq 1\), the following map induced by \(d\) is a split injection:_ \[F^{k}\Omega_{\pm}^{n}/d(F^{k}\Omega_{\pm}^{n-1})\to F^{k}\Omega_{\pm}^{n+1}/ \mathcal{N}^{2}\cdot F^{k-2}\Omega_{\pm}^{n+1}, \tag{2.2}\] _where for \(k=1\) we replace \(\mathcal{N}^{2}F^{-1}\Omega_{-}^{n+1}\) by zero, and we use \(+\) for even \(k\) (resp., \(-\) for odd \(k\)),_ Proof.: We assume that \(k\geq 1\) is odd. The case of even \(k\geq 2\) is completely analogous. (i) We use the induction on \(n\geq-1\) (for \(n=-1\) the assertion is clear). Let us write \(S^{i}=S^{i}(\mathcal{N}/\mathcal{N}^{2})\) for brevity. Consider the commutative diagram with exact rows (where the first row is exact by the induction assumption) It shows that the natural embedding induces an isomorphism (2.3) Next, we have a commutative diagram with exact rows, where \(\kappa\) is the Koszul differential. Since \(\kappa\) is injective, we deduce exactness of the sequence of the cokernels of the vertical maps. Taking into account the isomorphism (2.3), we get exactness of (2.1). (ii) We use the decreasing induction on \(k\). Note that \(F^{n+3}\Omega_{-}^{n}=0\), so the base of the induction is clear. Assume first that \(k\geq 3\). Consider the commutative diagram with exact rows, (2.4) Using Lemma 2.6 (with \(f_{3}=\operatorname{id}\)), we see that it is enough to prove that the left vertical arrow is a split injection. But the same map appears as the middle vertical arrow in the commutative diagram with exact rows, (2.5) The leftmost vertical arrow is split injective by the induction assumption, and the rightmost vertical arrow is split injective since the Koszul differential \(\kappa\) is so. 
By Lemma 2.6, it remains to prove that the second row is split exact. But the splitting is induced by the well-defined map \[\begin{split}&\Omega_{X_{0}}^{n-k+2}\otimes\mathcal{N}/\mathcal{N}^{2}\otimes S^{k-1}\to\mathcal{N}\cdot F^{k-1}\Omega_{+}^{n}/\mathcal{N}^{2}F^{k}\Omega_{-}^{n}\,;\\ & df_{1}\ldots df_{n-k+2}\otimes n_{1}\otimes(n_{2}\ldots n_{k})\mapsto d\widetilde{f}_{1}\ldots d\widetilde{f}_{n-k+2}\,n_{1}dn_{2}\ldots dn_{k},\end{split}\] where \(\widetilde{f}_{i}\in(\mathcal{O}_{X})_{+}\) are liftings of \(f_{i}\in\mathcal{O}_{X_{0}}\). In the case \(k=1\), we modify the above argument as follows: we replace \(\mathcal{N}^{2}F^{-1}\Omega_{-}^{n+1}\) by zero in the diagram (2.4) and replace the diagram (2.5) by the analogous one. As before, we see that the bottom row is split exact, and the leftmost vertical arrow is split injective by the induction assumption. Hence, the middle vertical arrow is a split injection, which implies the assertion. Proof of Proposition 2.5.: For \(k=1\) (resp., \(k=2\)), Lemma 2.7(ii) implies that the map \(\Omega_{-}^{n}/d\Omega_{-}^{n-1}\to\Omega_{-}^{n+1}\) (resp., \(F^{2}\Omega_{+}^{n}/dF^{2}\Omega_{+}^{n-1}\to F^{2}\Omega_{+}^{n+1}\)) is a split injection, which immediately gives the result. Proof of Theorem 2.4.: Part (i) immediately follows from Proposition 2.5. To prove parts (ii) and (iii), it is enough to prove the inequalities \[\dim E_{2}^{pq}=\dim(E_{2}^{pq})_{+}\leq h^{pq}(X_{0})+\delta_{p+1,q-1}(X)-\delta_{p,q}(X).\] Indeed, summing up these inequalities, we would get \[\dim E_{2}(X)\leq\dim H_{dR}(X_{0})=\dim E_{\infty}(X),\] hence the spectral sequence degenerates at \(E_{2}\) and the above inequalities are in fact equalities. Since the natural map \((E_{2}^{pq})_{+}\to H^{q}(\Omega_{X_{0}}^{p})\) factors through a quotient of \(H^{q}(\Omega_{X}^{p})\), it has corank \(\geq\delta_{p,q}(X)\). Thus, it suffices to show that \[\dim\ker((E_{2}^{pq})_{+}\to H^{q}(\Omega_{X_{0}}^{p}))\leq\delta_{p+1,q-1}(X).\] The exact sequence \[0\to F^{2}(\Omega_{X}^{p})_{+}\to(\Omega_{X}^{p})_{+}\to\Omega_{X_{0}}^{p}\to 0\] gives rise to the surjection \[K\coloneqq\ker(H^{q}(F^{2}(\Omega_{X}^{p})_{+})\to H^{q}((\Omega_{X}^{p+1})_{+}))/\operatorname{im}(H^{q}((F^{2}\Omega_{X}^{p-1})_{+})\to H^{q}(F^{2}(\Omega_{X}^{p})_{+}))\to\ker((E_{2}^{pq})_{+}\to H^{q}(\Omega_{X_{0}}^{p})).\] Let us consider the morphism of exact sequences. Using exactness of the sequence \[H^{q}(F^{2}(\Omega_{X}^{p-1})_{+})\to H^{q}(F^{2}(\Omega_{X}^{p})_{+})\to H^{q}(F^{2}(\Omega_{X}^{p+1})_{+}),\] which follows from Proposition 2.5, we get an embedding \[K\hookrightarrow\ker(H^{q}(F^{2}(\Omega_{X}^{p+1})_{+})\to H^{q}((\Omega_{X}^{p+1})_{+}))\simeq\operatorname{coker}(H^{q-1}((\Omega_{X}^{p+1})_{+})\to H^{q-1}(\Omega_{X_{0}}^{p+1})),\] so \(\dim K\leq\delta_{p+1,q-1}(X)\). **Corollary 2.8**.: _If \(X\) has dimension \(n|2\) then \(E_{2}^{pq}=0\) for \(p>n\) (and clearly for \(q>n\))._ Our next goal is to get a lower bound on \(\delta_{1q}(X)\) (see Proposition 2.11 below). Set \(L\coloneqq\mathcal{N}^{2}\) viewed as a line bundle on \(X_{0}\). Let \(Y\) denote the even scheme with the same underlying space as \(X_{0}\) and with the structure sheaf \(\mathcal{O}_{Y}=(\mathcal{O}_{X})_{+}\). Then \(\mathcal{O}_{Y}\) is a square zero extension of \(\mathcal{O}_{X_{0}}\) by \(L\), so it corresponds to a class \(e\in H^{1}(T_{X_{0}}\otimes L)\).
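For later use (in the proof of Lemma 2.9 below), recall the standard local description of such a square zero extension: over an open set \(U\) where a ring splitting \(\sigma:\mathcal{O}_{X_{0}}|_{U}\to\mathcal{O}_{Y}|_{U}\) exists, one can identify \[\mathcal{O}_{Y}|_{U}\simeq\mathcal{O}_{X_{0}}|_{U}\oplus L|_{U},\qquad(f,l)\cdot(f^{\prime},l^{\prime})=(ff^{\prime},\,fl^{\prime}+f^{\prime}l),\] and the class \(e\) measures exactly the failure of such local identifications to glue.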
**Lemma 2.9**.: _One has a natural exact sequence of coherent sheaves on \(X_{0}\),_ \[0\to L\xrightarrow{\iota}\Omega_{Y}^{1}|_{X_{0}}\to\Omega_{X_{0}}^{1}\to 0 \tag{2.6}\] _and the corresponding extension class corresponds to \(e\) under the natural isomorphism \(\operatorname{Ext}^{1}(\Omega_{X_{0}}^{1},L)\simeq H^{1}(T_{X_{0}}\otimes L)\)._ Proof.: This exact sequence is standard: its exactness on the left follows from the fact that locally \(Y\) is a Cartier divisor in \(X_{0}\times\mathbb{A}^{1}\). Let \((U_{i})\) be an open affine covering of \(X_{0}\) and let \(\sigma_{i}:\mathcal{O}_{X_{0}}\to\mathcal{O}_{Y}\) be splittings defined over \(U_{i}\), so that the class of \(e\) is represented by the cocycle \(v_{ij}\in T_{X_{0}}\otimes L(U_{ij})\), such that \[(\sigma_{j}-\sigma_{i})(f)=\langle v_{ij},df\rangle.\] We have the induced splittings over \(U_{i}\), \[d\sigma_{i}:\Omega^{1}_{X_{0}}\to\Omega^{1}_{Y}|_{X_{0}}:fdg\mapsto fd\sigma_{i}(g)|_{X_{0}}.\] Now the cocycle \(d\sigma_{j}-d\sigma_{i}\in(\Omega^{1})^{\vee}\otimes L(U_{ij})\) represents the class of the extension (2.6). It remains to observe that \[\langle d\sigma_{j}-d\sigma_{i},dg\rangle=d(\sigma_{j}(g)-\sigma_{i}(g))=\iota\langle v_{ij},dg\rangle,\] so this cocycle coincides with \(v_{ij}\). **Lemma 2.10**.: _(i) One has a canonical decomposition_ \[(\Omega^{1}_{X})_{+}/(\mathcal{N}^{2}\otimes\Omega^{1}_{X_{0}})\simeq\Omega^{1}_{Y}|_{X_{0}}\oplus S^{2}(\mathcal{N}/\mathcal{N}^{2}).\] _(ii) For any \(q\geq 0\), one has an embedding_ \[\operatorname{im}(H^{q}(\Omega^{1}_{X})\to H^{q}(\Omega^{1}_{X_{0}}))\subset\operatorname{im}(H^{q}(\Omega^{1}_{Y}|_{X_{0}})\to H^{q}(\Omega^{1}_{X_{0}})).\] Proof.: (i) The pull-back map \(\Omega^{1}_{Y}\to(\Omega^{1}_{X})_{+}\) induces an \(\mathcal{O}_{X_{0}}\)-linear map \(\Omega^{1}_{Y}|_{X_{0}}\to(\Omega^{1}_{X})_{+}/(\mathcal{N}^{2}\otimes\Omega^{1}_{X_{0}})\). On the other hand, the map \[\mathcal{N}_{-}\otimes\mathcal{N}_{-}\to(\Omega^{1}_{X})_{+}:a\otimes b\mapsto adb+bda\] induces an \(\mathcal{O}_{X_{0}}\)-linear map \(S^{2}(\mathcal{N}/\mathcal{N}^{2})\to(\Omega^{1}_{X})_{+}/(\mathcal{N}^{2}\otimes\Omega^{1}_{X_{0}})\). One can check using local coordinates that these maps are embeddings and give the claimed decomposition. (ii) Since the map \((\Omega^{1}_{X})_{+}\to\Omega^{1}_{X_{0}}\) factors through the quotient \((\Omega^{1}_{X})_{+}/(\mathcal{N}^{2}\otimes\Omega^{1}_{X_{0}})\), this follows from the decomposition proved in part (i). **Proposition 2.11**.: _Consider the cup-product map_ \[\mu^{q}_{e}:H^{q}\big{(}\Omega^{1}_{X_{0}}\big{)}\xrightarrow{\cup e}H^{q+1}(L).\] _Then one has_ \[\delta_{1q}(X)\geq\operatorname{rk}(\mu^{q}_{e}).\] Proof.: By Lemma 2.10(ii), \[\delta_{1q}(X)\geq\dim\operatorname{coker}(H^{q}(\Omega^{1}_{Y}|_{X_{0}})\to H^{q}(\Omega^{1}_{X_{0}})).\] But by Lemma 2.9, we have an identification \[\operatorname{coker}(H^{q}(\Omega^{1}_{Y}|_{X_{0}})\to H^{q}(\Omega^{1}_{X_{0}}))\simeq\operatorname{im}(\mu^{q}_{e}).\] **Corollary 2.12**.: _Assume that \(X_{0}\) is a smooth projective curve of genus \(g\geq 1\), and \(L\simeq\omega_{X_{0}}\). Then \(\delta_{10}(X)\geq 1\)._ **Example 2.13**.: Consider \(X=G(1|1,2|2)\). Then \(X_{0}=\mathbb{P}^{1}\times\mathbb{P}^{1}\), and \(\mathcal{N}/\mathcal{N}^{2}\simeq\mathcal{O}(-1,-1)^{\otimes 2}\), \(\mathcal{N}^{2}\simeq\omega_{X_{0}}=\mathcal{O}(-2,-2)\).
By Proposition 2.11, we have \[\delta_{11}(X)\geq\operatorname{rk}(\mu^{1}_{e}:H^{1}(\Omega^{1}_{X_{0}})\to H^{2}(\Omega^{2}_{X_{0}})).\] It is well known that \(e\in H^{1}(T_{X_{0}}\otimes\omega_{X_{0}})\simeq H^{1}(\Omega^{1}_{X_{0}})\) is nonzero, hence \(\mu^{1}_{e}\neq 0\), and we get \(\delta_{11}(X)\geq 1\). On the other hand, let \(\mathcal{B}\) denote the Berezinian of the tautological bundle on \(X=G(1|1,2|2)\). Then \(c_{1}(\mathcal{B})\) gives an element of \(H^{1}(\Omega^{1}_{X})\) projecting to a nonzero class in \(H^{1}(\Omega^{1}_{X_{0}})\) (see e.g., [8, Ch. 4.3]). Hence, we have \(\delta_{11}(X)=1\), and so \[\dim E^{11}_{2}(X)=\dim E^{02}_{2}(X)=1.\] In fact, it is easy to see that \(H^{i}(\Omega^{p}_{X})=0\) for \(p\geq 2\) and \(i=0,1\), which implies that the only nonzero \(E_{2}\) terms are \[E^{00}_{2}=E^{02}_{2}=E^{11}_{2}=E^{22}_{2}=k.\] ### Example in dimension \(1|3\) We will give an example showing that in dimension \(1|3\) the map on \(H^{1}\) induced by \(d:(\mathcal{O}_{X})_{-}\to(\Omega^{1}_{X})_{-}\) may have a nontrivial kernel (see Theorem 2.15 below). We start with a general construction of smooth supervarieties of dimension \(1|3\). **Lemma 2.14**.: _Let \(C\) be a smooth curve, \(V\) a vector bundle of rank \(3\) over \(C\),_ \[0\to\bigwedge^{2}V\to\mathcal{O}_{Y}\to\mathcal{O}_{C}\to 0 \tag{2.7}\] _a square zero extension. Then there exists a smooth supervariety \(X\) of dimension \(1|3\) with \(X_{0}=C\), \(\mathcal{N}/\mathcal{N}^{2}=V\), and \((\mathcal{O}_{X})_{+}\simeq\mathcal{O}_{Y}\) as a square zero extension of \(\mathcal{O}_{C}\)._ Proof.: Let \(e\in H^{1}(T_{C}\otimes\bigwedge^{2}V)\) be the class of the extension (2.7). Note that we have a natural isomorphism \[\bigwedge^{2}V\simeq V^{\vee}\otimes\det(V)\simeq\underline{\operatorname{Hom}}(V,\det(V)).\] Let \(D_{\leq 1}(V,\det(V))\) denote the sheaf of differential operators of order \(\leq 1\) from \(V\) to \(\det(V)\). Then we have an exact sequence \[0\to\underline{\operatorname{Hom}}(V,\det(V))\to D_{\leq 1}(V,\det(V))\xrightarrow{\sigma}T_{C}\otimes\underline{\operatorname{Hom}}(V,\det(V))\to 0,\] where \(\sigma\) is the symbol map. Since \(H^{2}(C,\underline{\operatorname{Hom}}(V,\det(V)))=0\), we can lift \(e\) to a class \(\widetilde{e}\in H^{1}(C,D_{\leq 1}(V,\det(V)))\). Let us represent \(\widetilde{e}\) by a Cech cocycle with respect to an affine covering \(C=(U_{1},U_{2})\), \(\phi_{12}:V|_{U_{12}}\to\det(V)|_{U_{12}}\). By assumption, it satisfies \[\phi_{12}(fs)=f\phi_{12}(s)+v_{12}(f)\wedge s,\] for \(f\in\mathcal{O}\), \(s\in V\), where \(v_{12}\in H^{0}(U_{12},T_{C}\otimes\bigwedge^{2}V)\) is the Cech cocycle representing \(e\). Now we are going to glue \(X\) from the split supervarieties over \(U_{i}\), \(i=1,2\), corresponding to \(\bigwedge^{\bullet}(V|_{U_{i}})\). We just need to construct an automorphism \(\alpha\) of the sheaf of \(\mathbb{Z}/2\)-graded rings \(\bigwedge^{\bullet}(V|_{U_{12}})\) inducing the identity modulo \(\bigwedge^{\geq 1}\). It is enough to specify how \(\alpha\) acts on \(\mathcal{O}\) and on \(V\). We set for \(f\in\mathcal{O}\), \(s\in V\), \[\alpha(f)=f+v_{12}(f),\ \ \alpha(s)=s+\phi_{12}(s).\] It is easy to check that this gives a well-defined automorphism of \(\bigwedge^{\bullet}(V|_{U_{12}})\), and thus, defines the supervariety \(X\) with the required properties. **Theorem 2.15**.: _Assume that the ground field \(k\) is an algebraically closed field of characteristic \(\neq 2,3\).
For any smooth projective curve \(C\) of genus \(2\) over \(k\), there exists a smooth supervariety \(X\) of dimension \(1|3\) with \(X_{0}=C\), and such that_ \[E_{2}^{01}(X)_{-}=\ker(H^{1}(X,(\mathcal{O}_{X})_{-})\to H^{1}(X,(\Omega^{1}_{X})_{-}))\neq 0.\] _In particular, the Hodge to de Rham spectral sequence of \(X\) does not degenerate at \(E_{2}\)._ We start with some auxiliary statements. **Lemma 2.16**.: _Assume the characteristic of \(k\) is \(\neq 2,3\). For a \(k\)-vector space \(V\), the linear map_ \[\tau_{V}:V\otimes\bigwedge^{2}V\to V\otimes\bigwedge^{2}V:v_{1}\otimes(v_{2}\wedge v_{3})\mapsto v_{3}\otimes(v_{1}\wedge v_{2})-v_{2}\otimes(v_{1}\wedge v_{3})\] _is invertible._ Proof.: Let us consider the projector \(\pi\) on \(V\otimes\bigwedge^{2}V\) (onto a subspace isomorphic to \(\bigwedge^{3}V\)) given by \[v_{1}\otimes(v_{2}\wedge v_{3})\mapsto\frac{1}{3}\big{(}v_{1}\otimes(v_{2}\wedge v_{3})+v_{2}\otimes(v_{3}\wedge v_{1})+v_{3}\otimes(v_{1}\wedge v_{2})\big{)}.\] Then we have \(\tau_{V}=3\pi-\operatorname{id}\), so it has eigenvalues \(2\) and \(-1\). **Lemma 2.17**.: _Assume the characteristic of \(k\) is \(\neq 2,3\). Let \(C\) be a smooth projective curve over \(k\), and let \(V\) be a rank \(3\) vector bundle on \(C\) such that_ \[H^{1}(\underline{\operatorname{End}}_{0}(V)\otimes\det(V))=H^{0}(V)=H^{0}(\det(V)^{-1})=0,\] _while \(H^{0}(\omega_{C}\otimes V)\neq 0\) and \(H^{1}(\det(V))\neq 0\). Then there exists a smooth supervariety \(X\) of dimension \(1|3\) with \(X_{0}=C\), and \(\mathcal{N}/\mathcal{N}^{2}=V\), such that \(E_{2}^{01}(X)_{-}\neq 0\)._ Proof.: By Lemma 2.14, for any class \(e\in H^{1}(T_{C}\otimes\bigwedge^{2}V)\), there exists a supervariety \(X\) of dimension \(1|3\) with \(X_{0}=C\), and \(\mathcal{N}/\mathcal{N}^{2}=V\), such that the extension class of \((\mathcal{O}_{X})_{+}\to\mathcal{O}_{C}\) is \(e\). We will show that there exists a choice of \(e\) such that \(E_{2}^{01}(X)_{-}\neq 0\). The exact sequence \[0\to\mathcal{N}^{3}\to\mathcal{O}_{-}\to V\to 0\] shows that the map \(H^{1}(\mathcal{N}^{3})\to H^{1}(\mathcal{O}_{-})\) is injective (since \(H^{0}(V)=0\)). Thus, it is enough to prove that the map \(H^{1}(\mathcal{N}^{3})\to H^{1}((\mathcal{N}\Omega^{1})_{-})\) has a nontrivial kernel. Since \(\mathcal{N}^{3}\Omega^{1}_{X_{0}}\simeq\det(V)\otimes\omega_{C}\) has trivial \(H^{1}\) (which is dual to \(H^{0}(\det(V)^{-1})=0\)), it is enough to prove that the map \[H^{1}(\mathcal{N}^{3})\to H^{1}((\mathcal{N}\Omega^{1})_{-}/\mathcal{N}^{3}\Omega^{1}_{X_{0}})\] has a nontrivial kernel. We have an exact sequence of \(\mathcal{O}_{X_{0}}\)-modules \[0\to\mathcal{N}^{2}d\mathcal{N}\to(\mathcal{N}\Omega^{1})_{-}/\mathcal{N}^{3}\Omega^{1}_{X_{0}}\to\mathcal{N}/\mathcal{N}^{2}\otimes\Omega^{1}_{X_{0}}\to 0. \tag{2.8}\] Note that if \(\sigma_{i}:\mathcal{O}_{X_{0}}\to(\mathcal{O}_{X})_{+}\) are local splittings of the extension \((\mathcal{O}_{X})_{+}\to\mathcal{O}_{C}\) then we have induced splittings of (2.8) given by \(n\otimes df\mapsto\widetilde{n}\cdot d\sigma_{i}(f)\) (where \(\widetilde{n}\in\mathcal{N}_{-}\) is a lifting of \(n\in\mathcal{N}/\mathcal{N}^{2}\)). The difference between two such splittings is given by the \(1\)-cocycle \(n\otimes df\mapsto\widetilde{n}\cdot d\langle v_{ij},df\rangle\), where \(v_{ij}\) is the \(1\)-cocycle representing \(e\in H^{1}(T_{X_{0}}\otimes\bigwedge^{2}V)\).
Using the Leibniz rule we deduce that the extension class of (2.8) in \(\operatorname{Ext}^{1}(V\otimes\Omega^{1}_{X_{0}},V\otimes\bigwedge^{2}V)\) is equal to the image of \(e\) under the natural map \[H^{1}(T_{X_{0}}\otimes\bigwedge^{2}V)\simeq\operatorname{Ext}^{1}(\Omega^{1}_{X_{0}},\bigwedge^{2}V)\to\operatorname{Ext}^{1}(V\otimes\Omega^{1}_{X_{0}},V\otimes\bigwedge^{2}V)\xrightarrow{\tau_{V}\circ}\ \operatorname{Ext}^{1}(V\otimes\Omega^{1}_{X_{0}},V\otimes\bigwedge^{2}V),\] where the last arrow is induced by the automorphism \(\tau_{V}\) (see Lemma 2.16). We have an isomorphism \(\bigwedge^{2}V\simeq V^{\vee}\otimes\det(V)\), so we have a direct sum decomposition \[V\otimes\bigwedge^{2}V\simeq V\otimes V^{\vee}\otimes\det(V)\simeq\det(V)\oplus\underline{\operatorname{End}}_{0}(V)\otimes\det(V).\] By the assumption, \(H^{1}(\underline{\operatorname{End}}_{0}(V)\otimes\det(V))=0\), so the map \[H^{1}(\mathcal{N}^{3})\simeq H^{1}(\det(V))\to H^{1}(V\otimes\bigwedge^{2}V)\] is an isomorphism. Thus, it is enough to check that the connecting homomorphism \[H^{0}(\omega_{C}\otimes V)\to H^{1}(V\otimes\bigwedge^{2}V)\] associated with (2.8) is nonzero. Since the projection \(V\otimes\bigwedge^{2}V\to\det(V)\) induces an isomorphism on \(H^{1}\), this is equivalent to showing that the homomorphism \[H^{0}(\omega_{C}\otimes V)\xrightarrow{\cup e}\ H^{1}(\det(V))\] is nonzero, where we view \(e\) as an element of \(H^{1}(\omega_{C}^{-1}\otimes V^{\vee}\otimes\det(V))\). Thus, we need to choose \(e\) such that this cup product map is nonzero. This is possible as long as the map \[H^{0}(\omega_{C}\otimes V)\otimes H^{1}(\omega_{C}^{-1}\otimes V^{\vee}\otimes\det(V))\to H^{1}(\det(V))\] is nonzero. By Serre duality, we need to check that the map \[H^{0}(\omega_{C}\otimes V)\otimes H^{0}(\omega_{C}\otimes\det(V)^{-1})\to H^{0}(\omega_{C}^{2}\otimes V\otimes\det(V)^{-1})\] is nonzero. By assumption, there exists a nonzero element \(s\in H^{0}(\omega_{C}\otimes\det(V)^{-1})\) (this space is dual to \(H^{1}(\det(V))\neq 0\)). The multiplication by \(s\) is an injective map \[H^{0}(\omega_{C}\otimes V)\xrightarrow{s}\ H^{0}(\omega_{C}^{2}\otimes V\otimes\det(V)^{-1}).\] Hence, our assertion follows from the non-vanishing of \(H^{0}(\omega_{C}\otimes V)\). We will need some facts about analogs of Prym varieties associated with cyclic coverings (where we assume that the characteristic is \(\neq 3\)). **Lemma 2.18**.: _Let \(\pi:\widetilde{C}\to C\) be a cyclic covering associated with a line bundle \(\xi\) of order \(3\) on a smooth projective curve \(C\), and let \(\tau:\widetilde{C}\to\widetilde{C}\) be an action of a nontrivial element of the Galois group \(\mathbb{Z}/3\mathbb{Z}\). (i) The morphisms \(\pi^{*}:J_{C}\to J_{\widetilde{C}}\) and \(\operatorname{Nm}_{\widetilde{C}/C}:J_{\widetilde{C}}\to J_{C}\) are dual to each other with respect to the canonical principal polarizations on the Jacobians. One has_ \[\operatorname{Nm}_{\widetilde{C}/C}\circ\pi^{*}=[3]_{J_{C}},\ \ \pi^{*}\circ\operatorname{Nm}_{\widetilde{C}/C}=\tau_{*}+\tau_{*}^{-1}+\operatorname{id}_{J_{\widetilde{C}}}.\] _(ii) One has \(\ker(\pi^{*})=\langle\xi\rangle\subset J_{C}\), the subgroup of order \(3\) generated by \(\xi\). (iii) Set_ \[\widetilde{P}=\ker(\operatorname{Nm}_{\widetilde{C}/C}:J_{\widetilde{C}}\to J_{C}),\] _and let \(P\subset\widetilde{P}\) denote the connected component of \(0\) in \(\widetilde{P}\) (\(P\) is an analog of the Prym variety).
Then \(P=(\tau_{*}-\operatorname{id})(J_{\widetilde{C}})\), \(\tau_{*}-\operatorname{id}:P\to P\) is an isogeny, and \(|\widetilde{P}/P|=3\)._ Proof.: (i) This is well known (see [1, Sec. 11.4]). (ii) Recall that \(\pi_{*}\mathcal{O}_{\widetilde{C}}\simeq\mathcal{O}_{C}\oplus\xi\oplus\xi^{-1}\). Hence, \[h^{0}(\pi^{*}\xi)=h^{0}(\xi\otimes\pi_{*}\mathcal{O})=h^{0}(\pi_{*}\mathcal{O})=1,\] so \(\pi^{*}\xi\) is a degree \(0\) line bundle on \(\widetilde{C}\) with a nonzero section, hence \(\pi^{*}\xi\simeq\mathcal{O}_{\widetilde{C}}\). Conversely, assume \(\pi^{*}M\simeq\mathcal{O}\). Then \[M\otimes\pi_{*}\mathcal{O}\simeq\pi_{*}\pi^{*}M\simeq\pi_{*}\mathcal{O},\] so \(M\) is isomorphic to one of the summands in \(\pi_{*}\mathcal{O}\). (iii) Set \(A:=J_{\widetilde{C}}/P\). We have an exact sequence of abelian varieties \[0\to P\to J_{\widetilde{C}}\xrightarrow{p}A\to 0, \tag{2.9}\] where \(p\) is induced by \(\operatorname{Nm}_{\widetilde{C}/C}\), and an isogeny \(f:A\to J_{C}\), such that \(\ker(f)\simeq\widetilde{P}/P\) and \(\operatorname{Nm}_{\widetilde{C}/C}=f\circ p\). Consider the dual isogeny \(\hat{f}:J_{C}\to\hat{A}\), and the dual sequence to (2.9), \[0\to\hat{A}\to J_{\widetilde{C}}\to\hat{P}\to 0.\] Then the morphism \(\pi^{*}:J_{C}\to J_{\widetilde{C}}\), being dual to \(\operatorname{Nm}_{\widetilde{C}/C}=f\circ p\), factors as the composition of \(\hat{f}\) followed by the embedding \(\hat{A}\to J_{\widetilde{C}}\). Hence, we have \[\ker(\hat{f})=\ker(\pi^{*})=\langle\xi\rangle.\] But \(\ker(f)\) is Cartier dual to \(\ker(\hat{f})\), so we deduce \[|\widetilde{P}/P|=|\ker(f)|=|\ker(\hat{f})|=3.\] Note that \(\operatorname{Nm}_{\widetilde{C}/C}\circ\tau_{*}=\operatorname{Nm}_{\widetilde{C}/C}\), so \((\tau_{*}-\operatorname{id})(J_{\widetilde{C}})\) is contained in \(\widetilde{P}\), and hence in \(P\). It remains to show that the morphism \((\tau_{*}-\operatorname{id}):P\to P\) is an isogeny. It is enough to show that the induced map on the tangent space \(T_{0}P\) is nondegenerate. We have \[T_{0}P\simeq\ker(H^{1}(\widetilde{C},\mathcal{O}_{\widetilde{C}})\xrightarrow{\operatorname{tr}}H^{1}(C,\mathcal{O}_{C})).\] The decomposition \(\pi_{*}\mathcal{O}_{\widetilde{C}}=\mathcal{O}_{C}\oplus\xi\oplus\xi^{-1}\) is compatible with the action of \(\tau\): it corresponds to the eigenvalues \(1\), \(\zeta\) and \(\zeta^{-1}\), where \(\zeta\) is a primitive \(3\)rd root of unity. Hence \(\tau\) acts on \(T_{0}P\) with eigenvalues \(\zeta\) and \(\zeta^{-1}\), which implies that \(\tau_{*}-\operatorname{id}\) is nondegenerate. Proof of Theorem 2.15.: It is enough to construct a rank \(3\) vector bundle \(V\) on \(C\) satisfying the assumptions of Lemma 2.17. **Step 1. A triple covering and a special line bundle of order \(3\)**. We start by choosing a line bundle \(\xi\) of order \(3\) on \(C\). Let \(\pi:\widetilde{C}\to C\) be the corresponding cyclic covering (where \(\widetilde{C}\) has genus \(4\)). We will use the notations and the results of Lemma 2.18. We claim that there exists a line bundle \(\eta\) of order \(3\), such that \(\eta\not\in\langle\xi\rangle\), and \(\pi^{*}\eta\in P\). Indeed, since \(\operatorname{Nm}_{\widetilde{C}/C}\circ\pi^{*}=[3]_{J_{C}}\), we have a well-defined homomorphism \(J_{C}[3]/\langle\xi\rangle\to\widetilde{P}\), where \(J_{C}[3]:=\ker([3]_{J_{C}})\). Let us consider the induced homomorphism of finite groups \[J_{C}[3]/\langle\xi\rangle\to\widetilde{P}/P.\] Since \(C\) has genus \(2\), we have \(|J_{C}[3]/\langle\xi\rangle|=27\), whereas \(|\widetilde{P}/P|=3\).
Hence, there exists a nonzero element \(\eta+\langle\xi\rangle\) in the kernel, where \(\eta\in J_{C}[3]\setminus\langle\xi\rangle\). But this means that \(\pi^{*}\eta\in P\), as claimed. **Step 2. Choosing a point**. We claim that for a generic point \(p\in C\) one has \[H^{*}(\xi(p))=H^{*}(\xi^{-1}(p))=H^{*}(\eta(p))=H^{*}(\eta\otimes\xi(p))=H^{*}(\eta\otimes\xi^{-1}(p))=0.\] Indeed, for any nontrivial line bundle \(M\) of degree \(0\) one has \(h^{0}(\omega_{C}\otimes M)=1\), so there exists a unique effective divisor \(D_{M}\) such that \(\omega_{C}\otimes M\simeq\mathcal{O}_{C}(D_{M})\). Now we just need to choose \(p\) outside of the supports of the divisors \(D_{M^{-1}}\) for \(M\in\{\xi,\xi^{-1},\eta,\eta\xi,\eta\xi^{-1}\}\). It follows that the line bundle \(\pi^{*}\eta\in P\) satisfies \[H^{0}(\widetilde{C},\pi^{*}\eta(\pi^{-1}(p)))=H^{0}(C,\eta(p)\oplus\eta\xi(p)\oplus\eta\xi^{-1}(p))=0. \tag{2.10}\] **Step 3. Choosing a line bundle in \(P\)**. We claim that there exists a line bundle \(L\) of degree \(1\) on \(\widetilde{C}\) such that \(\operatorname{Nm}_{\widetilde{C}/C}(L)\simeq\mathcal{O}_{C}(p)\), \(H^{0}(\widetilde{C},L)=0\), and \[H^{*}(\widetilde{C},\tau_{*}L\otimes L^{-1}(\pi^{-1}(p)))=H^{*}(\widetilde{C},\tau_{*}^{-1}L\otimes L^{-1}(\pi^{-1}(p)))=0.\] Indeed, let us look for \(L\) in the form \(L=M(q)\), where \(p=\pi(q)\) and \(M\in P\). We claim that our conditions are satisfied for generic \(M\in P\). Indeed, let us denote by \(\Theta_{p}\subset J_{\widetilde{C}}\) the theta-divisor associated with \(\pi^{-1}(p)\). We need to check that the following conditions hold for generic \(M\in P\): 1. \(H^{0}(\widetilde{C},M(q))=0\); 2. \(\phi^{\pm}(M)\not\in\Theta_{p}\), where \(\phi^{\pm}=t_{\tau^{\pm 1}(q)-q}\circ(\tau_{*}^{\pm 1}-\operatorname{id})\), with \(t_{\tau^{\pm 1}(q)-q}\) denoting the translation by \(\mathcal{O}(\tau^{\pm 1}(q)-q)\in P\). For (1), we observe that this condition holds for \(M=\pi^{*}\eta\) by (2.10), since \(H^{0}(\pi^{*}\eta(q))\subset H^{0}(\pi^{*}\eta(\pi^{-1}(p)))\). For (2), we observe that the maps \(\phi^{\pm}:P\to P\) are surjective, hence, it is enough to prove the existence of \(M^{\prime}\in P\) such that \(M^{\prime}\not\in\Theta_{p}\). By (2.10), we can take \(M^{\prime}=\pi^{*}\eta\). **Step 4. Checking the properties of the vector bundle**. Now we set \(V=\pi_{*}(L)\). We have \(\det(V)\simeq\operatorname{Nm}_{\widetilde{C}/C}(L)\simeq\mathcal{O}_{C}(p)\), so \[H^{0}(\det(V)^{-1})=H^{0}(\mathcal{O}_{C}(-p))=0,\] while \(H^{1}(\det(V))=H^{1}(\mathcal{O}_{C}(p))\neq 0\). We also have \[H^{0}(C,V)\simeq H^{0}(\widetilde{C},L)=0,\] \[H^{0}(C,\omega_{C}\otimes V)\simeq H^{0}(\widetilde{C},\pi^{*}\omega_{C}\otimes L)\simeq H^{0}(\widetilde{C},\omega_{\widetilde{C}}\otimes L)\neq 0,\] since \(\deg(L)=1\). Finally, since \(\pi\) is unramified, we have an isomorphism \(V^{\vee}\simeq\pi_{*}(L^{-1})\), so \[V^{\vee}\otimes V\simeq\pi_{*}(\pi^{*}\pi_{*}(L)\otimes L^{-1}).\] Furthermore, \(\pi^{*}\pi_{*}(L)\simeq L\oplus\tau^{*}L\oplus(\tau^{-1})^{*}L\).
Hence, \[V^{\vee}\otimes V\simeq\pi_{*}\mathcal{O}\oplus\pi_{*}(\tau^{*}L\otimes L^{-1})\oplus\pi_{*}((\tau^{-1})^{*}L\otimes L^{-1}).\] We need to check that \[h^{1}(V^{\vee}\otimes V\otimes\det(V))=h^{1}(V^{\vee}\otimes V(p))=1.\] Since \(\pi_{*}\mathcal{O}\otimes\mathcal{O}_{C}(p)=\mathcal{O}_{C}(p)\oplus\xi(p)\oplus\xi^{-1}(p)\), where by Step 2 we have \(H^{1}(\xi(p))=H^{1}(\xi^{-1}(p))=0\) and \(h^{1}(\mathcal{O}_{C}(p))=1\), this follows from the vanishing \[H^{1}(C,\pi_{*}((\tau^{\pm 1})^{*}L\otimes L^{-1})(p))=H^{1}(\widetilde{C},(\tau^{\pm 1})^{*}L\otimes L^{-1}(\pi^{-1}(p)))=0.\] _Remark 2.19_.: It is instructive to observe that the construction of Theorem 2.15 breaks in characteristic \(2\) (since Lemma 2.16 needs characteristic \(\neq 2\)). Indeed, by Proposition 2.3, in characteristic \(2\) we have \(E_{2}^{01}(X)_{-}=0\).
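To see this failure concretely, note that for linearly independent \(v_{1},v_{2},v_{3}\in V\) the defining formula for \(\tau_{V}\) yields \[\tau_{V}\big{(}v_{1}\otimes(v_{2}\wedge v_{3})+v_{2}\otimes(v_{3}\wedge v_{1})+v_{3}\otimes(v_{1}\wedge v_{2})\big{)}=2\big{(}v_{1}\otimes(v_{2}\wedge v_{3})+v_{2}\otimes(v_{3}\wedge v_{1})+v_{3}\otimes(v_{1}\wedge v_{2})\big{)},\] so in characteristic \(2\) this nonzero cyclic element lies in \(\ker(\tau_{V})\), and \(\tau_{V}\) is not invertible.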
2310.13032
Quality-Diversity through AI Feedback
In many text-generation problems, users may prefer not only a single response, but a diverse range of high-quality outputs from which to choose. Quality-diversity (QD) search algorithms aim at such outcomes, by continually improving and diversifying a population of candidates. However, the applicability of QD to qualitative domains, like creative writing, has been limited by the difficulty of algorithmically specifying measures of quality and diversity. Interestingly, recent developments in language models (LMs) have enabled guiding search through AI feedback, wherein LMs are prompted in natural language to evaluate qualitative aspects of text. Leveraging this development, we introduce Quality-Diversity through AI Feedback (QDAIF), wherein an evolutionary algorithm applies LMs to both generate variation and evaluate the quality and diversity of candidate text. When assessed on creative writing domains, QDAIF covers more of a specified search space with high-quality samples than do non-QD controls. Further, human evaluation of QDAIF-generated creative texts validates reasonable agreement between AI and human evaluation. Our results thus highlight the potential of AI feedback to guide open-ended search for creative and original solutions, providing a recipe that seemingly generalizes to many domains and modalities. In this way, QDAIF is a step towards AI systems that can independently search, diversify, evaluate, and improve, which are among the core skills underlying human society's capacity for innovation.
Herbie Bradley, Andrew Dai, Hannah Teufel, Jenny Zhang, Koen Oostermeijer, Marco Bellagente, Jeff Clune, Kenneth Stanley, Grégory Schott, Joel Lehman
2023-10-19T12:13:58Z
http://arxiv.org/abs/2310.13032v4
# Quality-Diversity through AI Feedback ###### Abstract In many text-generation problems, users may prefer not only a single response, but a diverse range of high-quality outputs from which to choose. Quality-diversity (QD) search algorithms aim at such outcomes, by continually improving and diversifying a population of candidates. However, the applicability of QD to qualitative domains, like creative writing, has been limited by the difficulty of algorithmically specifying measures of quality and diversity. Interestingly, recent developments in language models (LMs) have enabled guiding search through _AI feedback_, wherein LMs are prompted in natural language to evaluate qualitative aspects of text. Leveraging this development, we introduce Quality-Diversity through AI Feedback (QDAIF), wherein an evolutionary algorithm applies LMs to both generate variation and evaluate the quality and diversity of candidate text. When assessed on creative writing domains, QDAIF covers more of a specified search space with high-quality samples than do non-QD controls. Further, human evaluation of QDAIF-generated creative texts validates reasonable agreement between AI and human evaluation. Our results thus highlight the potential of AI feedback to guide open-ended search for creative and original solutions, providing a recipe that seemingly generalizes to many domains and modalities. In this way, QDAIF is a step towards AI systems that can independently search, diversify, evaluate, and improve, which are among the core skills underlying human society's capacity for innovation.1 Footnote 1: Project Page: [https://qdaif.github.io/](https://qdaif.github.io/) ## 1 Introduction Human innovation is not only a generative capacity for creativity, but also includes the ability to evaluate the subjective quality of new ideas and artifacts. Great ideas are rarely generated all at once out of whole cloth, but rather gradually emerge through divergent chains of elaboration and revision (Stanley & Lehman, 2015). To successfully navigate such a tree of ideas, the creator must evaluate which steps in a chain are worth pursuing further, a question that can be highly subjective, especially in domains with artistic or literary dimensions. Until now, even if AI could provide candidates, the hope for such subjectively tinged evaluation lay firmly with humans. However, the emerging foundation model technology of recent years (Bommasani et al., 2021) now means that the model can also play the role of evaluator, even when the evaluation is in part subjective (Madaan et al., 2023). In this way, for the first time, an entire ideation process that returns a diverse set of interesting artifacts can in principle be automated. This process cannot be run by LMs entirely on their own, but requires chaining together a search algorithm with model calls in a nuanced way. This paper highlights one way to achieve this potential: to combine LMs with the field of quality-diversity (QD) (Mouret & Clune, 2015), which centers on how to design search processes that produce high-quality solutions that span a design space. The main insight in QD algorithms is to explicitly maintain and seek high-quality diverse responses. Typically such search algorithms require hand-designed measures of diversity and quality, as well as a way to generate meaningful variation. Yet the most interesting and complex domains nearly always involve notions of performance, diversity, and variation that are subjective or difficult to specify algorithmically. 
Extending work that generates variation through LMs (Lehman et al., 2022; Meyerson et al., 2023) and evaluates the quality of potential solutions through LMs (Ahn et al., 2022), we show that LMs can also be used to evaluate qualitative aspects of diversity. In this way, LMs can instantiate the three main ingredients of QD search, thereby enabling powerful new QD algorithms that can ride the coattails of continual LM advances, which we name Quality-Diversity through AI Feedback (QDAIF). QDAIF can explore and return diverse, high-quality responses to an LM prompt, without relying on hand-crafted diversity measures or on model fine-tuning (although it could also be used for LMs to self-improve by generating fine-tuning data (Lehman et al., 2022; Chen et al., 2023), an interesting step towards AI-generating algorithms that curate their own learning environments via generated data (Clune, 2019)). We evaluate QDAIF across three creative writing domains: opinion writing, short stories, and poetry. The idea is that in such creative domains, users often enjoy seeing a wide range of possible stories or poems from which to choose or draw inspiration. Quantitative results indicate that QDAIF significantly outperforms existing baselines. Additionally, through human evaluation, we observe a strong alignment between human and AI-generated feedback, providing empirical evidence that AI feedback is grounded and that the method can work in practice (i.e. it yields improved quality and diversity as measured by humans). Overall, QDAIF brings us a step closer to AI models that can independently search and innovate, one of the keystone abilities of humans that allow them to create culture and science (Stanley et al., 2017). Figure 1: **QDAIF (left) covers more of the search space with diverse, high-quality stories compared to the baseline (right).** The baseline is **LMX Quality-Only** (Meyerson et al., 2023), which optimizes only for the quality of solutions. QDAIF discovered more interesting stories about a spy and a politician, covering examples ranging from romance stories with a happy ending to horror stories with a tragic ending. The baseline produced a story (right-middle position, starting with “Jason”) with a lower quality score due to the lack of a desired spy character (denoted by the red-colored bin, for a story with a neutral ending, leaning to horror). QDAIF discovered a better, more relevant story (bottom-middle position, starting with “a wealthy politician”) for this same neutral bin. ## 2 Background & Related Work ### Evolution through Large Models Advancements in language models have enabled new kinds of powerful search algorithms that apply LMs as search operators, e.g. to create variation or evaluate solutions. While other search algorithms could also be used, this paper creates a QDAIF algorithm by extending Evolution through Large Models (ELM) (Lehman et al., 2022), a framework for evolutionary search for code or text that uses LMs to generate intelligent variation (for example through specialized language models trained on code diffs (Bradley et al., 2023b), or through simple few-shot prompting (Meyerson et al., 2023; Chen et al., 2023)). Most QDAIF results in this paper generate new search candidates through Language Model Crossover (LMX) (Meyerson et al., 2023), a recent and general few-shot prompting approach that can evolve e.g.
mathematical expressions, sentences, Python programs, and prompts for text-to-image models, by leveraging in-context learning capabilities of LMs (Brown et al., 2020). The approach is simple: a few existing search candidates are concatenated into a prompt, predisposing the LM to generate new, similar candidates. In this way, LMX enables creating intelligent variation without requiring any specially-trained models. Our experimental implementation builds on OpenELM (Bradley et al., 2023a), a versatile open-source Python library designed for research into LM-based evolutionary algorithms. ### Quality Diversity Algorithms Traditional optimization algorithms aim to discover a single high-quality solution, which, while appropriate for many situations, can fail to _illuminate_ the full range of possible high-quality solutions. For creative and design problems in particular, a user may want to choose what they think is most appropriate from a diversity of such candidates. In contrast, Quality Diversity (QD) algorithms aim to optimize not just for a single optimal solution, but for a diverse set of high-quality solutions (Lehman and Stanley, 2011; Mouret and Clune, 2015; Pugh et al., 2016; Fontaine and Nikolaidis, 2021). QD algorithms can thus provide a richer landscape of solutions, enabling adaptability and flexibility in addressing multifaceted challenges (Cully et al., 2015). In addition to a quality measure (i.e. an objective function), QD requires a diversity metric to encourage desired axes of diversity. For instance, Lehman et al. (2022) evolved Python programs to design varied locomoting robots, where the diversity dimensions are the robot's height, width, and mass. A significant limitation of existing QD algorithms lies in their reliance on low-level quality and diversity measures (Mouret and Clune, 2015). This requirement complicates applying QD algorithms to complex and creative domains, such as the creative writing ones explored in this paper. Intuitively, such measures (e.g. sensor readings (Cully et al., 2015), feature engineering (Manning, 2009)) lack the subtlety and depth needed to capture the complexities of human creativity and intuition, e.g. nuances, moods, or cultural references that resonate in human experience. Interestingly, from having trained on vast amounts of human-generated data, LMs can begin to emulate such human-nuanced judgments (cf. Section 2.3). Thus, by employing an LM to evaluate both quality and diversity, QDAIF significantly simplifies and enlarges the range of domains QD can be applied to. Feedback from learned ML models has been used in prior work to reduce the need for hand-crafted heuristics or expensive ground-truth evaluations. In model-based QD, learned feedback is supplied by surrogate models. Gaier et al. (2017) introduced the use of surrogates (via a Gaussian process) to predict fitness (quality). Subsequently, Keller et al. (2020) introduced a learned model to predict both fitness and behavior characteristics (diversity), becoming a standard approach (Lim et al., 2021; 2022; Zhang et al., 2022; Bhatt et al., 2022). Surrogate models require domain-specific training data to update their predictions on a limited domain, whereas AI feedback leverages off-the-shelf instruction-tuned LMs (Chung et al., 2022; Ouyang et al., 2022) to automate expensive human feedback for a variety of evaluation tasks.
More recently, Fontaine and Nikolaidis (2021) utilized CLIP embeddings (Radford et al., 2021) as both quality and diversity measures to navigate the search space of StyleGAN (Karras et al., 2019), producing a range of faces with the desired characteristic (e.g. "A person with red hair"). We show that using pre-trained surrogate models is more prone to reward hacking in the natural language case (Skalse et al., 2022) (cf. Appendix A.2). Hence, QDAIF capitalizes on the strengths of general-purpose LMs for evaluating generated solutions. ### AI Feedback Recent months have seen a surge in research that leverages LMs to provide feedback on the training, evaluation, or problem-solving capabilities of other LMs (Bai et al., 2022; Perez et al., 2022; Shinn et al., 2023; Wang et al., 2023; Colas et al., 2023; Zhang et al., 2023; Lee et al., 2023). Bai et al. (2022) show that LM-generated critiques and refinements are instrumental in enhancing performance on metrics like helpfulness and harmlessness. One particularly promising direction for AI feedback is self-refinement, where LMs evaluate and score their own generations, and then iteratively improve their output (Bai et al., 2022; Madaan et al., 2023). Self-refinement has demonstrated significant improvement in output quality as gauged by human evaluators (Madaan et al., 2023), underscoring the generation-discrimination discrepancy (Saunders et al., 2022), meaning that it is often easier for a model to evaluate the quality of a generation than to generate the same high-quality text. Complementary to single-objective optimization with self-refinement, QDAIF utilizes AI feedback to assess diversity in addition to quality, facilitating more varied and improved text generation over multiple iterations of refinement through evolution. ## 3 Approach Figure 2 provides an overview of the approach, which is to extend a common QD algorithm (MAP-Elites) with LM operators that generate variation, as well as evaluate both the quality and diversity of candidate solutions. The result is a search algorithm capable of iterative discovery and refinement, applicable to subjective text-based domains. **MAP-Elites.** Our QDAIF implementation builds upon MAP-Elites (Mouret and Clune, 2015), a widely used QD algorithm (Lehman et al., 2022; Cully et al., 2015; Nilsson and Cully, 2021; Vassiliades et al., 2016). MAP-Elites discretizes the diversity space (i.e. dimensions of relevant diversity) into a grid, called the archive. The overarching objective is to populate each grid bin (or cell) within the archive with as high-quality a solution as possible. Typically, the MAP-Elites archive is initiated with randomly-generated solutions (although that does not fit the setting described here; initialization is discussed below). An iteration in MAP-Elites follows these steps: (1) randomly select an existing solution from the archive, (2) mutate the chosen solution to generate a new solution, (3) evaluate the new solution's quality and diversity characteristics, and (4) if the new solution is higher quality than the current occupant at the cell corresponding to its diversity characteristics, replace the previous occupant with the new solution. For a new solution to be added to the archive, it has to improve either the quality or the diversity of the grid, meaning that it has to either fill an empty bin or perform better than the solution already in its bin.
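To make the loop concrete, the following is a minimal sketch of the MAP-Elites iteration just described; `mutate`, `evaluate_quality`, and `to_bin` stand in for the LM-based operators introduced below, and all names and the toy demo are illustrative rather than taken from the paper's OpenELM implementation.

```python
import random

def map_elites(seed_solutions, mutate, evaluate_quality, to_bin, iterations=2000):
    """Minimal MAP-Elites: the archive maps a diversity bin to its best (quality, solution)."""
    archive = {}
    for s in seed_solutions:  # initialize the archive from seed generations
        q, b = evaluate_quality(s), to_bin(s)
        if b not in archive or q > archive[b][0]:
            archive[b] = (q, s)
    for _ in range(iterations):
        parent = random.choice(list(archive.values()))[1]  # (1) select a random elite
        child = mutate(parent)                             # (2) generate variation
        q, b = evaluate_quality(child), to_bin(child)      # (3) evaluate quality/diversity
        if b not in archive or q > archive[b][0]:          # (4) fill or improve the bin
            archive[b] = (q, child)
    return archive

# Toy demo: evolve strings towards length 10, binned by vowel count.
demo = map_elites(
    seed_solutions=["hello", "world", "quality diversity"],
    mutate=lambda s: s + random.choice("aeioubcdfg "),
    evaluate_quality=lambda s: -abs(len(s) - 10),
    to_bin=lambda s: min(sum(c in "aeiou" for c in s), 9),
    iterations=200,
)
print(sorted((b, q) for b, (q, _s) in demo.items()))
```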
QDAIF distinguishes itself from standard MAP-Elites in four key areas: archive initialization, solution mutation, solution evaluation, and grid discretization (cf. Figure 2). We provide details on each of these differences below. **Initialization and Mutation.** For archive initialization, QDAIF employs few-shot prompting, generating solutions based on a hand-chosen set of seed examples. We list in Appendix A.17 the three few-shot examples utilized in each domain, each chosen to span a breadth of diversity characteristics. For example, in a domain where you want diversity of sentiments (like the Opinions domain described in Section 4.1), the few-shot examples demonstrate positive, neutral, and negative sentiments. For solution mutation, QDAIF employs LMX (simplified from "LMX-Near", as detailed in Meyerson et al. (2023)). LMX evolves varied text representations (e.g. mathematical expressions, sentences, Python code) by leveraging effective in-context learning (Brown et al., 2020). LMX prompts are kept simple, typically starting with "_Here is a random example of_". Appendix A.18 shows the full LMX prompts. We also introduce a novel mutation method with instruction-following prompts for poetry in Section 4.4. **Archive Measures.** While it is sometimes feasible to devise hand-crafted heuristics to evaluate the quality of a solution (e.g. efficiency in locomotion) or diversity characteristics (e.g. a robot's size and mass), this approach falters as domains become more complex and nuanced, as in creative writing. For example, hand-crafting robust heuristics for qualitative aspects of a story, such as its genre (e.g. romance vs. horror), is very difficult. QDAIF circumvents the need for hand-coded measures through prompting LMs with easily-written natural language queries to generate feedback. In particular, capable LMs trained on expansive text corpora can begin to mirror human intuition across a range of potentially subtle diversity characteristics. Figure 2: **Overview of Quality-Diversity through AI Feedback (QDAIF).** Dark components are where Language Models (LM) are employed. QDAIF randomly selects a solution from the QD archive. This chosen solution (parent) forms part of the prompt that is fed into an LM, undergoing LMX mutation to produce a new solution. An LM then evaluates the quality and diversity attributes of the new solution. We compare the newly evaluated solution with the existing solution in the corresponding bin of the QD archive, and update the archive accordingly. **Quantifying Performance and Diversity.** For quality assessment, we prompt the LM to discern whether the input text contains a high-quality solution or pertains to the requested topic, requesting a "yes" or "no" response. The solution's quality estimate is derived from the logarithm of the probability of the LM's answer. Similarly, for diversity evaluation, we guide the LM to identify a particular diversity trait. For instance, in an opinion-writing domain, the LM is prompted to gauge a solution's sentiment, with a requested response of "positive" or "negative". The log probability of these responses serves as our measure of solution diversity. Appendix A.18 shows the full prompts used in each domain to evaluate the solutions. We also introduce a novel categorical approach to evaluate solution attributes based on raw predictions of discrete labels in Section 4.4. **Discretization.** MAP-Elites typically partitions the grid into equally-sized bins, from the intuition that all parts of the behavior space are equally interesting.
However, we observe that when assigning a bin along the diversity axis (which in our approach is based on logits of LM AI feedback), qualitative changes in behavior do not correspond uniformly to changes in the logits (cf. Appendix A.27). This is likely due to the (non-linear) calibration behavior of instruction-tuned models in predicting the labels (as output tokens) of text passages (Jiang et al., 2021). Hence, we use custom non-uniform bins, which are denser towards the ends of the range. Qualitative analysis of the generated text showed that the non-uniform bins yielded better alignment with typical human perceptions of diversity changes, influenced by both the AI model's calibration and the domain-specific goals. **Models and Setup.** Details on the LMX generation model (Appendix A.20) and the finetuned AI feedback model (Appendix A.21), including how these LMs were trained, are given in the appendices. Additional default hyperparameters are described in Appendix A.23. ## 4 Experiments on Creative Writing Domain ### Setup: Opinion Writing, Short Stories To demonstrate the versatility of QDAIF in different applications of creative text evolution, we evaluated QDAIF on two domains: **Opinions** and **Stories**. The **Opinions** domain is focused on generating diverse, realistic pieces about one's opinions on eating vegetables and plant-based foods; the diversity measure is based on the sentiment of opinions on this topic (e.g. shown in example texts in the Figure 2 overview). For the **Stories** domain, the topic is a short story containing two characters: a spy and a politician. The diversity of stories is evaluated using a variety of measures based on AI Feedback, with the main ones being: **Stories - Genre** (Romance vs Horror) (1D archive), **Stories - Ending** (Happy vs Tragic) (1D archive), and **Stories - Genre and Ending** (2D archive). These domains capture the strengths and limitations of all methods, ranging from simple (**Opinions**) to challenging (**Stories - Genre and Ending**). We show in Figure 1 that the 2D domain is challenging, yet QDAIF still outperforms the baseline in filling the archive with diverse, high-quality stories. The AI feedback prompts are outlined in Appendix A.19. **Evaluation.** To assess the performance of methods in creative writing generation, we compute QD scores (Pugh et al., 2016), a standard metric used to measure the quality-diversity of the discovered corpus of texts. A QD score is defined as the sum of the highest quality values found in each bin. To understand the alignment between AI and human feedback for practical applications in QDAIF, we conducted a human evaluation study on selected elite samples from each method (chosen from the median QD score run out of 5 random seed runs). Using a Likert scale (Allen & Seaman, 2007) for quality assessment, we evaluate the capability of each method to produce a collection of diverse, high-quality texts. To do so, we calculate a "human" QD score, defined as the sum of quality scores given for all diversity categories identified by the annotator within the set. Furthermore, to understand how closely AI feedback aligns with human perspectives on subjective evaluation, we measured the agreement rates between human annotators and the AI, and between two human annotators. Details of the human study are specified in Appendix A.1, demonstrating the validity and advantages of AI feedback in generating human-like feedback on subjective quality and diversity measures.
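As an illustration of how the log-probability measures and non-uniform bins described above fit together, consider the following sketch; the log-probability inputs are assumed to come from the feedback LM's scored answer tokens, and the specific bin edges are hypothetical rather than the values used in the paper.

```python
import math
import bisect

def quality_score(logprob_yes: float) -> float:
    # Quality: probability that the feedback LM answers "yes" to
    # "is this a high-quality, on-topic text?".
    return math.exp(logprob_yes)

def sentiment_measure(logprob_positive: float, logprob_negative: float) -> float:
    # Diversity: probability mass on "positive", normalized over the
    # two allowed answers, giving a value in [0, 1].
    p_pos, p_neg = math.exp(logprob_positive), math.exp(logprob_negative)
    return p_pos / (p_pos + p_neg)

# Hypothetical non-uniform bin edges, denser towards the ends of [0, 1].
BIN_EDGES = [0.02, 0.05, 0.1, 0.2, 0.5, 0.8, 0.9, 0.95, 0.98]

def to_bin(measure: float) -> int:
    return bisect.bisect_right(BIN_EDGES, measure)  # archive bin index

# Example: a strongly positive opinion text lands in a near-positive bin.
print(to_bin(sentiment_measure(-0.1, -3.0)))
```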
### Comparisons between QDAIF and Baselines To evaluate the strengths and limitations of QDAIF in generating high-quality and diverse creative writing texts, we compared our method against the following baseline methods: **Fixed-Few-Shot**, **Shuffling-Few-Shot**, **Random-Search**, and **LMX Quality-Only**. The baselines carry out search in the following ways:

* **Fixed-Few-Shot**: Use a fixed few-shot prompt to sample many completions for creative domain texts (i.e. no iterative search).
* **Shuffling-Few-Shot**: Shuffle the in-context examples of the **Fixed-Few-Shot** prompt before sampling the completion from this prompt.
* **Random-Search**: Create a prompt pool of examples (initialized from prompts in Appendix A.19), add all completions to the pool, and choose few-shot examples from the growing pool (without pool size limit).
* **LMX Quality-Only**: Create and maintain a pool as in **Random-Search**, but keep only up to the 100 highest-quality completions (as evaluated by AI Feedback) in the pool (i.e. iterative search focused only on quality).

We choose a variety of baselines, some highlighting representative alternative approaches (e.g. few-shot prompting), and ablations of QDAIF, to validate our algorithmic choices. For example, **Fixed-Few-Shot** and **Shuffling-Few-Shot** enable the generation of different texts (relying on stochastic sampling), while being constrained to the output distribution of the fixed set of in-context examples. **Random-Search** and **LMX Quality-Only** are methods where the (prompt) pool of examples that we can sample from grows in size, starting from an initial pool of examples. In contrast to QDAIF, **Random-Search** is limited by the lack of constraints on the prompt pool, especially in maintaining the quality of the growing pool through evaluation. **LMX Quality-Only** adds a quality-based evaluation step with AI feedback that optimizes the quality of texts in the pool over time, so that the pool contains only texts with high quality scores, but it is not designed to encourage diversity in texts in comparison to QDAIF. For each domain and baseline described above, we recorded runs for 2000 iterations, repeated with 5 different random seeds. For measures that were not computed during the runs, AI feedback is used to compute the quality and diversity measures for each iteration sample at the end of the run. Following the established protocol for evaluating and comparing QD methods (Pugh et al., 2016), we compare the QD score of all methods. Similar to QDAIF, the baseline methods use hand-written examples in Appendix A.19, either in a fixed prompt or as the initial prompt pool. **Performance Comparison.** We report results comparing the QD score performance for the different methods of generating creative writing texts in Figure 3. We computed the mean and the bootstrapped 95% confidence intervals (CI) from 100k resamples, across 5 random-seed runs. We notice from the 95% CI that QDAIF achieves a significantly better QD score than all baseline methods in all creative writing domains. The broader range of stories and opinions generated by QDAIF is also evident qualitatively. For example, in the Stories - Genre and Ending domain, while the baseline methods deliver straightforward and more predictable stories of how the spy "pulled out a knife and stabbed [the politician] to death", QDAIF generates a more dramatic story of how the spy "transformed into the monster and killed everyone".
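For reference, the two statistics reported here can be computed as in the sketch below; this is a direct rendering of the stated definitions (QD score as the sum of the best quality in each bin, and a percentile bootstrap over per-seed scores), not code from the paper's evaluation pipeline.

```python
import numpy as np

def qd_score(archive: dict) -> float:
    # Sum of the highest quality value found in each occupied bin.
    return sum(q for q, _solution in archive.values())

def bootstrap_ci(per_seed_scores, n_resamples=100_000, alpha=0.05, seed=0):
    # Percentile bootstrap CI for the mean QD score across random-seed runs.
    rng = np.random.default_rng(seed)
    scores = np.asarray(per_seed_scores, dtype=float)
    idx = rng.integers(0, len(scores), size=(n_resamples, len(scores)))
    means = scores[idx].mean(axis=1)
    return np.quantile(means, [alpha / 2, 1 - alpha / 2])

print(bootstrap_ci([14.2, 15.1, 13.8, 14.9, 15.4]))  # e.g. five seed runs
```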
Figure 3: **QDAIF significantly outperforms baselines in QD score performance in all domains**. Performance stats with mean bootstrapped 95% CI, across 5 random seed runs. The maximum possible QD score is 20 (100 for the 2D archive (4th plot)). See Appendix A.7 for additional stats. **Random-Search** is the worst-performing method overall, with significantly lower QD score performance in **Opinions** and **Stories - Genre** compared to the best-performing baselines. Interestingly, **LMX Quality-Only** does not significantly outperform the methods using a fixed population prompt pool (**Fixed-Few-Shot** and **Shuffling-Few-Shot**). On **Stories - Genre**, **LMX Quality-Only** is often weaker than **Fixed-Few-Shot** and **Shuffling-Few-Shot**, in spite of the fact that quality optimization is occurring. The results show that single-objective optimization cannot guide the search for diverse, high-quality texts alone. **Human Feedback Evaluation.** We report the subjective quality and diversity from the human study of the set of texts (diverse samples of elites from different bins) discovered by each method, with comparisons in Table 1. Detailed discussion of the human study setup is provided in Appendix A.1. We observe that compared to baselines, QDAIF is competitive with or better at discovering diverse, high-quality texts in **Opinions** and **Stories**, evaluated with human preferences. This observation is reflected in the human QD score, measuring the perceived quality-diversity of the generated elite texts from each method in a single run. QDAIF sets also showed high agreement between humans and AI feedback on the diversity categories of presented texts, as well as between two annotators, competitive with **Fixed-Few-Shot**. Although the average perceived quality of texts is better from **Fixed-Few-Shot**, this is not enough to provide high-quality examples for different niches of outputs (i.e. a higher QD score). Furthermore, **Shuffling-Few-Shot** demonstrates even lower human evaluation scores, despite the use of the same fixed set of hand-written seeds, indicating lower robustness due to the use of different orderings of few-shot examples. Prior work hints at the sensitivity of LMs to few-shot prompt ordering, with task-solving capabilities varying significantly due to this ordering (Lu et al., 2021). The gap in human-evaluated performance between **Fixed-Few-Shot** and **Shuffling-Few-Shot** indicates that reliance on fixed prompts is less likely to enable reliable search, in contrast to the robustness shown by QDAIF. Additionally, **Random-Search** and **LMX Quality-Only** obtained even lower human evaluation scores, even though these methods either explore different prompts or optimize for the quality of texts. We provide a detailed discussion (for baseline methods) of findings from the subjective study of the discovered texts in Appendix A.12, as well as of the qualitative behavior of the text search over time in Appendix A.13. Through guided evolutionary search, QDAIF surpasses all baseline methods in terms of computed QD score performance, and is competitive (or better) compared to baselines according to human evaluation. ### Extensions to AI Feedback and Mutation Model In addition to the experiments with QDAIF described in previous sections, we investigated the effect of several variations of the method on performance. **LMX Model Size.** We used larger versions of the LMX models (30B and 70B) for mutation, and compared them to the performance of the 13B model (default).
While no relationship was found between model size and QD score, quality ratings from human feedback improved with outputs from larger models (described in detail in Appendix A.3). **Few-Shot Prompting for AI Feedback.** We compared the performance of QDAIF on the **Stories - Genre** domain when we prompted our AI feedback model for diversity measures given a few-shot prompt with the following settings: 2-shot, 4-shot, and 8-shot. Using a higher number of few-shot examples led to improvements in human quality ratings of texts. Interestingly, few-shot AI feedback caused QDAIF runs to return solutions where human feedback disagreed with AI feedback on the genre more often; intuitively, few-shot demonstrations would help LMs evaluate texts closer to human notions of genres. Future research is needed to explore whether this is a general phenomenon or is particular to the model we used and this particular domain. Further discussion and results are highlighted in Appendix A.4. \begin{table} \begin{tabular}{l c c c c} \hline \hline **Method** & \begin{tabular}{c} **Human** \\ **QD score** \\ \end{tabular} & \begin{tabular}{c} **Quality** \\ **rating** \\ \end{tabular} & \begin{tabular}{c} **Human-AI** \\ **agreement** \\ \end{tabular} & \begin{tabular}{c} **Human** \\ **agreement** \\ \end{tabular} \\ \hline Fixed-Few-Shot & 0.767 & 4.133 & 0.800 & 0.867 \\ Shuffling-Few-Shot & 0.696 & 3.500 & 0.700 & 0.667 \\ Random-Search & 0.606 & 3.300 & 0.733 & 0.600 \\ LMX Quality-Only & 0.650 & 3.533 & 0.633 & 0.733 \\ QDAIF (ours) & 0.772 & 3.900 & 0.833 & 0.800 \\ \hline \hline \end{tabular} \end{table} Table 1: **QDAIF is competitive/better in terms of Human QD score against baseline methods from the human evaluation study.** The stats are averaged across three domains: **Opinions**, **Stories - Genre**, and **Stories - Ending**. The Human QD score quantifies the perceived quality-diversity of a set of solutions returned by a method (i.e. high-quality texts for all the diversity categories of texts that the evaluator could identify). **Varying Initialization and Mutation Method.** Ideally, QDAIF would be simpler if it could be run without seed examples (e.g. requesting a story from an instruction-following LM). We investigated the potential of QDAIF when the solution population is initialized from zero-shot prompted generations, and evolved using LMX. Initial results on **Opinions** and **Stories** for a more challenging case (i.e. initializing texts with zero-shot-prompted outputs from the pre-trained LM used in LMX) along with prior work discussion are shown in Appendix A.5, highlighting comparable performance in terms of QD score, with divergence observed in alignment with human preferences. We also find that QDAIF, in these domains, is robust to the mechanism of generating variation. Appendix A.6 describes an alternative mutation method (based on a more gradual few-shot replacement operation) that is more effective in some circumstances, although in general offers comparable performance. We provide a detailed discussion (for QDAIF methods) of findings from the subjective study of the discovered texts in Appendix A.10, as well as of the qualitative behavior of the text search over time in Appendix A.11. There may be many ways of leveraging LMs to generate variation (including finetuning models; see Appendix A.8), and this is an exciting avenue of future research.
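To illustrate the few-shot feedback setting, a k-shot genre-evaluation prompt can be assembled as in the sketch below; the template, wording, and example stories are hypothetical stand-ins for the actual prompts listed in the appendix. The diversity measure is then read off from the log-probabilities of the label tokens, as in Section 3.

```python
# Hypothetical k-shot prompt builder for genre feedback (2-shot shown);
# the paper's actual templates are in its appendix and may differ.
EXAMPLES = [
    ("Their eyes met across the embassy ballroom...", "romance"),
    ("The corridor lights failed one by one...", "horror"),
]

def few_shot_genre_prompt(examples, candidate: str) -> str:
    parts = [
        f"Story: {text}\nIs this story romance or horror? {label}"
        for text, label in examples
    ]
    parts.append(f"Story: {candidate}\nIs this story romance or horror?")
    return "\n\n".join(parts)

print(few_shot_genre_prompt(EXAMPLES, "The spy slipped a note to the politician..."))
```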
### Evolving Poetry with Instruction Guidance The final experiment explores scaling up QDAIF to a more capable model, GPT-4 (OpenAI, 2023), in the challenging task of generating poetry, highlighting how QDAIF will benefit from advances in model capabilities. The aim of the **Poetry** domain is to produce high-quality poems with varying genres and emotional tones, unrestricted by topic. Here, the MAP-Elites archive has two axes of diversity: genre and tone, and they are delineated using categorical labels. The genre axis has the labels "haiku", "sonnet", "ballad", "limerick", and "hymn", while the tone axis has the labels of "happy", "dark", "mysterious", "romantic", and "reflective". We created a new mutation operator, **LMX-rewrite**, for this domain that leverages instruction-following to tell a model to translate a parent poem into an offspring with different diversity characteristics. To generate a new solution, we prompt GPT-4 to rewrite the selected poem into an inspired, different poem: _Inspired by this poem, write a new poem of very high, award winning quality with a different poetic genre (format and form) and tone compared to the poem above._ We used GPT-4 to determine the genre and tone of a poem and rate its quality. For quality, we prompt GPT-4 to _Rate the quality of the above poem on a scale from 1 to 10_. To determine the diversity attributes, we prompt GPT-4 to determine which genre or tone is the poem closest to. For example, to determine genre, we ask GPT-4: _What genre is this poem closest to from the following list: ["haiku", "sonnet", "ballad", "limerick", "hymn"]?_ Appendix A.26 shows the full prompts and setup.

Figure 4: **QDAIF (LMX-guided) (left) covers the space of poetry with high-quality solutions (on a rating scale), with poems matching the closest bins.** QDAIF solutions take qualitative inspiration from the seed poem's imagery of _"fields of green waves"_ in Appendix A.26 while giving meaningfully diverse kinds of poems across the search space. QDAIF (LMX-rewrite) (not shown) also covers more of the space of diverse, high-quality poems compared to **Random-Poems** (right).

We observed high consistency in GPT-4's responses; across multiple LM calls, quality ratings for the same poem fluctuated by no more than a single point. We show qualitative examples of poems with their AI feedback evaluations in Appendix A.16. In this setup, all methods are run for 200 iterations, and QDAIF is benchmarked against the baseline, **Random-Poems**, as well as an ablation method, **Fixed Seed Rewrite**. **Random-Poems** simply generates 200 random poems. **Fixed Seed Rewrite** rewrites only the seed poem in Appendix A.26, without evolutionary search. We found that QDAIF achieves a higher QD score of 130 (CI: 118 - 145) in comparison to **Random-Poems** with 76 (CI: 67 - 85) and **Fixed Seed Rewrite** with 99 (CI: 72 - 117). We observed a similar trend (with a wider performance gap between QDAIF and other methods) when we used GPT-3.5-Turbo for the generation step instead of GPT-4 while keeping GPT-4 as the evaluator (cf. Appendix A.14). QDAIF was shown to have greater QD score performance than the other methods according to the Mann-Whitney U Test (\(p\leq 0.05\)). Although QDAIF with **LMX-rewrite** leaves some of the bins empty like **Random-Poems** (e.g. GPT-4 doesn't recognize that limericks are also a valid genre for diverse poems during search), we can adapt QDAIF by adding guidance on the desired (randomly chosen) genre and tone for rewriting (**LMX-guided**), as sketched below.
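The full prompts are in Appendix A.26; purely as an illustration (the exact wording of the guided instruction is an assumption here, not a quote from the paper), the **LMX-guided** operator can be thought of as the rewrite prompt above augmented with an explicitly sampled target genre and tone:

```python
import random

# Diversity labels from the Poetry archive axes (these lists are from the text).
GENRES = ["haiku", "sonnet", "ballad", "limerick", "hymn"]
TONES = ["happy", "dark", "mysterious", "romantic", "reflective"]

def guided_rewrite_prompt(parent_poem: str) -> str:
    """Hypothetical LMX-guided prompt: the base rewrite instruction plus a
    randomly chosen target genre and tone, steering search toward empty bins."""
    genre, tone = random.choice(GENRES), random.choice(TONES)
    return (
        f"{parent_poem}\n\n"
        "Inspired by this poem, write a new poem of very high, award winning "
        f"quality, written as a {tone} {genre}."
    )
```

Sampling the target bin uniformly at random is one simple design choice; it encourages coverage of genre-tone combinations that undirected rewriting tends to miss.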
The performance of this method of QDAIF is on par with a **Targeted-Poems** approach (generating high-quality poems of randomly chosen genre and tone per step) in terms of QD score, and even better when using an older version of GPT-4 (cf. Appendix A.14). Furthermore, we found that the rewriting step is useful for generating poems that can meaningfully preserve inspiration from parent poems, enabling users to control the outcomes of search, and giving us a tool that can mimic stepping stones of human innovation for AI-assisted creative applications (Stanley et al., 2017) (see Appendix A.15). Figure 4 highlights the potential of QDAIF with GPT-4, especially in controlling a set of solutions subjectively aligned with AI feedback labels. Overall, the results highlight the challenge of obtaining the desired diversity in outputs from standard prompting of models like GPT-4 without explicit guidance (Renda et al., 2023; Friedrich et al., 2023). They also emphasize the importance of combining a rewriting mutation operator with an evolving population of solutions, which leads to more diverse, high-quality outcomes that form more natural chains of inspiration. ## 5 Discussion and Conclusion This paper introduces QDAIF, a quality-diversity method that aims to discover diverse and high-quality solutions in qualitative domains, by leveraging advances in foundation models to evaluate the quality and diversity of generated individuals. The paper's results highlight that QDAIF can succeed at its aims, generating solutions that align with human perception of quality and diversity. We note limitations with QDAIF that motivate future work. Firstly, we suspect that reward hacking happens when using LMs to generate feedback. Our human evaluation investigation shows that while the LM's evaluation of quality mostly aligns with human perception, the correlation drops when the evaluated quality is in the range 0.995 to 1 (cf. Figure 5). The generated texts might have exploited certain attributes or phrasings that lead an LM to give a high quality estimate even when humans would not agree that the text is good. This is a common issue highlighted by other works when using AI models as classifiers or evaluators (Nguyen et al., 2015), highlighting risks of open-ended search that remain to be tackled (Ecoffet et al., 2020). One method to address this limitation could be to use RLHF finetuning (Ouyang et al., 2022) to produce LMs that can detect and mitigate adversarially generated texts. RLHF models are finetuned to generate responses to tasks in order to maximize human preferences on instruction-following, potentially leading to improvements in the evaluation of creative writing, a human-centric medium of innovation. Another possible approach could be to use an ensemble of different AI models to evaluate solutions, rather than relying only on one; the hope would be that robustness would result from models having uncorrelated blind spots. Furthermore, although QDAIF makes it easy to specify qualitative aspects of diversity through natural language prompts, it still requires a researcher to define what axes of diversity they are most interested in. For example, if we applied QDAIF to generate short stories of different genres (e.g. comparing horror vs. romance), it would not autonomously explore other important attributes that a writer might care about (e.g. first-person vs. third-person perspective) unless explicitly specified. When we tested different diversity measures in the Stories domain, such pathologies were observed (Appendix A.28).
For example, when using "hero spy vs. hero politician" as the diversity measure, many of the solutions generated tend to neglect the interaction between the spy and the politician, focusing solely on the character that is meant to be the hero. However, someone writing a short story about a spy and a politician would naturally care about how the characters interact with one another. One possible way to automatically determine interesting diversity measures is to utilize the human notions of interestingness distilled into foundation models (Zhang et al., 2023). That is, we could ask LMs to suggest interesting diversity measures that a human would typically care about in the domain, thereby enabling a more autonomous creative search. In conclusion, we show that QDAIF is a promising approach to open-ended search that can reveal unexplored creative writing spaces, surpassing alternative text generation methods in generating diverse high-quality natural language text. AI feedback, Evolution through Large Models (ELM), and quality-diversity search (QD) were found to be essential ingredients for enhanced AI systems that can innovate in subjective spaces, similar to past research on Innovation Engines (Nguyen et al., 2016, 2015b). In fact, we see AI feedback as a general ingredient for open-ended search for solutions in multimodal domains, capable of following instructions beyond text (Liu et al., 2023). QDAIF can be easily extended to multi-modal domains (e.g. vision-language) for synthetic data generation and evaluation, building on top of recent advances in the field (Eichenberg et al., 2021; Alayrac et al., 2022; Bellagente et al., 2023; Driess et al., 2023; Bhatt et al., 2023; Sudhakaran et al., 2023; Todd et al., 2023). We see many possibilities from QDAIF to build creative search systems with evaluation, diversification, and improvement capabilities, bringing us closer to AI that can support and extend human innovation. ## Ethics Statement Human evaluations were performed by the co-authors of this paper and select colleagues. All human evaluators provided informed consent, and their feedback and assessments were obtained without coercion or bias. We took action to prevent bias by presenting evaluators with texts to evaluate in a blind setting, with only the instructions for the study annotation task presented (to carefully read through the presented texts, then give a quality score and a label of the characteristic that best matches the texts). We show a detailed setup for the human study in Appendix A.1. For transparency, we provide the full set of results with caption descriptions from our human evaluation. In the **Opinions** domain, Tables 12-15 contain the human evaluation results for sets from baseline methods, Tables 28-31 contain the human evaluation results for sets from QDAIF methods, and Tables 24-27 contain the human evaluation results for sets from embedding feedback QD methods. In the **Stories - Genre** domain, Tables 16-19 contain the human evaluation results for sets from baseline methods, and Tables 32-35 contain the human evaluation results for sets from QDAIF methods. For the **Stories - Ending** domain, Tables 20-23 contain the human evaluation results for sets from baseline methods, and Tables 36-39 contain the human evaluation results for sets from QDAIF methods. ## Author Contributions Herbie developed the setup and framework for the Poetry domain experiments and base library for research. 
Andrew developed the setup and experiments for the Opinions and Stories domains, and contributed to extended studies, visualization, and analysis across experiments in the paper. Hannah contributed additional experimentation in the Stories domain, in addition to coordinating part of human evaluation studies. Jenny contributed qualitative analysis across studied domains. Koen developed visualization scripts used in Opinions and Stories domain experiments. Marco contributed to part of the technical implementation and ideation. Andrew conducted the blind human evaluation study, and Gregory advised on the conduct and analysis of the human study. Joel, Jeff, and Ken initiated early ideation for this work. Joel, Gregory, Jeff, and Ken advised and guided. Andrew, Jenny, Herbie, and Joel wrote the manuscript with edits and feedback from all authors. ## Acknowledgements We thank Robert Baldock, Samuel Weinbach, Souradeep Nanda, Jan Zierstek, and Andres Felipe Cruz Salinas for insightful discussions and feedback within the lab at Aleph Alpha. We also thank Katherine Hardgrave, David Nugent, Daniel Flood, and Formula Trinity Autonomous for the inspiration that seeded the momentum leading up to this work.
2308.09220
Interior and boundary mixed norm derivative estimates for nonstationary Stokes equations
We obtain weighted mixed norm Sobolev estimates in the whole space for nonstationary Stokes equations in divergence and nondivergence form with variable viscosity coefficients that are merely measurable in the time variable and have small mean oscillation in spatial variables in small cylinders. As an application, we prove interior mixed norm derivative estimates for solutions to both equations. We also discuss boundary mixed norm Hessian estimates for solutions to equations in nondivergence form under the Lions boundary conditions.
Hongjie Dong, Hyunwoo Kwon
2023-08-18T00:39:35Z
http://arxiv.org/abs/2308.09220v1
# Interior and boundary mixed norm derivative estimates for nonstationary Stokes equations ###### Abstract. We obtain weighted mixed norm Sobolev estimates in the whole space for nonstationary Stokes equations in divergence and nondivergence form with variable viscosity coefficients that are merely measurable in the time variable and have small mean oscillation in spatial variables in small cylinders. As an application, we prove interior mixed norm derivative estimates for solutions to both equations. We also discuss boundary mixed norm Hessian estimates for solutions to equations in nondivergence form under the Lions boundary conditions. Key words and phrases: Time-dependent Stokes system; weighted estimates; interior and boundary Lebesgue mixed-norm estimates 2020 Mathematics Subject Classification: 76D03; 76D07; 35K51; 35B45 H. Dong was partially supported by Simons Fellows Award 007638 and the NSF under agreement DMS-2055244. H. Kwon was partially supported by the NSF under agreement DMS-2055244. property (see e.g. [7]). These equations are also naturally introduced when we consider Stokes equations on manifolds (see e.g. [18, 54]). For stationary Stokes equations, there is plenty of literature on Sobolev type estimates. When the viscosity coefficient is constant, Cattabriga [9] first obtained \(W^{1}_{q}\)-estimates when \(d=3\) and \(1<q<\infty\) in a smooth domain. Later, this was extended by Amrouche-Girault [4] to bounded \(C^{1,1}\)-domains, \(d\geq 2\), and \(1<q<\infty\). This was further extended to bounded Lipschitz domains with small Lipschitz constants by Galdi-Simader-Sohr [33]. A complete solvability result was obtained by Dindoš-Mitrea [18] on arbitrary bounded Lipschitz domains in \(\mathbb{R}^{d}\), \(d\geq 2\). We refer to [32] for exterior problems of stationary Stokes equations. When the viscosity coefficients are variable, Dong-Kim [21, 22] obtained \(W^{1}_{q}\)-estimates and weighted \(W^{1}_{q}\)-estimates on Reifenberg flat domains even if the viscosity coefficient is merely measurable in one direction and has a small BMO seminorm in the orthogonal directions. Many authors have studied mixed-norm Sobolev estimates for nonstationary Stokes equations in various settings. When \(a^{ij}=\delta^{ij}\), Solonnikov [62] obtained \(L_{q}\)-estimates and solvability results for (1.1) under the Dirichlet boundary conditions on the half-space and bounded \(C^{2}\)-domains. Later, this was extended by Giga-Sohr [38] to mixed-norm Sobolev estimates, including exterior domains. An elementary proof was given by Maremonti-Solonnikov [53], and later Geissert et al. [34] gave a different proof via \(H^{\infty}\)-calculus. For the problem (1.3) under the Dirichlet boundary conditions, Giga-Giga-Sohr [37] obtained \(L_{q}\)-estimates on half-spaces without estimating the pressure. Later, Koch-Solonnikov [46] gave more precise \(L_{q}\)-estimates for the problem (1.3) on half-spaces, including estimates for the pressure. These results were later extended by Chang-Kang [10] to anisotropic Sobolev spaces on the half-space under the Dirichlet boundary conditions. For weighted estimates, Fröhlich [31] obtained weighted mixed-norm estimates by employing an \(H^{\infty}\)-calculus approach based on the Stokes resolvent estimates due to Farwig-Sohr [29] and Fröhlich [30]. In the case of variable coefficients, there are relatively few results on mixed-norm Sobolev estimates.
Solonnikov [63] first obtained \(L_{q}\)-estimates and solvability results for this problem when \(a^{ij}\) is continuous in \(t\) and belongs to \(W^{1}_{r}\) in \(x\) for some \(r\) when the domain is bounded. Later, Abels-Terasawa [2] and Abels [1] extended this result to mixed-norm estimates on several unbounded domains. There are also results on \(L_{q}\)-estimates under different assumptions on the viscosity part. See Bothe-Prüss [8], Prüss [57], Prüss-Simonett [58], and the references therein. Recently, Tolksdorf [65] obtained mixed-norm Sobolev estimates on the whole space with a restricted range of \(q\) when \(a^{ij}\) is a bounded measurable function depending only on spatial variables. We also note that the variable density case was considered by Ladyzhenskaya-Solonnikov [49] and Danchin [17]. In the case of the heat equation \(\partial_{t}v-\Delta v=0\), it is well known that \[\|D^{2}v\|_{L_{2}(Q_{1/2})}\leq N\|v\|_{L_{2}(Q_{1})}\] for some constant \(N=N(d)>0\). Here \(Q_{r}(t_{0},x_{0})\) denotes the parabolic cylinder centered at \((t_{0},x_{0})\in\mathbb{R}^{d+1}\) with radius \(r>0\): \[Q_{r}(t_{0},x_{0})=(t_{0}-r^{2},t_{0})\times B_{r}(x_{0}),\] where \(B_{r}(x_{0})\) is the ball in \(\mathbb{R}^{d}\) of radius \(r\) centered at \(x_{0}\in\mathbb{R}^{d}\). When \((t_{0},x_{0})=(0,0)\), we drop \((t_{0},x_{0})\) in the notation. However, it is nontrivial to show the validity of such estimates for nonstationary Stokes equations due to the nonlocal effect of the pressure. When \(a^{ij}=\delta^{ij}\), Chen-Strain-Yau-Tsai [12] proved that if \(1<s,q<\infty\), \(f\in L_{s,q}(Q_{1})^{d}\), \(g=0\), and \(u\in L_{s,1}(Q_{1})^{d}\) is a very weak solution to (1.1) in \(Q_{1}\), then \(D^{2}u\in L_{s,q}(Q_{1/2})\) and \[\|D^{2}u\|_{L_{s,q}(Q_{1/2})}\leq N\left(\|u\|_{L_{s,1}(Q_{1})}+\|f\|_{L_{s,q}( Q_{1})}\right)\] for some constant \(N=N(d,s,q)>0\). This inequality was independently proved by Jin [42] and Wolf [66] when \(f=g=0\) and \(s=q=2\). We also note that Hu-Li-Wang [40] obtained interior \(L_{q}\)-estimates via a different approach without using the representation formula for Stokes equations. Recently, Dong-Phan [26] obtained such estimates even if \(a^{ij}\) is not constant and \(\operatorname{div}u=g\). More precisely, if \(1<s,q<\infty\), \(f\in L_{s,q}(Q_{1})^{d}\), \(g\in W^{0,1}_{s,q}(Q_{1})\), and \((u,p)\in\tilde{W}^{1,2}_{s,q}(Q_{1})^{d}\times W^{0,1}_{1}(Q_{1})\) is a strong solution to (1.1) in \(Q_{1}\), then under the assumption that \(a^{ij}\) has small mean oscillation in spatial variables in small cylinders (see Assumption 2.4), they proved that there exists a constant \(N=N(d,s,q,\nu,R_{0})>0\) such that \[\|D^{2}u\|_{L_{s,q}(Q_{1/2})}\leq N\left(\|u\|_{L_{s,1}(Q_{1})}+\|f\|_{L_{s,q}( Q_{1})}+\|Dg\|_{L_{s,q}(Q_{1})}\right). \tag{1.4}\] Here \(\tilde{W}^{1,2}_{s,q}(Q_{1})\) is the space of all functions \(u\) such that \(D^{k}u\in L_{s,q}(Q_{1})\), \(k=0,1,2\), and \(u_{t}\in L_{1}(Q_{1})\) (see the lines above Theorem 2.7 for the definition of \(\tilde{W}^{1,2}_{s,q}(Q_{1})\)). Similarly, gradient estimates were obtained for the problem (1.3) even if \(a^{ij}\) is unbounded. Note that these are only _a priori_ estimates. In the same paper, they applied interior regularity results for (1.3) to the incompressible Navier-Stokes equations to improve known regularity criteria results.
Very recently, via a level set argument as in [40], Dong-Li [27] obtained interior \(L_{q}\)-regularity for Stokes equations in both divergence form and nondivergence form under the stronger assumption that the viscosity coefficients are Hölder continuous in spatial variables. For boundary estimates, Seregin [60] proved the local spatial smoothing property of strong solutions to nonstationary Stokes equations under the Dirichlet boundary conditions (or no-slip boundary conditions) when \(\partial_{t}u,D^{2}u,\nabla p\in L_{s,q}(Q_{1}^{+})\), where \(Q_{r}^{+}=Q_{r}\cap\mathbb{R}_{+}^{d}\). Later, several counterexamples were constructed to show that it is not possible to have spatial smoothing of such solutions under the Dirichlet boundary conditions if we do not impose regularity conditions on the pressure (see Kang [43] and Seregin-Šverák [59]). Related to our paper, Chang-Kang [11] proved that boundary gradient estimates may fail for solutions to nonstationary Stokes equations under the Dirichlet boundary conditions. It is natural to ask what type of boundary conditions may yield boundary derivative estimates for solutions to nonstationary Stokes equations. One answer was given by Dong-Kim-Phan [24], who proved that boundary mixed-norm Hessian estimates for solutions to (1.1) on \(Q_{1}^{+}\) are possible if we consider the Lions boundary conditions (see (2.3)), which were introduced by J.-L. Lions in [51, pp. 87-98] (see also P.-L. Lions in [52, pp. 129-131]). Such boundary conditions are a special case of the Navier boundary conditions, which were introduced by Navier [56] in 1827: \[u\cdot n=0,\quad(2\mathbb{D}(u)n)_{\tau}+\alpha u_{\tau}=0\quad\text{on } \partial\Omega,\] where \(\alpha\geq 0\) is the friction coefficient, \(n\) is the outer unit normal vector to the boundary \(\partial\Omega\), \(v_{\tau}=v-(v\cdot n)n\) is the tangential component of \(v\) on the boundary \(\partial\Omega\), and \(\mathbb{D}(u)\) is the deformation tensor of \(u\) defined by \([\mathbb{D}(u)]^{ij}=(D_{i}u^{j}+D_{j}u^{i})/2\). Many researchers have studied Stokes and Navier-Stokes equations under such boundary conditions for mathematical reasons and physical applications. See, for instance, [3, 5, 13, 16, 35, 36, 44, 50] and references therein. Very recently, Chen-Liang-Tsai [14] proved that gradient estimates for very weak solutions to nonstationary Stokes equations on \(Q_{1}^{+}\) are possible under the Navier boundary conditions when \(\operatorname{div}u=0\) and \(a^{ij}=\delta^{ij}\). The purpose of this paper is two-fold. We prove weighted mixed-norm Sobolev estimates and solvability of the Cauchy problems for (1.1) and (1.3) in \((0,T)\times\mathbb{R}^{d}\) when the viscosity coefficients satisfy the \(\operatorname{VMO}_{x}\) assumption (see Assumption 2.4). As an application of these weighted mixed-norm estimates, we prove that if \((u,p)\in\tilde{W}_{q_{0}}^{1,2}(Q_{1})^{d}\times W_{1}^{0,1}(Q_{1})\) is a strong solution to (1.1) for some \(1<q_{0}<\infty\), \(f\in L_{s,q}(Q_{1})^{d}\), and \(g\in W_{s,q}^{0,1}(Q_{1})\), then \(D^{2}u\in L_{s,q}(Q_{1/2})\) and (1.4) holds. For Stokes equations in nondivergence form, we also prove similar results under the Lions boundary conditions. In contrast to Dong-Phan [26] and Dong-Kim-Phan [24], we do not a priori assume that our strong solution \(u\) to (1.1) belongs to \(\tilde{W}_{s,q}^{1,2}\). A similar result holds for weak solutions \(u\) to (1.3) in the interior case. Let us briefly outline the proofs of the main theorems.
To prove the weighted mixed-norm Sobolev estimates (Theorem 2.5) in \((0,T)\times\mathbb{R}^{d}\), we employ the perturbation technique utilizing the Fefferman-Stein theorem, which was first introduced by Krylov [47] (see also [48]). To do so, we need weighted mixed-norm Sobolev estimates for Stokes equations with measurable coefficients depending only on \(t\) (Theorem 4.1), which are not available in the literature. Such coefficients are referred to as _simple coefficients_ in this article. To obtain the solvability, we consider the associated vorticity equation to remove the pressure term, and then we recover a solution using the divergence equation and the Newtonian potential. A proof is given in Appendix A. Using this solvability result, we prove a mean oscillation estimate for the gradient of the vorticity of a solution to (1.1) to derive a priori estimates for solutions to (1.1) by using the generalized Fefferman-Stein theorem established in [20] (see Lemma 3.5). Then the desired result follows from the method of continuity together with the solvability results for Stokes equations with simple coefficients. A similar argument is also applied to Stokes equations in divergence form (Theorem 2.6) with some modifications. To prove the interior mixed-norm Hessian estimates (Theorem 2.7) of solutions to equations in nondivergence form, we mollify Equation (1.1) in space and time to obtain \[\begin{cases}\partial_{t}u^{(\varepsilon)}-a^{ij}D_{ij}u^{( \varepsilon)}+\nabla p^{(\varepsilon)}&=f^{(\varepsilon)}+[a^{ij}D_{ij}u]^{( \varepsilon)}-a^{ij}D_{ij}u^{(\varepsilon)},\\ \operatorname{div}u^{(\varepsilon)}&=g^{(\varepsilon)}\end{cases}\] and then decompose \(u^{(\varepsilon)}=u_{1}^{\varepsilon}+u_{2}^{\varepsilon}\) and \(p^{(\varepsilon)}=p_{1}^{\varepsilon}+p_{2}^{\varepsilon}\), where \((u_{1}^{\varepsilon},p_{1}^{\varepsilon})\) satisfies the initial value problem for (1.1) with \(u_{1}^{\varepsilon}(-1,\cdot)=0\) on \(\mathbb{R}^{d}\) and with \(f\) and \(g\) replaced by \(h^{\varepsilon}:=([a^{ij}D_{ij}u]^{(\varepsilon)}-a^{ij}D_{ij}u^{(\varepsilon) })1_{Q_{3/4}}\) and zero, respectively. Using the aforementioned weighted solvability results, we will show that \(u_{1}^{\varepsilon}\in W_{s_{1},q_{1}}^{1,2}((-1,0)\times\mathbb{R}^{d})\) (see Lemma 7.3) for any \(1<s_{1},q_{1}<\infty\). Moreover, it follows from the parabolic Sobolev embedding \(W_{q_{0}}^{1,2}(Q_{1})\hookrightarrow L_{s,1}(Q_{1})\) and the \(L_{q_{0}}\)-estimates for \(u_{1}^{\varepsilon}\) that \(u_{1}^{\varepsilon}\to 0\) in \(L_{s,1}(Q_{1})\). Then we can apply the result of Dong-Phan [26] mentioned above to \(u_{2}^{\varepsilon}\) to get (1.4) with \((u,f,g)\) replaced by \((u_{2}^{\varepsilon},f^{(\varepsilon)},g^{(\varepsilon)})\). Then, using a weak compactness argument in \(L_{s,q}(Q_{1/2})\), we can pass to the limit to show that, up to a subsequence, \(D^{2}u_{2}^{\varepsilon_{j}}\to D^{2}u\) weakly in \(L_{s,q}(Q_{1/2})\). This implies the desired result in Theorem 2.7. To prove the interior gradient estimates (Theorem 2.8) of solutions to equations in divergence form, we follow a similar strategy to the nondivergence case. However, the previous strategy cannot be applied directly since, unlike the space \(W^{1,2}_{q_{0}}(Q_{1})\), the space \(\mathcal{H}^{1}_{q_{0}}(Q_{1})\) is not always embedded into \(L_{s,1}(Q_{1})\) (see Section 2 for the definition of \(\mathcal{H}^{1}_{q_{0}}(Q_{1})\)).
To overcome this issue, suppose first that \(s>q_{0}\). Since \(p_{1}^{\varepsilon},Du_{1}^{\varepsilon}\in L_{q_{0}}((-1,0)\times\mathbb{R}^{d})\), \((u_{1}^{\varepsilon})_{t}\) can be written as the divergence of a matrix field \(\mathbf{G}^{\varepsilon}\in L_{q_{0}}(Q_{3/4})^{d\times d}\). Then by using the recent embedding result due to Kim-Ryu-Woo [45] (see Lemma 3.1) and \(L_{q_{0}}\)-estimates for \(u_{1}^{\varepsilon}\), there exists \(q_{0}<s_{1}\leq s\) such that \(u_{1}^{\varepsilon}\in L_{s_{1},q_{0}}(Q_{3/4})\) and \(u_{1}^{\varepsilon}\to 0\) in \(L_{s_{1},q_{0}}(Q_{3/4})\) as \(\varepsilon\to 0\). Hence, by an argument similar to the one used in the nondivergence case, we can show that \(Du\in L_{s_{1},q}(Q_{3/4})\). Then by applying the above argument again, we can prove that \(Du\in L_{s,q}(Q_{1/2})\) together with the analogue of (1.4) for \(Du\). The case \(s\leq q_{0}\) is easy to prove. This outlines the proofs of Theorems 2.7 and 2.8. Lastly, this approach also enables us to show the boundary mixed-norm Hessian estimates of strong solutions to (1.1) in \(Q_{1}^{+}:=(-1,0)\times\{y:|y|<1,y_{d}>0\}\) under the Lions boundary conditions. See Section 8. However, we mainly focus on the interior derivative estimates for simplicity. This paper proceeds in eight sections and three appendix sections. In Section 2, we introduce some notation and state the main results of this paper. In Section 3, we summarize known results on function spaces with and without weights, potential estimates, and solvability results on the divergence equation and parabolic equations with simple coefficients. In Section 4, we derive solvability results in weighted mixed-norm Sobolev spaces and Hölder estimates for solutions to (1.1) and (1.3) with simple coefficients in \((0,T)\times\mathbb{R}^{d}\). Then we prove weighted mixed-norm solvability results for (1.1) and (1.3) in \((0,T)\times\mathbb{R}^{d}\) with variable viscosity coefficients in Sections 5 and 6, respectively. In Section 7, we prove the interior mixed-norm derivative estimates (Theorems 2.7 and 2.8) for solutions to (1.1) and (1.3), respectively. In Section 8, we give a brief description of proving boundary mixed-norm Hessian estimates for solutions to (1.1) under the Lions boundary conditions. Finally, we give the proofs of the solvability of Stokes equations with simple coefficients in mixed-norm weighted Sobolev spaces in Appendices A and B, respectively. ## 2. Notation and Main results ### Notation and assumptions By \(N=N(p_{1},\dots,p_{k})\), we denote a generic positive constant depending only on the parameters \(p_{1},\dots,p_{k}\). For two Banach spaces \(X\) and \(Y\), we write \(X\hookrightarrow Y\) if \(X\subset Y\) and there exists a constant \(N\) such that \(\|u\|_{Y}\leq N\|u\|_{X}\) for all \(u\in X\). Let \(\Omega\) be any domain in \(\mathbb{R}^{d}\), where \(\mathbb{R}^{d}\) is the standard \(d\)-dimensional Euclidean space of points \(x=(x_{1},\dots,x_{d})\), \(d\geq 2\). For \(0<T<\infty\), we write \(\mathbb{R}_{T}:=(0,T)\) and \(\mathbb{R}^{d}_{T}:=\mathbb{R}_{T}\times\mathbb{R}^{d}\). When \(T=\infty\), we write \(\mathbb{R}^{d}_{\infty}=\mathbb{R}^{d+1}\). We denote the point in \(\mathbb{R}^{d}_{T}\) by \((t,x)=(t,x^{\prime},x_{d})\), where \(x^{\prime}\in\mathbb{R}^{d-1}\) and \(x_{d}\in\mathbb{R}\). We also define \(\mathbb{R}^{d}_{+}:=\{(y^{\prime},y_{d}):y^{\prime}\in\mathbb{R}^{d-1},y_{d}>0\}\).
For \(r>0\) and \((t,x)\in\mathbb{R}^{d+1}\), we write \[Q_{r}(t,x):=(t-r^{2},t)\times B_{r}(x),\quad Q_{r}=Q_{r}(0,0)\] where \[B_{r}(x):=\{y\in\mathbb{R}^{d}:|x-y|<r\}.\] For \((t,x)\in\mathbb{R}^{d}_{+}\), we define \(Q_{r}^{+}(t,x)=Q_{r}(t,x)\cap\mathbb{R}^{d}_{+}\) and we write \(B_{r}^{\prime}(x^{\prime})\) for the \((d-1)\)-dimensional ball in \(\mathbb{R}^{d-1}\) with radius \(r\) centered at \(x^{\prime}\in\mathbb{R}^{d-1}\). Let \(\mathbb{N}_{0}=\{0,1,2,\dots\}\) be the set of nonnegative integers. For multi-indices \(\gamma=(\gamma_{1},\dots,\gamma_{d})\in\mathbb{N}_{0}^{d}\) and a function \(u\), we define \[u_{x_{i}}=\frac{\partial u}{\partial x_{i}}=D_{i}u,\quad(1\leq i\leq d),\quad D ^{\gamma}u=D_{1}^{\gamma_{1}}\cdots D_{d}^{\gamma_{d}}u,\quad x^{\gamma}=(x_{ 1})^{\gamma_{1}}\cdots(x_{d})^{\gamma_{d}}.\] For \(m\in\mathbb{N}\), we use \(D^{m}\) to denote a partial derivative of order \(m\) with respect to \(x\). For a function \(u\), we define \[\nabla u:=(D_{1}u,\dots,D_{d}u)\quad\text{and}\quad\nabla^{2}u:=[D_{ij}u]_{i, j=1}^{d}.\] Given a weakly differentiable vector field \(u=(u^{1},\dots,u^{d})\), define its _gradient_ \(\nabla u\) and _vorticity_ \(\nabla\times u\) by \[(\nabla u)^{ij}:=D_{j}u^{i},\quad\text{and}\quad[\nabla\times u]_{ij}:=D_{i}u^ {j}-D_{j}u^{i},\quad 1\leq i,j\leq d,\] respectively. We use bold roman letters to denote \(2\)-tensors, e.g., \(\mathbf{F}:(0,T)\times\mathbb{R}^{d}\to\mathbb{R}^{d\times d}\). For two vectors \(u=(u^{1},\dots,u^{d})\) and \(v=(v^{1},\dots,v^{d})\), their inner product is defined by \[u\cdot v:=\sum_{i=1}^{d}u^{i}v^{i}.\] For two \(2\)-tensors \(\mathbf{F}=[F^{ij}]_{i,j=1}^{d}\) and \(\mathbf{G}=[G^{ij}]_{i,j=1}^{d}\), their inner product is defined by \[\mathbf{F}:\mathbf{G}:=\sum_{i,j=1}^{d}F^{ij}G^{ij}.\] For a measurable set \(A\) of \(\mathbb{R}^{d}\), we use \(|A|\) to denote the Lebesgue measure of \(A\) and \(1_{A}\) the indicator of \(A\). If \(0<|A|<\infty\), we write \[\fint_{A}f\,dx=(f)_{A}:=\frac{1}{|A|}\int_{A}f\,dx.\] A function \(w\) is a _weight_ on \(\mathbb{R}^{d}\) if \(w\) is nonnegative and \(w>0\) a.e. on \(\mathbb{R}^{d}\). For \(1<q<\infty\), we write \(w\in A_{q}(\mathbb{R}^{d},dx)\) if \[[w]_{A_{q}}:=\sup_{x_{0}\in\mathbb{R}^{d},r>0}\left(\fint_{B_{r}(x_{0})}w\,dx \right)\left(\fint_{B_{r}(x_{0})}w^{-1/(q-1)}\,dx\right)^{q-1}<\infty.\] See basic properties of \(A_{q}\)-weights in Subsection 3.1. We can also define \(A_{1}\) weights, see e.g. [39, Chapter 7]. For \(k=1,2,\dots\), \(1\leq q<\infty\), and \(w\in A_{q}(\mathbb{R}^{d},dx)\), we define \[W_{q,w}^{k}(\Omega)=\{u:u,Du,\dots,D^{k}u\in L_{q,w}(\Omega)\}.\] By \(C_{0}^{\infty}(U)\), we denote the set of infinitely differentiable functions with compact support in \(U\). For \(-\infty<S<T<\infty\), we write \[C_{0}^{\infty}([S,T)\times\Omega):=\{u|_{(S,T)\times\Omega}:u\in C_{0}^{ \infty}((-\infty,T)\times\Omega)\}.\] We denote by \(W_{q,w,0}^{1}(\Omega)\) the closure of \(C_{0}^{\infty}(\Omega)\) under \(\|\cdot\|_{W_{q,w}^{1}(\Omega)}\).
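As a concrete illustration of the definition of \([w]_{A_{q}}\) above, consider the power weight \(w(x)=|x|^{\alpha}\) with \(-d<\alpha<d(q-1)\) (cf. Proposition 3.2 (iv) in Subsection 3.1 below). The following computation is only a sketch, restricted to balls centered at the origin; off-center balls are handled by a standard comparison argument. Integrating in polar coordinates, \[\fint_{B_{r}(0)}|x|^{\alpha}\,dx=\frac{d}{\alpha+d}\,r^{\alpha}\quad\text{and}\quad\fint_{B_{r}(0)}|x|^{-\alpha/(q-1)}\,dx=\frac{d}{d-\alpha/(q-1)}\,r^{-\alpha/(q-1)},\] where the restrictions \(\alpha>-d\) and \(\alpha<d(q-1)\) ensure that both integrals are finite. Hence \[\left(\fint_{B_{r}(0)}w\,dx\right)\left(\fint_{B_{r}(0)}w^{-1/(q-1)}\,dx\right)^{q-1}=\frac{d}{\alpha+d}\left(\frac{d}{d-\alpha/(q-1)}\right)^{q-1},\] which is independent of \(r\), consistent with \(|x|^{\alpha}\in A_{q}(\mathbb{R}^{d},dx)\) exactly when \(-d<\alpha<d(q-1)\).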
For \(1\leq s,q<\infty\), \(K\geq 1\), and a function \(w\) on \(\mathbb{R}^{d+1}\), we write \([w]_{A_{s,q}}\leq K\) if there exist weights \(w_{1}\) on \(\mathbb{R}^{d}\) and \(w_{2}\) on \(\mathbb{R}\) such that \[w(t,x)=w_{1}(x)w_{2}(t)\] and \[[w_{1}]_{A_{q}},[w_{2}]_{A_{s}}\leq K.\] For \(1<s,q<\infty\), \(-\infty\leq S<T\leq\infty\), and weights \(w\in A_{s,q}\), we define \[\|f\|_{L_{s,q,w}((S,T)\times\Omega)}:=\left(\int_{S}^{T}\left(\int_{\Omega}|f|^ {q}w_{1}\,dx\right)^{s/q}w_{2}\,dt\right)^{1/s}\] and \[L_{s,q,w}((S,T)\times\Omega):=\{f:\|f\|_{L_{s,q,w}((S,T)\times\Omega)}<\infty\}.\] Similarly, for \(1\leq s,q<\infty\), and \(w\in A_{s,q}\), we define weighted parabolic Sobolev spaces \[W^{0,1}_{s,q,w}((S,T)\times\Omega) :=\{u:u,Du\in L_{s,q,w}((S,T)\times\Omega)\},\] \[W^{1,2}_{s,q,w}((S,T)\times\Omega):=\{u:u,Du,D^{2}u,u_{t}\in L_{s,q,w}((S,T)\times\Omega)\}\] with the norm \[\|u\|_{W^{0,1}_{s,q,w}((S,T)\times\Omega)} :=\|u\|_{L_{s,q,w}((S,T)\times\Omega)}+\|Du\|_{L_{s,q,w}((S,T) \times\Omega)},\] \[\|u\|_{W^{1,2}_{s,q,w}((S,T)\times\Omega)} :=\|u_{t}\|_{L_{s,q,w}((S,T)\times\Omega)}+\sum_{k=0}^{2}\|D^{k}u \|_{L_{s,q,w}((S,T)\times\Omega)}.\] When \(s=q\) and \(w=1\), we write \(L_{q}((S,T)\times\Omega)=L_{q,q,w}((S,T)\times\Omega)\) and \(W^{1,2}_{q}((S,T)\times\Omega)=W^{1,2}_{q,q,w}((S,T)\times\Omega)\). For a measurable function \(u\) defined on \((S,T)\times\Omega\), we write \(u\in L_{s,q,\text{loc}}((S,T)\times\Omega)\) if \(u\in L_{s,q}(K)\) for any compact subset \(K\) of \((S,T)\times\Omega\). Similarly, we can define \(W^{0,1}_{s,q,\text{loc}}((S,T)\times\Omega)\) and \(W^{1,2}_{s,q,\text{loc}}((S,T)\times\Omega)\). For equations in divergence form, we introduce two further function spaces \(\mathbb{H}^{-1}_{s,q,w}\) and \(\mathcal{H}^{1}_{s,q,w}\). We say that \(f\in\mathbb{H}^{-1}_{s,q,w}((S,T)\times\Omega)\) if there exist \(g_{0},g=(g_{1},\ldots,g_{d})\in L_{s,q,w}((S,T)\times\Omega)\) such that \[f=g_{0}+D_{i}g_{i}\quad\text{in }(S,T)\times\Omega\] in the sense of distributions and the norm \[\|f\|_{\mathbb{H}^{-1}_{s,q,w}((S,T)\times\Omega)}:=\inf\left\{\sum_{i=0}^{d} \|g_{i}\|_{L_{s,q,w}((S,T)\times\Omega)}:f=g_{0}+D_{i}g_{i}\right\}\] is finite. We define \[\mathcal{H}^{1}_{s,q,w}((S,T)\times\Omega):=\{u:u_{t}\in\mathbb{H}^{-1}_{s,q, w}((S,T)\times\Omega),u\in W^{0,1}_{s,q,w}((S,T)\times\Omega)\}\] with the norm \[\|u\|_{\mathcal{H}^{1}_{s,q,w}((S,T)\times\Omega)}:=\|u_{t}\|_{\mathbb{H}^{-1 }_{s,q,w}((S,T)\times\Omega)}+\|u\|_{W^{0,1}_{s,q,w}((S,T)\times\Omega)}.\] When \(s=q\) and \(w=1\), we write \(\mathcal{H}^{1}_{q}((S,T)\times\Omega)=\mathcal{H}^{1}_{q,q,w}((S,T)\times\Omega)\). Now we define strong solutions of Stokes equations in nondivergence form (1.1). **Definition 2.1**.: Let \(f\in L_{1,\mathrm{loc}}((S,T)\times\Omega)^{d}\) and \(g\in L_{1,\mathrm{loc}}((S,T)\times\Omega)\). A pair \((u,p)\) is said to be a _strong solution_ to (1.1) in \((S,T)\times\Omega\) if \(u\in W^{1,2}_{1,\mathrm{loc}}((S,T)\times\Omega)^{d}\) and \(p\in W^{0,1}_{1,\mathrm{loc}}((S,T)\times\Omega)\) satisfy \[\partial_{t}u-a^{ij}D_{ij}u+\nabla p=f\quad\text{and}\quad\operatorname{div}u =g\quad\text{a.e. in }(S,T)\times\Omega.\] Similarly, we define weak solutions of Stokes equations in divergence form (1.3).
**Definition 2.2**.: Given \(\mathbf{F}\in L_{1,\mathrm{loc}}((S,T)\times\Omega)^{d\times d}\) and \(g\in L_{1,\mathrm{loc}}((S,T)\times\Omega)\), \(u\) is a _weak solution_ to (1.3) in \((S,T)\times\Omega\) if \(u\in W^{0,1}_{1,\mathrm{loc}}((S,T)\times\Omega)^{d}\) satisfies \[\int_{\Omega}u(t,x)\cdot\nabla\varphi(x)\,dx=-\int_{\Omega}g(t,x)\varphi(x)dx\] for a.e. \(t\in(S,T)\), for all \(\varphi\in C^{\infty}_{0}(\Omega)\), and \[-\int_{S}^{T}\int_{\Omega}u\cdot(\partial_{t}\phi)-D_{i}\phi\cdot a^{ij}D_{j}u \,dxdt=-\int_{S}^{T}\int_{\Omega}\mathbf{F}:\nabla\phi\,dxdt\] for all \(\phi\in C^{\infty}_{0}((S,T)\times\Omega)^{d}\) with \(\operatorname{div}\phi(t)=0\) for all \(t\in(S,T)\). To discuss the solvability of the initial value problem for Stokes equations in divergence form and nondivergence form in \((S,T)\times\Omega\), we write \(u\in\mathring{W}^{1,2}_{s,q,w}((S,T)\times\Omega)\) if there exists \(\tilde{u}\in W^{1,2}_{s,q,w}((-\infty,T)\times\Omega)\) such that \(\tilde{u}=u\) in \((S,T)\times\Omega\) and \(\tilde{u}=0\) in \((-\infty,S)\times\Omega\). Similarly, we can define \(\mathring{\mathcal{H}}^{1}_{s,q,w}((S,T)\times\Omega)\). **Definition 2.3**.: Let \(1<s,q<\infty\) and \(w\in A_{s,q}\). 1. Given \(\mathbf{F}\in L_{1,\mathrm{loc}}((S,T)\times\Omega)^{d\times d}\) and \(g\in L_{1,\mathrm{loc}}((S,T)\times\Omega)\), we say that \((u,p)\in\mathcal{H}^{1}_{s,q,w}((S,T)\times\Omega)^{d}\times L_{s,q,w}((S,T) \times\Omega)\) is a _weak solution to (1.3) in \((S,T)\times\Omega\) with \(u(S,\cdot)=0\) on \(\Omega\)_ if \(u\in\mathring{\mathcal{H}}^{1}_{s,q,w}((S,T)\times\Omega)^{d}\) and \((u,p)\) satisfies \[-\int_{S}^{T}\int_{\Omega}u\cdot(\partial_{t}\phi)-D_{i}\phi\cdot a^{ij}D_{j}u +p\operatorname{div}\phi\,dxdt=-\int_{S}^{T}\int_{\Omega}\mathbf{F}:\nabla \phi\,dxdt\] for all \(\phi\in C^{\infty}_{0}([S,T)\times\Omega)^{d}\). 2. Given \(f\in L_{1,\mathrm{loc}}((S,T)\times\Omega)^{d}\) and \(g\in W^{0,1}_{1,\mathrm{loc}}((S,T)\times\Omega)\), we say that \((u,p)\in W^{1,2}_{s,q,w}((S,T)\times\Omega)^{d}\times W^{0,1}_{1,\mathrm{loc} }((S,T)\times\Omega)\) is a _strong solution to (1.1) in \((S,T)\times\Omega\) with \(u(S,\cdot)=0\) on \(\Omega\)_ if \(u\in\mathring{W}^{1,2}_{s,q,w}((S,T)\times\Omega)^{d}\) and \(u\) is a strong solution to (1.1) in \((S,T)\times\Omega\). ### Main results Now we present the main results of this paper. The following is our assumption on the viscosity coefficient of Equations (1.1) and (1.3). **Assumption 2.4** (\(\delta\)).: There exists \(R_{0}\in(0,1/4)\) such that for any \((t_{0},x_{0})\in\mathbb{R}^{d+1}\), there exists \(\overline{a}^{ij}(t)\) satisfying (1.2) and \[\fint_{Q_{r}(t_{0},x_{0})}|a^{ij}(t,x)-\overline{a}^{ij}(t)|\,dxdt\leq\delta\] for any \(0<r<R_{0}\) and for all \(i,j=1,2,\ldots,d\). _Remark_.: 1. The condition is weaker than the usual full VMO condition in both \(t\) and \(x\) since it does not require any regularity condition in \(t\). A typical example is \(a^{ij}(t,x)=b(t)c^{ij}(x)\), where \(b(t)\) and \(c^{ij}(x)\) satisfy \[\nu\leq|b(t)|,|c^{ij}(x)|\leq\nu^{-1},\quad\text{for all }(t,x)\in\mathbb{R}^{d+1}, \quad c^{ij}\in\text{VMO}\quad\text{for all }i,j\] for some \(\nu\in(0,1)\). Here \(c^{ij}\in\text{VMO}\) means \[\lim_{r\to 0+}\fint_{B_{r}(x)}|c^{ij}(y)-(c^{ij})_{B_{r}(x)}|\,dy=0.\] 2.
If \((t_{0},x_{0})\in\mathbb{R}\times\mathbb{R}^{d}_{+}\), then by Assumption 2.4\((\delta)\), there exists \(R_{0}>0\) such that \[\fint_{Q^{+}_{r}(t_{0},x_{0})}|a^{ij}-\overline{a}^{ij}(t)|\,dxdt\leq 2\delta\] and \[\fint_{Q^{+}_{r}(t_{0},x_{0})}|a^{ij}-(a^{ij})_{B^{+}_{r}(x_{0})}|\,dxdt\leq 4\delta\] for \(0<r<R_{0}\). Our first result concerns the solvability of the initial-value problem for Stokes equations in nondivergence form in weighted mixed-norm Sobolev spaces on \(\mathbb{R}^{d}_{T}\). **Theorem 2.5**.: _Let \(1<s,q<\infty\), \(0<T<\infty\), and let \(K_{0}\geq 1\) be a constant, \(w=w_{1}(x)w_{2}(t)\), where_ \[w_{1}\in A_{q}(\mathbb{R}^{d},dx),\quad w_{2}\in A_{s}(\mathbb{R},dt),\quad[w ]_{A_{s,q}}\leq K_{0}.\] _There exists \(0<\delta<1\) depending only on \(d\), \(\nu\), \(s\), \(q\), and \(K_{0}\) such that under Assumption 2.4\((\delta)\), for every \(f\in L_{s,q,w}(\mathbb{R}^{d}_{T})^{d}\) and \(g\in\mathring{\mathcal{H}}^{1}_{s,q,w}(\mathbb{R}^{d}_{T})\) with \(g_{t}=\operatorname{div}G\) for some vector field \(G=(G_{1},\dots,G_{d})\in L_{s,q,w}(\mathbb{R}^{d}_{T})^{d}\) in the sense that_ \[\int_{\mathbb{R}^{d}_{T}}g\varphi_{t}\,dxdt=\int_{\mathbb{R}^{d}_{T}}G\cdot \nabla\varphi\,dxdt \tag{2.1}\] _for any \(\varphi\in C^{\infty}_{0}([0,T)\times\mathbb{R}^{d})\), there exists a unique strong solution \((u,p)\) to (1.1) in \(\mathbb{R}^{d}_{T}\) with \(u(0,\cdot)=0\) on \(\mathbb{R}^{d}\) satisfying_ \[u\in\mathring{W}^{1,2}_{s,q,w}(\mathbb{R}^{d}_{T})^{d},\quad\nabla p\in L_{s, q,w}(\mathbb{R}^{d}_{T})^{d}.\] _Moreover, we have_ \[\|u\|_{W^{1,2}_{s,q,w}(\mathbb{R}^{d}_{T})}+\|\nabla p\|_{L_{s,q,w}(\mathbb{R }^{d}_{T})}\leq N\left(\|f\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}+\|Dg\|_{L_{s,q,w}( \mathbb{R}^{d}_{T})}+\|G\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}\right),\] _where \(N=N(d,s,q,K_{0},\nu,R_{0},T)>0\)._ The second result describes the solvability of the initial-value problem for Stokes equations in divergence form in weighted mixed-norm Sobolev spaces on \(\mathbb{R}^{d}_{T}\).
**Theorem 2.6**.: _Let \(1<s,q<\infty\), \(0<T<\infty\), and let \(K_{0}\geq 1\) be a constant, \(w=w_{1}(x)w_{2}(t)\), where_ \[w_{1}\in A_{q}(\mathbb{R}^{d},dx),\quad w_{2}\in A_{s}(\mathbb{R},dt),\quad[w ]_{A_{s,q}}\leq K_{0}.\] _There exists \(0<\delta<1\) depending only on \(d\), \(\nu\), \(s\), \(q\), and \(K_{0}\) such that under Assumption 2.4\((\delta)\), for every \(\mathbf{F}\in L_{s,q,w}(\mathbb{R}^{d}_{T})^{d\times d}\) and \(g\in L_{s,q,w}(\mathbb{R}^{d}_{T})\) satisfying \(g_{t}=\operatorname{div}\operatorname{div}\mathbf{G}\) for some 2-tensor \(\mathbf{G}\in L_{s,q,w}(\mathbb{R}^{d}_{T})^{d\times d}\) in the sense that_ \[\int_{\mathbb{R}^{d}_{T}}g\varphi_{t}\,dxdt=-\int_{\mathbb{R}^{d}_{T}}\mathbf{ G}:\nabla^{2}\varphi\,dxdt \tag{2.2}\] _for all \(\varphi\in C_{0}^{\infty}([0,T)\times\mathbb{R}^{d})\), there exists a unique weak solution \((u,p)\) to (1.3) in \(\mathbb{R}_{T}^{d}\) with \(u(0,\cdot)=0\) on \(\mathbb{R}^{d}\) satisfying_ \[u\in\mathring{\mathcal{H}}_{s,q,w}^{1}(\mathbb{R}_{T}^{d})^{d},\quad p\in L_{s,q, w}(\mathbb{R}_{T}^{d}).\] _Moreover, we have_ \[\|u\|_{\mathcal{H}_{s,q,w}^{1}(\mathbb{R}_{T}^{d})}+\|p\|_{L_{s,q,w}(\mathbb{R }_{T}^{d})}\leq N\left(\|\mathbf{F}\|_{L_{s,q,w}(\mathbb{R}_{T}^{d})}+\|g\|_{L _{s,q,w}(\mathbb{R}_{T}^{d})}+\|\mathbf{G}\|_{L_{s,q,w}(\mathbb{R}_{T}^{d})} \right),\] _where \(N=N(d,s,q,K_{0},\nu,R_{0},T)>0\)._ As an application of Theorems 2.5 and 2.6, we prove the interior mixed-norm derivative estimates for strong solutions and weak solutions of (1.1) and (1.3), respectively. To state results in a more compact way, we introduce the additional function space \[\tilde{W}_{s,q}^{1,2}(U)=\{u:u,Du,D^{2}u\in L_{s,q}(U),u_{t}\in L_{1}(U)\},\] where \(U\) is an open subset of \(\mathbb{R}^{d+1}\). **Theorem 2.7**.: _Let \(1<q_{0},s,q<\infty\). Then there exists \(\delta=\delta(d,s,q,q_{0},\nu)>0\) such that under Assumption 2.4\((\delta)\), if \((u,p)\in\tilde{W}_{q_{0}}^{1,2}(Q_{1})^{d}\times W_{1}^{0,1}(Q_{1})\) is a strong solution to (1.1) in \(Q_{1}\) for some \(f\in L_{s,q}(Q_{1})^{d}\) and \(g\in W_{s,q}^{0,1}(Q_{1})\), then \(D^{2}u\in L_{s,q}(Q_{1/2})\). Moreover, there exists a constant \(N=N(d,s,q,q_{0},\nu,R_{0})>0\) such that_ \[\|D^{2}u\|_{L_{s,q}(Q_{1/2})}\leq N\left(\|u\|_{L_{s,1}(Q_{1})}+\|f\|_{L_{s,q} (Q_{1})}+\|Dg\|_{L_{s,q}(Q_{1})}\right).\] **Theorem 2.8**.: _Let \(1<q_{0},s,q<\infty\). Then there exists \(\delta=\delta(d,s,q,q_{0},\nu)>0\) such that under Assumption 2.4\((\delta)\), if \(u\in W_{q_{0}}^{0,1}(Q_{1})^{d}\) is a weak solution to (1.3) in \(Q_{1}\) for some \(\mathbf{F}\in L_{s,q}(Q_{1})^{d\times d}\) and \(g\in L_{s,q}(Q_{1})\), then \(Du\in L_{s,q}(Q_{1/2})\). Moreover, there exists a constant \(N=N(d,s,q,q_{0},\nu,R_{0})>0\) such that_ \[\|Du\|_{L_{s,q}(Q_{1/2})}\leq N\left(\|u\|_{L_{s,1}(Q_{1})}+\|\mathbf{F}\|_{L _{s,q}(Q_{1})}+\|g\|_{L_{s,q}(Q_{1})}\right).\] _Remark_.: 1. If \(u\in\tilde{W}_{q_{0}}^{1,2}(Q_{1})^{d}\), then by the parabolic Sobolev embedding theorem, \(u\in L_{s,1}(Q_{1})\). However, if \(u\in W_{q_{0}}^{0,1}(Q_{1})^{d}\), then the norm \(\|u\|_{L_{s,1}(Q_{1})}\) is not always finite. 2. Due to Serrin's counterexample in [61], weak and strong solutions may not possess good regularity in the time variable, i.e., it is not expected that \(u_{t}\in L_{s,q}(Q_{1/2})\) for the case of equations in nondivergence form. Similarly, it is not expected that \(u_{t}\in\mathbb{H}_{s,q}^{-1}(Q_{1/2})\) for the case of equations in divergence form. 3.
When \(a^{ij}\) is merely measurable in \(t\), Theorems 2.7 and 2.8 hold even for very weak solutions \(u\in L_{s,1}(Q_{1})^{d}\) (see Remark 7.7). However, when \(a^{ij}\) depends on \(x\), it is unclear to us whether we could obtain interior mixed-norm derivative estimates for very weak solutions to (1.1) and (1.3) since it is not clear how to define the notion of very weak solutions in this setting. 4. In contrast to Theorems 2.5 and 2.6, we do not need compatibility conditions on \(g\). 5. In fact, from its proof, Theorem 2.7 still holds if we assume that \(\partial_{t}u+\nabla p\in L_{1}(Q_{1})^{d}\) instead of assuming that \(\partial_{t}u,\nabla p\in L_{1}(Q_{1})^{d}\). In this case, due to the lack of regularity in the time variable, it is not always guaranteed that \(u\in L_{s,1}(Q_{1})^{d}\). One can also obtain a boundary version of Theorem 2.7 if we consider the Lions boundary conditions. We assume the following condition on the viscosity coefficients: **Assumption 2.9** (\(\delta\)).: There exists \(R_{0}\in(0,1/4)\) such that for any \((t_{0},x_{0})\in\overline{Q_{2}^{+}}\), there exists \(\hat{a}^{ij}(t)\) satisfying the uniform ellipticity condition (1.2) and \[\fint_{Q_{r}^{+}(t_{0},x_{0})}|a^{ij}(t,x)-\hat{a}^{ij}(t)|\,dxdt\leq\delta, \quad\text{for }i,j=1,\ldots,d\] for all \(0<r<R_{0}\). **Theorem 2.10**.: _Let \(1<s,q,q_{0}<\infty\). Then there exists \(\delta>0\) such that under Assumption 2.9\((\delta)\), if \((u,p)\in\tilde{W}_{q_{0}}^{1,2}(Q_{1}^{+})^{d}\times W_{1}^{0,1}(Q_{1}^{+})\) is a strong solution to (1.1) in \(Q_{1}^{+}\) satisfying the Lions boundary conditions_ \[D_{d}u^{k}=u^{d}=0\quad\text{on }(-1,0]\times B_{1}^{\prime}\times\{0\}, \quad k=1,\ldots,d-1 \tag{2.3}\] _for some \(f\in L_{s,q}(Q_{1}^{+})^{d}\) and \(g\in W_{s,q}^{0,1}(Q_{1}^{+})\), then \(D^{2}u\in L_{s,q}(Q_{1/2}^{+})\). Moreover, there exists a constant \(N=N(d,s,q,q_{0},\nu,R_{0})>0\) such that_ \[\left\|D^{2}u\right\|_{L_{s,q}(Q_{1/2}^{+})}\leq N\left(\left\|u\right\|_{L_{ s,1}(Q_{1}^{+})}+\left\|f\right\|_{L_{s,q}(Q_{1}^{+})}+\left\|Dg\right\|_{L_{s,q}(Q_{1}^{+})}\right).\] _Remark_.: 1. Suppose that \((u,p)\in\tilde{W}_{q_{0}}^{1,2}(Q_{1}^{+})^{d}\times W_{1}^{0,1}(Q_{1}^{+})\) is a strong solution to (1.1) in \(Q_{1}^{+}\) satisfying the Navier boundary conditions: \[D_{d}u^{k}-\alpha u^{k}=u^{d}=0\quad\text{on }(-1,0]\times B_{1}^{\prime}\times\{0\}, \quad k=1,\ldots,d-1\] for some \(\alpha>0\). If in addition \(u,Du,p\in L_{s,q}(Q_{1}^{+})\), then we can apply Theorem 2.10 to \((v,\pi)\) defined by \(v(t,x)=e^{-\alpha x_{d}}u(t,x)\) and \(\pi(t,x)=e^{-\alpha x_{d}}p(t,x)\) to get \(D^{2}u\in L_{s,q}(Q_{1/2}^{+})\) and \[\left\|D^{2}u\right\|_{L_{s,q}(Q_{1/2}^{+})}\leq N\left(\left\|u\right\|_{W_{s,q}^{0,1}(Q_{1}^{+})}+\left\|p \right\|_{L_{s,q}(Q_{1}^{+})}+\left\|Dg\right\|_{L_{s,q}(Q_{1}^{+})}+\left\|f \right\|_{L_{s,q}(Q_{1}^{+})}\right)\] for some constant \(N=N(d,s,q,q_{0},\nu,R_{0},\alpha)>0\). 2. In a recent preprint, Chen-Liang-Tsai [14] obtained gradient estimates for very weak solutions to (1.3) under the Navier boundary conditions when \(a^{ij}=\delta^{ij}\) and \(g=0\). In a subsequent paper, we plan to further investigate gradient estimates for Stokes equations with variable viscosity coefficients under the Navier boundary conditions. ## 3. Preliminaries This section consists of four parts. In Subsection 3.1, we list embedding theorems for the function space \(\mathcal{H}_{s,q}^{1}((0,T)\times\Omega)\), properties of \(A_{p}\)-weights, and the Poincaré inequality on weighted spaces.
In Subsection 3.2, we introduce the Hardy-Littlewood maximal operator and the Fefferman-Stein sharp maximal operator that will be used in this paper. Next, in Subsection 3.3, we state the solvability of the divergence equation in weighted Sobolev spaces. Finally, we state estimates of potentials on weighted spaces and list weighted solvability results for parabolic equations with simple coefficients in Subsection 3.4. These results will be used to construct a solution from the vorticity in Sections 4, 5, and 6. ### Function spaces with and without weights In this subsection, we summarize several properties of function spaces with and without weights. The following embedding result is a special case of Kim-Ryu-Woo [45, Theorem 5.2]. **Lemma 3.1**.: _Let \(0<T<\infty\) and let \(\Omega\) be a smooth bounded domain in \(\mathbb{R}^{d}\), \(d\geq 2\). Suppose that \(1<s_{0},q_{0},s,q<\infty\) satisfy \(s\leq s_{0}\leq\infty\), \(q\leq q_{0}\leq\infty\) and either_ * \(s_{0}=s\) _and_ \(d/q\leq 1+d/q_{0}\)_,_ \(q\neq d\) _or_ \(q_{0}\neq\infty\)_; or_ * \(s_{0}>s\) _and_ \(d/q+2/s\leq 1+d/q_{0}+2/s_{0}\)_._ _Then there exists a constant \(N=N(d,s,q,T,\operatorname{diam}\Omega)>0\) such that_ \[\|u\|_{L_{s_{0},q_{0}}((0,T)\times\Omega)}\leq N\left(\|u\|_{L_{s,q}((0,T) \times\Omega)}+\|G\|_{L_{s,q}((0,T)\times\Omega)}\right)\] _for all \(u\in W^{0,1}_{s,q}((0,T)\times\Omega)\) satisfying \(u_{t}=\operatorname{div}G\) for some \(G\in L_{s,q}((0,T)\times\Omega)^{d}\)._ Next we summarize some properties of \(A_{p}\) weights and results on weighted Sobolev spaces, see e.g. Farwig-Sohr [29, Lemmas 2.2 and 2.3] and Grafakos [39, Chapter 7]. **Proposition 3.2**.: _Let \(1<p<\infty\) and \(w\in A_{p}(\mathbb{R}^{d},dx)\)._ * \(w^{-1/(p-1)}\in A_{p^{\prime}}\) _and_ \([w^{-1/(p-1)}]_{A_{p^{\prime}}}=[w]^{1/(p-1)}_{A_{p}}\)_;_ * _If_ \(1<p<q<\infty\)_, then_ \(w\in A_{q}\) _and_ \([w]_{A_{q}}\leq[w]_{A_{p}}\)_;_ * _There exists_ \(1<q=q(d,p,[w]_{A_{p}})<p\) _such that_ \(w\in A_{q}\)_;_ * _The functions defined by_ \[|x|^{\alpha}\quad\text{and}\quad(1+|x|)^{\alpha}\] _are_ \(A_{p}\)_-weights for all_ \(-d<\alpha<d(p-1)\)_;_ * _There exist_ \(\delta\in(0,1)\) _and_ \(N>0\) _depending only on_ \(d\)_,_ \(p\)_, and_ \([w]_{A_{p}}\) _such that_ \[\frac{w(S)}{w(B)}\leq N\left(\frac{|S|}{|B|}\right)^{\delta}\] _for any ball_ \(B\) _in_ \(\mathbb{R}^{d}\) _and any measurable subset_ \(S\) _of_ \(B\)_;_ * \(w(B_{R})\to\infty\) _as_ \(R\to\infty\)_._ Proof.: (i) This follows directly from the definition. (ii) This follows directly from the definition and Hölder's inequality. (iii) See e.g. [39, Theorem 7.2.2]. (iv) See Farwig-Sohr [29, Lemmas 2.2 and 2.3]. (v) See e.g. [39, Proposition 7.2.8]. (vi) For \(R>1\), choose \(S=B_{1}\) and \(B=B_{R}\) in (v). Then \[\frac{w(B_{1})}{w(B_{R})}\leq N(d,p,[w]_{A_{p}})\left(\frac{|B_{1}|}{|B_{R}|} \right)^{\delta}.\] Since \(|B_{R}|\to\infty\) as \(R\to\infty\), it follows that \(w(B_{R})\to\infty\) as \(R\to\infty\). This completes the proof of Proposition 3.2. The following weighted Poincaré inequality was first proved by Fabes-Kenig-Serapioni [28] and later simplified by Chiarenza-Frasca [15]. **Lemma 3.3**.: _Let \(1<p<\infty\) and \(w\in A_{p}\), \(1\leq k\leq d/(d-1)+\delta\) for some \(\delta>0\).
Then there exists a constant \(N=N(d,p,[w]_{A_{p}})>0\) such that_ \[\left(\frac{1}{w(B_{R})}\int_{B_{R}}|u|^{kp}w\,dx\right)^{1/(kp)}\leq NR\left( \frac{1}{w(B_{R})}\int_{B_{R}}|\nabla u|^{p}w\,dx\right)^{1/p}\] _for all \(u\in C_{0}^{\infty}(B_{R})\) and_ \[\left(\frac{1}{w(B_{R})}\int_{B_{R}}|u-(u)_{B_{R},w}|^{kp}w\,dx\right)^{1/(kp) }\leq NR\left(\frac{1}{w(B_{R})}\int_{B_{R}}|\nabla u|^{p}w\,dx\right)^{1/p}\] _for all \(u\in C^{\infty}(\overline{B_{R}})\), where \((u)_{B_{R},w}=w(B_{R})^{-1}\int_{B_{R}}uw\,dx\)._ ### Hardy-Littlewood maximal function and Fefferman-Stein theorem on weighted spaces For \(T\in(-\infty,\infty]\) and a locally integrable function \(f:\mathbb{R}_{T}^{d}\to\mathbb{R}\), we define its _Hardy-Littlewood maximal function_ by \[M_{T}f(t,x):=\sup_{Q_{r}(s,y)\ni(t,x)}\fint_{Q_{r}(s,y)}|f(r,z)|\,drdz,\quad(t,x)\in\overline{\mathbb{R}_{T}^{d}}.\] If \(T=\infty\), we write \(M_{T}f:=Mf\). Muckenhoupt [55] first proved the boundedness of the Hardy-Littlewood maximal operator on weighted spaces \(L_{q,w}(\mathbb{R}^{d})\), \(1<q<\infty\), and \(w\in A_{q}(\mathbb{R}^{d},dx)\). By applying a version of the Rubio de Francia extrapolation theorem (see e.g. [20, Theorem 2.5]), we can also prove the mixed-norm version of the theorem of Muckenhoupt. **Lemma 3.4**.: _Let \(T\in(-\infty,\infty]\), \(1<s,q<\infty\), \(K_{0}\geq 1\), \(w(t,x)=w_{1}(x)w_{2}(t)\), \(w_{1}\in A_{q}(\mathbb{R}^{d},dx)\), \(w_{2}\in A_{s}(\mathbb{R},dt)\) with \([w]_{A_{s,q}}\leq K_{0}\). Then there exists a constant \(N=N(d,s,q,K_{0})>0\) such that_ \[\|M_{T}f\|_{L_{s,q,w}(\mathbb{R}_{T}^{d})}\leq N\|f\|_{L_{s,q,w}(\mathbb{R}_{ T}^{d})}\] _for all \(f\in L_{s,q,w}(\mathbb{R}_{T}^{d})\)._ To introduce another type of maximal operator that we need, let \[\mathbb{C}_{n}:=\{Q_{\bar{i}}^{n}=Q_{(i_{0},i_{1},\ldots,i_{d})}^{n}:\bar{i}= (i_{0},i_{1},\ldots,i_{d})\in\mathbb{Z}^{d+1}\},\] where \(n\in\mathbb{Z}\) and \[Q_{\bar{i}}^{n}=\left[\frac{i_{0}}{2^{2n}},\frac{i_{0}+1}{2^{2n}}\right)\times \left[\frac{i_{1}}{2^{n}},\frac{i_{1}+1}{2^{n}}\right)\times\cdots\times\left[ \frac{i_{d}}{2^{n}},\frac{i_{d}+1}{2^{n}}\right).\] Then the collection \(\mathbb{C}_{n}\) is a filtration of partitions (see e.g. [48, Chapter 3] or [20, Theorem 2.1]). Define the _dyadic sharp function_ of \(g\) by \[g_{\mathrm{dy}}^{\#}(t,x):=\sup_{n<\infty}\fint_{Q_{\bar{i}}^{n}\ni(t,x)}|g(s, y)-g_{|_{n}}(t,x)|\,dyds,\] where \[g_{|_{n}}(t,x)=\fint_{Q_{\bar{i}}^{n}}g(s,y)\,dyds,\quad(t,x)\in Q_{\bar{i}}^{ n}.\] The following version of the Fefferman-Stein theorem was proved in Dong-Kim [20, Corollary 2.7]. **Lemma 3.5**.: _Let \(T\in(-\infty,\infty]\), \(p,q,\tilde{p},\tilde{q}\in(1,\infty)\), \(K_{0}\geq 1\), \(w_{1}\in A_{\tilde{p}}(\mathbb{R}^{d},dx)\), \(w_{2}\in A_{\tilde{q}}(\mathbb{R},dt)\), \(w=w(t,x)=w_{1}(x)w_{2}(t)\), \([w]_{A_{\tilde{p},\tilde{q}}}\leq K_{0}\). Then there exists a constant \(N=N(K,p,q,\tilde{p},\tilde{q},K_{0})>0\) such that_ \[\|f\|_{L_{p,q,w}(\mathbb{R}^{d}_{T})}\leq N\|f^{\#}_{\mathrm{dy}}\|_{L_{p,q,w} (\mathbb{R}^{d}_{T})}\] _for all \(f\in L_{p,q,w}(\mathbb{R}^{d}_{T})\)._ ### The equation \(\mathrm{div}\,u=g\) Let \(\Omega\) be a bounded Lipschitz domain in \(\mathbb{R}^{d}\), \(d\geq 2\). We consider the following Dirichlet problem for the divergence equation: \[\left\{\begin{aligned} \mathrm{div}\,u&=g&& \text{in }\Omega,\\ u&=0&&\text{on }\partial\Omega.\end{aligned}\right.
\tag{3.1}\] The \(W^{1}_{q}\)-solvability of the problem (3.1) is a classical result due to Bogovskii [6], who introduced an integral representation of solutions to the problem (3.1) on a star-shaped domain (see Galdi [32]). This result was extended by Huber [41] to weighted Sobolev spaces as below. **Theorem 3.6**.: _Let \(1<q<\infty\), \(K_{0}\geq 1\), and \(w\in A_{q}\) satisfying \([w]_{A_{q}}\leq K_{0}\). Then there exists a bounded linear operator_ \[\mathcal{B}:L_{q,w,\#}(\Omega)\to W^{1}_{q,w,0}(\Omega)^{d}\] _such that \(\mathrm{div}(\mathcal{B}f)=f\) for all \(f\in L_{q,w,\#}(\Omega)\), where \(L_{q,w,\#}(\Omega)\) is the collection of all \(f\in L_{q,w}(\Omega)\) with \(\int_{\Omega}f\,dx=0\). Moreover, we have \(\mathcal{B}f\in C^{\infty}_{0}(\Omega)^{d}\) if \(f\in C^{\infty}_{0}(\Omega)\) with \(\int_{\Omega}f\,dx=0\) and_ \[\|\mathcal{B}f\|_{W^{1}_{q,w}(\Omega)}\leq N\|f\|_{L_{q,w}(\Omega)}\] _for all \(f\in L_{q,w,\#}(\Omega)\), where the constant \(N\) depends only on \(d\), \(q\), \(K_{0}\), and \(\Omega\)._ _Remark_.: The operator \(\mathcal{B}\) is the same Bogovskii operator introduced in [6]. If \(\Omega\) is bounded and star-shaped with respect to an open ball \(B_{R}\) with \(\overline{B_{R}}\subset\Omega\), then there exists a constant \(N=N(d,q,K_{0},\mathrm{diam}\,\Omega/R)>0\) such that \[\|D(\mathcal{B}g)\|_{L_{q,w}(\Omega)}\leq N\|g\|_{L_{q,w}(\Omega)}\] for all \(g\in L_{q,w,\#}(\Omega)\). ### Potential estimates and solvability of parabolic equations on weighted spaces In this subsection, we give some potential estimates on weighted \(L_{q}\)-spaces and state the solvability of elliptic and parabolic equations in weighted Sobolev spaces. We also state weighted a priori \(L_{q}\)-estimates for Poisson equations that will be used in this paper. Let \(\Phi\) be the fundamental solution of the Laplacian defined by \[\Phi(x)=\begin{cases}\frac{1}{d(2-d)\omega_{d}}\frac{1}{|x|^{d-2}}&\text{if }d \geq 3,\\ \frac{1}{2\pi}\ln|x|&\text{if }d=2,\end{cases}\] where \(\omega_{d}\) is the volume of the unit ball in \(\mathbb{R}^{d}\). The following lemma will be used to prove the existence of weak and strong solutions to Stokes equations with simple coefficients. The proof is almost identical to that of Lemma 4.1 in [24]. The key difference is to apply the weighted \(L_{q}\)-boundedness of singular integral operators (see e.g. [64, §4.2, Chapter V]) instead of the unweighted version when we prove (3.2) and (3.3). We omit its proof. **Lemma 3.7**.: _Let \(1<q<\infty\), \(1<q_{0}<d\), and \(K_{0}\geq 1\). Fix \(w\in A_{q}(\mathbb{R}^{d},dx)\) with \([w]_{A_{q}}\leq K_{0}\). For each \(h\in L_{q_{0}}(\mathbb{R}^{d})\cap L_{q,w}(\mathbb{R}^{d})\), define_ \[v_{k}(x)=\int_{\mathbb{R}^{d}}D_{k}\Phi(x-y)h(y)\,dy\quad\text{in }\mathbb{R}^{d}, \quad 1\leq k\leq d.\] * \(v_{k}\in L_{q_{0}^{*}}(\mathbb{R}^{d})\) _and_ \(Dv_{k}\in L_{q,w}(\mathbb{R}^{d})\) _with the estimate_ \[\|v_{k}\|_{L_{q_{0}^{*}}(\mathbb{R}^{d})}\leq N_{1}(d,q_{0})\|h\|_{L_{q_{0}}( \mathbb{R}^{d})},\] \[\|Dv_{k}\|_{L_{q,w}(\mathbb{R}^{d})}\leq N_{2}(d,q,K_{0})\|h\|_{L_{q,w}( \mathbb{R}^{d})},\] (3.2) _where_ \(q_{0}^{*}=dq_{0}/(d-q_{0})\)_.
We also have_ \[\sum_{k=1}^{d}D_{k}v_{k}=h\quad\text{in }\mathbb{R}^{d}.\] * _If_ \(Dh\in L_{q,w}(\mathbb{R}^{d})\) _in addition, then_ \(D^{2}v_{k}\in L_{q,w}(\mathbb{R}^{d})\) _with_ \[\Delta v_{k}=D_{k}h\quad\text{in }\mathbb{R}^{d}\] _and_ \[\|D^{2}v_{k}\|_{L_{q,w}(\mathbb{R}^{d})}\leq N(d,q,K_{0})\|D_{k}h\|_{L_{q,w}( \mathbb{R}^{d})}\] (3.3) _holds._ * _If_ \(Dh\in L_{q,w}(\mathbb{R}^{d})\cap L_{q_{0}}(\mathbb{R}^{d})\) _in addition, then_ \[Dv_{k}(x)=\int_{\mathbb{R}^{d}}D_{k}\Phi(x-y)Dh(y)\,dy\quad\text{in }\mathbb{R}^{d}.\] We will use the following weighted \(L_{s,q}\)-results that can be found in [20, Theorem 5.2]. **Theorem 3.8**.: _Let \(0<T<\infty\), \(K_{0}\geq 1\), \(1<q,s<\infty\), \(w=w_{1}(x)w_{2}(t)\), where_ \[w_{1}\in A_{q}(\mathbb{R}^{d},dx),\quad w_{2}\in A_{s}(\mathbb{R},dt),\quad[ w]_{A_{s,q}}\leq K_{0}.\] * _For every_ \(f\in L_{s,q,w}(\mathbb{R}^{d}_{T})\)_, there exists a unique_ \(u\in\mathring{W}^{1,2}_{s,q,w}(\mathbb{R}^{d}_{T})\) _satisfying_ \[\partial_{t}u-a^{ij}(t)D_{ij}u=f\quad\text{in }\mathbb{R}^{d}_{T}.\] _Moreover, we have_ \[\|D^{2}u\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}\leq N_{1}\|f\|_{L_{s,q,w}(\mathbb{ R}^{d}_{T})}\] _and_ \[\|u\|_{W^{1,2}_{s,q,w}(\mathbb{R}^{d}_{T})}\leq N_{2}\|f\|_{L_{s,q,w}(\mathbb{ R}^{d}_{T})}\] _for some constants_ \(N_{1}=N_{1}(d,s,q,K_{0},\nu)>0\) _and_ \(N_{2}=N_{2}(d,s,q,K_{0},\nu,T)>0\)_._ * _For every_ \(F=(F^{1},\ldots,F^{d})\in L_{s,q,w}(\mathbb{R}^{d}_{T})^{d}\)_, there exists a unique_ \(u\in\mathring{\mathcal{H}}^{1}_{s,q,w}(\mathbb{R}^{d}_{T})\) _satisfying_ \[\partial_{t}u-D_{i}(a^{ij}(t)D_{j}u)=\operatorname{div}F\quad\text{in }\mathbb{R}^{d}_{T},\] _i.e.,_ \(u\in\mathring{\mathcal{H}}^{1}_{s,q,w}(\mathbb{R}^{d}_{T})\) _satisfies_ \[-\int_{0}^{T}\int_{\mathbb{R}^{d}}u\partial_{t}\phi-a^{ij}(t)D_{j}uD_{i}\phi \,dxdt=-\int_{0}^{T}\int_{\mathbb{R}^{d}}F\cdot\nabla\phi\,dxdt\] _for all_ \(\phi\in C^{\infty}_{0}([0,T)\times\mathbb{R}^{d})\)_. Moreover, we have_ \[\|Du\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}\leq N_{1}\|F\|_{L_{s,q,w}(\mathbb{R}^{d }_{T})}\] _and_ \[\|u\|_{\mathcal{H}^{1}_{s,q,w}(\mathbb{R}^{d}_{T})}\leq N_{2}\|F\|_{L_{s,q,w}( \mathbb{R}^{d}_{T})}\] _for some constants_ \(N_{1}=N_{1}(d,s,q,K_{0},\nu)>0\) _and_ \(N_{2}=N_{2}(d,s,q,K_{0},\nu,T)>0\)_._ We will also use the following regularity results which can be easily proved by using Theorem 3.8. **Corollary 3.9**.: _Let \(K_{0}\geq 1\), \(1<q<\infty\), and \(w\in A_{q}(\mathbb{R}^{d},dx)\) satisfying \([w]_{A_{q}}\leq K_{0}\)._ * _If_ \(u\in W^{2}_{q,w}(\mathbb{R}^{d})\) _satisfies_ \[-\Delta u=f\quad\text{in $\mathbb{R}^{d}$}\] _for some_ \(f\in L_{q,w}(\mathbb{R}^{d})\)_, then there exists a constant_ \(N=N(d,q,K_{0})>0\) _such that_ \[\|D^{2}u\|_{L_{q,w}(\mathbb{R}^{d})}\leq N\|f\|_{L_{q,w}(\mathbb{R}^{d})}.\] * _If_ \(u\in W^{1}_{q,w}(\mathbb{R}^{d})\) _satisfies_ \[-\Delta u=\operatorname{div}F\quad\text{in $\mathbb{R}^{d}$}\] _for some_ \(F\in L_{q,w}(\mathbb{R}^{d})^{d}\)_, then there exists a constant_ \(N=N(d,q,K_{0})>0\) _such that_ \[\|Du\|_{L_{q,w}(\mathbb{R}^{d})}\leq N\|F\|_{L_{q,w}(\mathbb{R}^{d})}.\] ## 4. Stokes equations with simple coefficients In this section, we consider the Cauchy problem for Stokes equations with simple coefficients, that is, for \(0<T<\infty\), \[\left\{\begin{aligned} \partial_{t}u-a^{ij}(t)D_{ij}u+\nabla p& =f\quad\text{in $(0,T)\times\mathbb{R}^{d}$},\\ \operatorname{div}u&=g\quad\text{in $(0,T)\times \mathbb{R}^{d}$},\\ u&=0\quad\text{on $\{t=0\}\times\mathbb{R}^{d}$}, \end{aligned}\right. 
\tag{4.1}\] where the viscosity coefficients \(a^{ij}\) are merely measurable in \(t\) and satisfy the uniform ellipticity condition (1.2). We also consider the Cauchy problem for Stokes equations in divergence form: \[\left\{\begin{aligned} \partial_{t}u-D_{i}(a^{ij}(t)D_{j}u)+\nabla p&=\operatorname{div}\mathbf{F}\quad\text{in $(0,T)\times\mathbb{R}^{d}$},\\ \operatorname{div}u&=g\quad\quad\quad\text{in $(0,T)\times\mathbb{R}^{d}$},\\ u&=0\quad\quad\quad\text{on $\{t=0\}\times\mathbb{R}^{d}$}.\end{aligned}\right. \tag{4.2}\] We first state the \(W^{1,2}_{s,q,w}\)-solvability for the problem (4.1) in \(\mathbb{R}^{d}_{T}\). The proof is almost identical to that of Theorem 1.4 in [24]. We will explain the main difference in Appendix A for the sake of completeness. **Theorem 4.1**.: _Let \(1<s,q<\infty\), \(0<T<\infty\), and let \(K_{0}\geq 1\) be a constant, \(w=w_{1}(x)w_{2}(t)\), where_ \[w_{1}\in A_{q}(\mathbb{R}^{d},dx),\quad w_{2}\in A_{s}(\mathbb{R},dt),\quad[w]_{A_{s,q}}\leq K_{0}.\] _Then for every \(f\in L_{s,q,w}(\mathbb{R}^{d}_{T})^{d}\) and \(g\in\mathring{\mathcal{H}}^{1}_{s,q,w}(\mathbb{R}^{d}_{T})\) such that \(g_{t}=\operatorname{div}G\) for some vector field \(G=(G^{1},\ldots,G^{d})\in L_{s,q,w}(\mathbb{R}^{d}_{T})^{d}\) in the sense that_ \[\int_{\mathbb{R}^{d}_{T}}g\varphi_{t}\,dxdt=\int_{\mathbb{R}^{d}_{T}}G\cdot\nabla\varphi\,dxdt\] _for all \(\varphi\in C^{\infty}_{0}([0,T)\times\mathbb{R}^{d})\), there exists a unique strong solution \((u,p)\) to (4.1) satisfying_ \[u\in\mathring{W}^{1,2}_{s,q,w}(\mathbb{R}^{d}_{T})^{d},\quad\nabla p\in L_{s,q,w}(\mathbb{R}^{d}_{T})^{d},\quad(p(t,\cdot))_{B_{1}}=0\text{ for all }t\in(0,T).\] _Moreover, we have_ \[\|D^{2}u\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}\leq N_{1}(\|f\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}+\|Dg\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}), \tag{4.3}\] \[\|\nabla p\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}\leq N_{1}(\|f\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}+\|G\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}+\|Dg\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}),\] \[\|u_{t}\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}\leq N_{1}(\|f\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}+\|G\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}),\] _and_ \[\|u\|_{W^{1,2}_{s,q,w}(\mathbb{R}^{d}_{T})}+\|\nabla p\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})} \tag{4.4}\] \[\leq N_{2}(\|f\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}+\|Dg\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}+\|G\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}),\] _where \(N_{1}=N_{1}(d,s,q,K_{0},\nu)>0\) and \(N_{2}=N_{2}(d,s,q,K_{0},\nu,T)>0\)._ By using Theorem 4.1, we also have the existence and uniqueness of weak solutions to (4.2) in \(\mathbb{R}^{d}_{T}\) as follows, which can be deduced from Theorem 4.1 and a duality argument based on Theorem 3.8. We give its proof in Appendix B for the sake of completeness.
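Regarding the compatibility hypothesis \(g_{t}=\operatorname{div}G\) in Theorem 4.1, a simple sufficient condition may help orient the reader (a sketch, not needed in the sequel): if \(g=\operatorname{div}\Phi\) for some smooth vector field \(\Phi\), compactly supported in \(x\), with \(\Phi(0,\cdot)=0\) and \(\partial_{t}\Phi\in L_{s,q,w}(\mathbb{R}^{d}_{T})^{d}\), then one may take \(G=\partial_{t}\Phi\). Indeed, for \(\varphi\in C^{\infty}_{0}([0,T)\times\mathbb{R}^{d})\), integrating by parts first in \(x\) and then in \(t\) gives \[\int_{\mathbb{R}^{d}_{T}}g\varphi_{t}\,dxdt=-\int_{\mathbb{R}^{d}_{T}}\Phi\cdot\nabla\varphi_{t}\,dxdt=\int_{\mathbb{R}^{d}_{T}}\partial_{t}\Phi\cdot\nabla\varphi\,dxdt,\] where the boundary terms vanish because \(\varphi\) vanishes near \(t=T\) and \(\Phi(0,\cdot)=0\).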
**Theorem 4.2**.: _Let \(1<s,q<\infty\), \(0<T<\infty\), and let \(K_{0}\geq 1\) be constant, \(w=w_{1}(x)w_{2}(t)\), where_ \[w_{1}\in A_{q}(\mathbb{R}^{d},dx),\quad w_{2}\in A_{s}(\mathbb{R},dt),\quad[w ]_{A_{s,q}}\leq K_{0}.\] _For every \(\mathbf{F}\in L_{s,q,w}(\mathbb{R}^{d}_{T})^{d\times d}\) and \(g\in L_{s,q,w}(\mathbb{R}^{d}_{T})\) satisfying \(g_{t}=\operatorname{div}\operatorname{div}\mathbf{G}\) for some matrix field \(\mathbf{G}\in L_{s,q,w}(\mathbb{R}^{d}_{T})^{d\times d}\) in the sense that_ \[\int_{\mathbb{R}^{d}_{T}}g\varphi_{t}\,dxdt=-\int_{\mathbb{R}^{d}_{T}}\mathbf{ G}:\nabla^{2}\varphi\,dxdt \tag{4.5}\] _for all \(\varphi\in C^{\infty}_{0}([0,T)\times\mathbb{R}^{d})\), there exists a unique weak solution \((u,p)\) of (4.2) satisfying_ \[u\in\mathring{\mathcal{H}}^{1}_{s,q,w}(\mathbb{R}^{d}_{T})^{d},\quad p\in L_{s,q,w}(\mathbb{R}^{d}_{T}).\] _Moreover, we have_ \[\|Du\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})} \leq N_{1}\left(\|g\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}+\|\mathbf{ F}\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}\right),\] \[\|p\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})} \leq N_{1}\left(\|g\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}+\|\mathbf{ F}\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}+\|\mathbf{G}\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})} \right),\] _and_ \[\|u\|_{\mathcal{H}^{1}_{s,q,w}(\mathbb{R}^{d}_{T})}+\|p\|_{L_{s,q,w}(\mathbb{R} ^{d}_{T})}\leq N_{2}\left(\|\mathbf{F}\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}+\|g\|_ {L_{s,q,w}(\mathbb{R}^{d}_{T})}+\|\mathbf{G}\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})} \right),\] _where \(N_{1}=N_{1}(d,s,q,K_{0},\nu)>0\) and \(N_{2}=N_{2}(d,s,q,K_{0},\nu,T)>0\)._ Recall that for \(0<\alpha\leq 1\) and each parabolic cylinder \(Q\) in \(\mathbb{R}^{d+1}\), we write \[[u]_{C^{\alpha/2,\alpha}(Q)}:=\sup_{(t,x),(s,y)\in Q,(t,x)\neq(s,y)}\frac{|u(t,x)- u(s,y)|}{|t-s|^{\alpha/2}+|x-y|^{\alpha}}\] and we define \[\|u\|_{C^{\alpha/2,\alpha}(Q)}:=\|u\|_{L_{\infty}(Q)}+[u]_{C^{\alpha/2,\alpha} (Q)}.\] We have the following local Holder estimates for the vorticity of a solution to nonstationary Stokes equations. **Lemma 4.3**.: _Let \(1<q_{0}<\infty\) and let \(u\in W^{0,1}_{q_{0}}(Q_{1})^{d}\) be a weak solution of_ \[\partial_{t}u-D_{i}(a^{ij}(t)D_{j}u)+\nabla p=0,\quad\operatorname{div}u=0 \tag{4.6}\] _in \(Q_{1}\). There exists a constant \(N=N(d,\nu,q_{0})>0\) such that_ \[\|\omega\|_{C^{1/2,1}(Q_{1/2})}\leq N\|\omega\|_{L_{q_{0}}(Q_{1})},\] _where \(\omega=\nabla\times u\)._ Proof.: By taking mollification in \(x\), we may assume that \(u\) is smooth in \(x\). For \(\psi\in C^{\infty}_{0}(Q_{1})\) and \(k,l=1,\ldots,d\), define \(\phi=(D_{l}\psi)e_{k}-(D_{k}\psi)e_{l}\), where \(\{e_{k}\}\) is the standard basis for \(\mathbb{R}^{d}\). Then it is easy to see that \(\operatorname{div}\phi(t,\cdot)=0\) in \(B_{1}\) for \(t\in(-1,0)\). If we use \(\phi\) as a test function in the definition of the weak solution, then one can show that \(\omega^{kl}:=D_{k}u^{l}-D_{l}u^{k}\) is a very weak solution to the vorticity equation \[\partial_{t}\omega^{kl}-D_{i}(a^{ij}(t)D_{j}\omega^{kl})=0\quad\text{in }Q_{1}.\] Then the desired result follows from interior estimates for parabolic equations with coefficients measurable in \(t\), Sobolev embedding theorem, and a standard iteration argument. We omit the details (see e.g. [48, Chapter 2] and [19]). Since \(a^{ij}\) depends only on \(t\), by using Lemma 4.3 and a standard scaling argument, we have the following mean oscillation estimate for vorticity and its gradient. See e.g. [19, Lemma 4] for the proof. 
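To indicate how the scaling enters (a sketch for the first assertion of the lemma below): writing \(\rho=\kappa r\) and \(\omega_{\rho}(t,x):=\omega(\rho^{2}t,\rho x)\), the function \(\omega_{\rho}\) solves an equation of the same type in \(Q_{1}\), so Lemma 4.3 together with the scaling of the seminorm gives \[[\omega]_{C^{1/2,1}(Q_{\rho/2}(X_{0}))}\leq N\rho^{-1}\left(|\omega|^{q_{0}}\right)_{Q_{\rho}(X_{0})}^{1/q_{0}}.\] Since \(\kappa\geq 8\) implies \(Q_{r}(X_{0})\subset Q_{\rho/2}(X_{0})\), the mean oscillation of \(\omega\) over \(Q_{r}(X_{0})\) is bounded by \(Nr[\omega]_{C^{1/2,1}(Q_{\rho/2}(X_{0}))}\), which produces the factor \(\kappa^{-1}\).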
**Lemma 4.4**.: _Let \(\kappa\geq 8\), \(0<r<\infty\), and \(1<q_{0}<\infty\)._ * _If_ \((u,p)\) _is a weak solution of (_4.6_) in_ \(Q_{\kappa r}(X_{0})\) _and_ \(\omega=\nabla\times u\)_, then there exists a constant_ \(N=N(d,q_{0},\nu)>0\) _such that_ \[(|\omega-(\omega)_{Q_{r}(X_{0})}|)_{Q_{r}(X_{0})}\leq N\kappa^{-1}(|\omega|^{q_{0}})_{Q_{\kappa r}(X_{0})}^{1/q_{0}}.\] * _If_ \((u,p)\) _is a strong solution of (_4.6_) in_ \(Q_{\kappa r}(X_{0})\) _and_ \(\omega=\nabla\times u\)_, then there exists a constant_ \(N=N(d,q_{0},\nu)>0\) _such that_ \[(|D\omega-(D\omega)_{Q_{r}(X_{0})}|)_{Q_{r}(X_{0})}\leq N\kappa^{-1}(|D\omega|^{q_{0}})_{Q_{\kappa r}(X_{0})}^{1/q_{0}}.\] ## 5. Stokes equations in nondivergence form This section is devoted to proving Theorem 2.5. We first obtain a mean oscillation estimate for the gradient of vorticity \(\omega=\nabla\times u\) of a strong solution \(u\) to (1.1). **Lemma 5.1**.: _Let \(\kappa\geq 8\), \(\delta\in(0,1)\), \(q,\mu,\mu^{\prime}\in(1,\infty)\), \(1/\mu+1/\mu^{\prime}=1\), and \(a^{ij}\) satisfy Assumption 2.4 \((\delta)\). Then for any \(0<r\leq R_{0}/\kappa\), \((t_{0},x_{0})\in\mathbb{R}^{d+1}\), and \((u,p)\in W^{1,2}_{q\mu,\operatorname{loc}}(\mathbb{R}^{d+1})^{d}\times W^{0,1}_{1,\operatorname{loc}}(\mathbb{R}^{d+1})\) satisfying_ \[\partial_{t}u-a^{ij}(t,x)D_{ij}u+\nabla p=f,\quad\operatorname{div}u=g\quad\text{in }Q_{\kappa r}(t_{0},x_{0}), \tag{5.1}\] _where \(f\in L_{q,\mathrm{loc}}(\mathbb{R}^{d+1})^{d}\) and \(g\in W^{0,1}_{q,\mathrm{loc}}(\mathbb{R}^{d+1})\), we have_ \[\begin{split}&\left(|D\omega-(D\omega)_{Q_{r}(t_{0},x_{0})}|\right)_{Q_{r}(t_{0},x_{0})}\\ &\leq N\kappa^{-1}\left[(|D^{2}u|^{q})^{1/q}_{Q_{\kappa r}(t_{0},x_{0})}+(|f|^{q})^{1/q}_{Q_{\kappa r}(t_{0},x_{0})}+(|Dg|^{q})^{1/q}_{Q_{\kappa r}(t_{0},x_{0})}\right]\\ &\quad+N\kappa^{(d+2)/q}\left[(|f|^{q})^{1/q}_{Q_{\kappa r}(t_{0},x_{0})}+\delta^{1/(q\mu^{\prime})}(|D^{2}u|^{q\mu})^{1/(q\mu)}_{Q_{\kappa r}(t_{0},x_{0})}+(|Dg|^{q})^{1/q}_{Q_{\kappa r}(t_{0},x_{0})}\right]\end{split}\] _for some constant \(N=N(d,q,\nu)>0\)._ Proof.: For an integrable function \(h\) defined on \(Q_{r}\), define \[h^{(\varepsilon)}(t,x)=\int_{Q_{\varepsilon}}h(t+s,x+y)\eta_{\varepsilon}(s,y)\,dyds,\quad(t,x)\in(-r^{2}+\varepsilon^{2},0)\times B_{r-\varepsilon}, \tag{5.2}\] where \(\eta\in C^{\infty}_{0}(Q_{1})\), \(\eta_{\varepsilon}(t,x)=\varepsilon^{-d-2}\eta(t/\varepsilon^{2},x/\varepsilon)\), and \(\int_{Q_{1}}\eta\,dxdt=1\). By mollifying equation (5.1), we get \[\partial_{t}u^{(\varepsilon)}-a^{ij}D_{ij}u^{(\varepsilon)}+\nabla p^{(\varepsilon)}=f^{(\varepsilon)}+(a^{ij}D_{ij}u)^{(\varepsilon)}-a^{ij}D_{ij}u^{(\varepsilon)}\] in \(Q_{r^{\prime}}(t_{0},x_{0})\) for \(0<r^{\prime}<\kappa r\) and for sufficiently small \(\varepsilon\). If we prove the estimate in the lemma for \(u^{(\varepsilon)}\), we get the desired result by letting \(\varepsilon\to 0\). Hence we may assume that \(u\) and \(p\) are infinitely differentiable. Since \((u,p)\) satisfies (5.1), it follows that \(g\in\mathcal{H}^{1}_{q,\mathrm{loc}}(\mathbb{R}^{d+1})\). By translation invariance, we may assume that \((t_{0},x_{0})=(0,0)\).
Let \(\zeta_{r}(x)\) and \(\psi_{r}(t)\) be infinitely differentiable functions defined on \(\mathbb{R}^{d}\) and \(\mathbb{R}\) satisfying \[\zeta_{r}(x)=1\quad\text{on }B_{2r/3},\quad\zeta_{r}(x)=0\quad\text{on }\mathbb{R}^{d}\setminus B_{r},\] \[\psi_{r}(t)=1\quad\text{on }t\in(-4r^{2}/9,4r^{2}/9),\quad\psi_{r}(t)=0\quad\text{on }\mathbb{R}\setminus(-r^{2},r^{2}).\] Set \(\phi_{r}(t,x)=\psi_{r}(t)\zeta_{r}(x)\). Then \(\phi_{r}=1\) on \(Q_{2r/3}\) and \(|D\phi_{r}|\leq 4/r\). Consider the following initial-value problem for Stokes equations: \[\begin{cases}\partial_{t}u_{1}-\overline{a}^{ij}(t)D_{ij}u_{1}+\nabla p_{1}=1_{Q_{\kappa r}}h&\text{in }(-(\kappa r)^{2},0)\times\mathbb{R}^{d},\\ \operatorname{div}u_{1}=\tilde{g}&\text{in }(-(\kappa r)^{2},0)\times\mathbb{R}^{d},\\ u_{1}=0&\text{on }\{t=-(\kappa r)^{2}\}\times\mathbb{R}^{d},\end{cases} \tag{5.3}\] where \[\begin{split} h(t,x)=[f+(a^{ij}-\overline{a}^{ij}(t))D_{ij}u],\quad\tilde{g}(t,x):=(g-[g(t,\cdot)]_{\zeta_{\kappa r},B_{\kappa r}})\phi_{\kappa r},\\ [f]_{\zeta_{r},B_{r}}:=\frac{1}{\left(\int_{B_{r}}\zeta_{r}dx\right)}\int_{B_{r}}f\zeta_{r}\,dx.\end{split}\] By using the Poincare inequality, it is easily seen that \[\|D\tilde{g}\|_{L_{q}((-(\kappa r)^{2},0)\times\mathbb{R}^{d})}\leq N(d,q)\|Dg\|_{L_{q}(Q_{\kappa r})}. \tag{5.4}\] Also, since \(g\in\mathcal{H}^{1}_{q,\mathrm{loc}}(\mathbb{R}^{d+1})\), it follows that \[\tilde{g}\in\mathring{\mathcal{H}}^{1}_{q}((-(\kappa r)^{2},0)\times\mathbb{R}^{d}).\] It remains to show the compatibility condition, i.e., there exists \(\tilde{G}\in L_{q}((-(\kappa r)^{2},0)\times\mathbb{R}^{d})^{d}\) such that \[\partial_{t}\tilde{g}=\operatorname{div}\tilde{G}\quad\text{in }(-(\kappa r)^{2},0)\times\mathbb{R}^{d}\] in the sense of (2.1). Note that \[\partial_{t}\tilde{g}=(\partial_{t}g-[\partial_{t}g(t,\cdot)]_{\zeta_{\kappa r},B_{\kappa r}})\phi_{\kappa r}+(g-[g(t,\cdot)]_{\zeta_{\kappa r},B_{\kappa r}})\partial_{t}\phi_{\kappa r}.\] By integrating it over \(B_{\kappa r}\), we have \[\int_{B_{\kappa r}}\partial_{t}\tilde{g}\,dx=0.\] Hence by Theorem 3.6, there exists \(G\in L_{q}(-(\kappa r)^{2},0;W^{1}_{q,0}(B_{\kappa r})^{d})\) such that \[\operatorname{div}G=\partial_{t}\tilde{g}\quad\text{in }(-(\kappa r)^{2},0)\times B_{\kappa r},\quad G=0\quad\text{on }(-(\kappa r)^{2},0)\times\partial B_{\kappa r}.\] Extend \(G\) to be zero outside \((-(\kappa r)^{2},0)\times B_{\kappa r}\) and denote this extension by \(\tilde{G}\). Since \(\tilde{g}\) has compact support in \((-(\kappa r)^{2},0)\times B_{\kappa r}\) and \(G(t,\cdot)=0\) on \((-(\kappa r)^{2},0)\times\partial B_{\kappa r}\) for \(t\in(-(\kappa r)^{2},0)\), we see that \[\operatorname{div}\tilde{G}=\partial_{t}\tilde{g}\quad\text{in }(-(\kappa r)^{2},0)\times\mathbb{R}^{d}\] in the sense of (2.1). Since \(u\in W^{1,2}_{q,\operatorname{loc}}(\mathbb{R}^{d+1})^{d}\) satisfies (5.1), it follows from Theorem 4.1 that there exists a unique strong solution \((u_{1},p_{1})\) to (5.3) satisfying \[u_{1}\in\mathring{W}^{1,2}_{q}((-(\kappa r)^{2},0)\times\mathbb{R}^{d})^{d},\quad\nabla p_{1}\in L_{q}((-(\kappa r)^{2},0)\times\mathbb{R}^{d})^{d}.\] Moreover, it follows from (4.3) and (5.4) that \[\|D^{2}u_{1}\|_{L_{q}((-(\kappa r)^{2},0)\times\mathbb{R}^{d})}\leq N\|1_{Q_{\kappa r}}[f+(a^{ij}-\overline{a}^{ij})D_{ij}u]\|_{L_{q}((-(\kappa r)^{2},0)\times\mathbb{R}^{d})} \tag{5.5}\] \[+N\|Dg\|_{L_{q}(Q_{\kappa r})},\] where \(N=N(d,q,\nu)>0\). Define \((u_{2},p_{2})=(u-u_{1},p-p_{1})\).
Then \((u_{2},p_{2})\) satisfies \[\begin{cases}\partial_{t}u_{2}-\overline{a}^{ij}(t)D_{ij}u_{2}+\nabla p_{2}=0,&\text{in }Q_{2\kappa r/3},\\ \operatorname{div}u_{2}=[g(t,\cdot)]_{\zeta_{\kappa r},B_{\kappa r}}&\text{in }Q_{2\kappa r/3}.\end{cases}\] Write \(\omega=\nabla\times u\), \(\omega_{1}=\nabla\times u_{1}\), and \(\omega_{2}=\nabla\times u_{2}\). By Lemma 4.4 (ii), we have \[\begin{split}(|D\omega_{2}-(D\omega_{2})_{Q_{r}}|)_{Q_{r}}&\leq N\kappa^{-1}(|D\omega_{2}|^{q})^{1/q}_{Q_{2\kappa r/3}}\\ &\leq N(d,q,\nu)\kappa^{-1}\left[(|D\omega|^{q})^{1/q}_{Q_{2\kappa r/3}}+(|D\omega_{1}|^{q})^{1/q}_{Q_{2\kappa r/3}}\right].\end{split} \tag{5.6}\] Since \(a^{ij}\) satisfies Assumption 2.4 (\(\delta\)) and \(a^{ij}\), \(\overline{a}^{ij}\) are bounded by \(\nu^{-1}\), it follows from (5.5) and Holder's inequality that \[(|D\omega_{1}|^{q})^{1/q}_{Q_{\kappa r}}\leq N(d,q,\nu)\left[(|f|^{q})^{1/q}_{Q_{\kappa r}}+\delta^{1/(q\mu^{\prime})}\left(|D^{2}u|^{q\mu}\right)^{1/(q\mu)}_{Q_{\kappa r}}+(|Dg|^{q})^{1/q}_{Q_{\kappa r}}\right]. \tag{5.7}\] By (5.6) and (5.7), we get \[\begin{split}&(|D\omega_{2}-(D\omega_{2})_{Q_{r}}|)_{Q_{r}}\\ &\leq N(d,q,\nu)\kappa^{-1}\left[(|D\omega|^{q})^{1/q}_{Q_{\kappa r}}+(|f|^{q})^{1/q}_{Q_{\kappa r}}+(|Dg|^{q})^{1/q}_{Q_{\kappa r}}+(|D^{2}u|^{q})^{1/q}_{Q_{\kappa r}}\right].\end{split}\] Since \(\omega=\omega_{1}+\omega_{2}\), the above inequality and (5.7) imply \[\begin{split}&\fint_{Q_{r}}|D\omega-(D\omega)_{Q_{r}}|\,dxdt\\ &\leq\fint_{Q_{r}}|D\omega_{2}-(D\omega_{2})_{Q_{r}}|\,dxdt+2\fint_{Q_{r}}|D\omega_{1}|\,dxdt\end{split}\] \[\leq N\kappa^{-1}\left[(|D^{2}u|^{q})_{Q_{\kappa r}}^{1/q}+(|f|^{q})_{Q_{\kappa r}}^{1/q}+(|Dg|^{q})_{Q_{\kappa r}}^{1/q}\right]\] \[\quad+N\kappa^{(d+2)/q}\left[(|f|^{q})_{Q_{\kappa r}}^{1/q}+\delta^{1/(q\mu^{\prime})}(|D^{2}u|^{q\mu})_{Q_{\kappa r}}^{1/(q\mu)}+(|Dg|^{q})_{Q_{\kappa r}}^{1/q}\right]\] for some constant \(N=N(d,q,\nu)>0\). This completes the proof of Lemma 5.1. The following proposition does not require the compatibility condition on \(g\) since it only involves a priori estimates for \(D^{2}u\).
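Before stating it, we record the elementary absorption step through which such mean oscillation estimates are used (a sketch): if a finite quantity \(A\) satisfies \[A\leq N\left(\kappa^{-1}+\kappa^{(d+2)/q}\delta^{1/(q\mu^{\prime})}\right)A+N(\kappa)B,\] then choosing first \(\kappa\) large so that \(N\kappa^{-1}\leq 1/4\) and then \(\delta\) small so that \(N\kappa^{(d+2)/q}\delta^{1/(q\mu^{\prime})}\leq 1/4\) yields \(A\leq 2N(\kappa)B\). This is precisely how \(\kappa\), \(\delta\), and \(R_{1}\) are chosen in the proofs below.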
**Proposition 5.2**.: _Let \(0<T<\infty\), \(K_{0}\geq 1\), \(1<q,s<\infty\), \(t_{1}\in\mathbb{R}\), \(w=w_{1}(x)w_{2}(t)\), where_ \[w_{1}\in A_{q}(\mathbb{R}^{d},dx),\quad w_{2}\in A_{s}(\mathbb{R},dt),\quad[w]_{A_{s,q}}\leq K_{0}.\] _Then there exist \(\delta>0\) and \(R_{1}>0\) such that under Assumption 2.4 (\(\delta\)), for any \(u\in\mathring{W}_{s,q,w}^{1,2}(\mathbb{R}_{T}^{d})^{d}\) vanishing outside \((t_{1}-(R_{0}R_{1})^{2},t_{1})\times\mathbb{R}^{d}\) and \(p\in W_{1,\mathrm{loc}}^{0,1}(\mathbb{R}_{T}^{d})\) satisfying_ \[\partial_{t}u-a^{ij}(t,x)D_{ij}u+\nabla p=f,\quad\mathrm{div}\,u=g\quad\text{in }\mathbb{R}_{T}^{d},\] _where \(f\in L_{s,q,w}(\mathbb{R}_{T}^{d})^{d}\) and \(g\in W_{s,q,w}^{0,1}(\mathbb{R}_{T}^{d})\), there exists a constant \(N=N(d,s,q,K_{0},\nu)>0\) such that_ \[\|D^{2}u\|_{L_{s,q,w}(\mathbb{R}_{T}^{d})}\leq N\left(\|f\|_{L_{s,q,w}(\mathbb{R}_{T}^{d})}+\|Dg\|_{L_{s,q,w}(\mathbb{R}_{T}^{d})}\right).\] Proof.: Since \(w_{1}\in A_{q}(\mathbb{R}^{d},dx)\) and \(w_{2}\in A_{s}(\mathbb{R},dt)\), it follows from Proposition 3.2 (iii) that there exist \(\sigma_{1}\) and \(\sigma_{2}\) such that \(q-\sigma_{1}>1\), \(s-\sigma_{2}>1\), and \[w_{1}\in A_{q-\sigma_{1}}(\mathbb{R}^{d},dx),\quad w_{2}\in A_{s-\sigma_{2}}(\mathbb{R},dt).\] Choose \(q_{0},\mu\in(1,\infty)\) so that \[q_{0}\mu=\min\left\{\frac{q}{q-\sigma_{1}},\frac{s}{s-\sigma_{2}}\right\}>1.\] By Proposition 3.2 (ii), we also have \[w_{1}\in A_{q-\sigma_{1}}\subset A_{q/(q_{0}\mu)}\subset A_{q/q_{0}}(\mathbb{R}^{d},dx),\] \[w_{2}\in A_{s-\sigma_{2}}\subset A_{s/(q_{0}\mu)}\subset A_{s/q_{0}}(\mathbb{R},dt).\] Then by Holder's inequality (see e.g. [20, Lemma 5.10]), we have \[u\in W_{q_{0}\mu,\mathrm{loc}}^{1,2}(\mathbb{R}_{T}^{d})^{d},\quad f\in L_{q_{0}\mu,\mathrm{loc}}(\mathbb{R}_{T}^{d})^{d},\quad\text{and}\quad g\in W_{q_{0}\mu,\mathrm{loc}}^{0,1}(\mathbb{R}_{T}^{d}).\] Let \(\kappa\geq 8\), \(0<\delta<1\), and \(R_{1}>0\) be constants to be specified below. For each \((t,x)\in\mathbb{R}_{T}^{d}\) and \(Q^{n}\in\mathbb{C}_{n}\) such that \((t,x)\in Q^{n}\), \(n\in\mathbb{Z}\), find \((t_{0},x_{0})\in\mathbb{R}_{T}^{d}\) and the smallest \(r\in(0,\infty)\) so that \(Q^{n}\subset Q_{r}(t_{0},x_{0})\) and \[\fint_{Q^{n}}|f(s,y)-f_{|_{n}}(t,x)|\,dsdy\leq N\fint_{Q_{r}(t_{0},x_{0})}|f(s,y)-(f)_{Q_{r}(t_{0},x_{0})}|\,dsdy,\] where \(N\) depends only on \(d\). On one hand, if \(r>R_{0}/\kappa\), since \(u\) vanishes outside \((t_{1}-(R_{0}R_{1})^{2},t_{1})\times\mathbb{R}^{d}\), we have \[\fint_{Q^{n}}|D\omega(s,y)-(D\omega)_{|_{n}}(t,x)|\,dsdy\] \[\leq N\fint_{Q_{r}(t_{0},x_{0})}|D\omega(s,y)-(D\omega)_{Q_{r}(t_{0},x_{0})}|\,dsdy\] \[\leq N(\kappa R_{1})^{2(1-1/q_{0})}\left(|D\omega|^{q_{0}}\right)_{Q_{r}(t_{0},x_{0})}^{1/q_{0}},\] where the last inequality follows from Holder's inequality and the fact that \(D\omega\) vanishes outside a time interval of length \((R_{0}R_{1})^{2}\leq(\kappa R_{1}r)^{2}\). On the other hand, if \(r\leq R_{0}/\kappa\), we apply Lemma 5.1 with \(q_{0}\) in place of \(q\). In either case, taking the supremum over all \(Q^{n}\ni(t,x)\), we bound \((D\omega)_{\mathrm{dy}}^{\#}(t,x)\) pointwise by maximal functions of powers of \(|D^{2}u|\), \(|f|\), and \(|Dg|\). Hence, applying Lemma 3.5 and then Lemma 3.4, we obtain \[\begin{split}&\|D\omega\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}\\ &\leq N\left(\kappa^{-1}+\kappa^{(d+2)/q_{0}}\delta^{1/(q_{0}\mu)}+\kappa^{2(1-1/q_{0})}R_{1}^{2(1-1/q_{0})}\right)\|D^{2}u\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}\\ &\quad+N(\kappa^{-1}+\kappa^{(d+2)/q_{0}})(\|f\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}+\|Dg\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})})\end{split} \tag{5.10}\] for some constant \(N=N(d,s,q,K_{0})>0\).
Since \(\operatorname{div}u=g\) in \(\mathbb{R}^{d}_{T}\), we get \[\Delta u^{i}=D_{i}g+\sum_{k\neq i}D_{k}(D_{k}u^{i}-D_{i}u^{k})\quad\text{in }\mathbb{R}^{d}_{T},\quad 1\leq i\leq d.\] Hence it follows from Corollary 3.9 (i) and (5.10) that \[\|D^{2}u\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}\] \[\leq N(\kappa^{-1}+\kappa^{(d+2)/q_{0}}\delta^{1/(q_{0}\mu)}+\kappa^{2(1-1/q_{0})}R_{1}^{2(1-1/q_{0})})\|D^{2}u\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}\] \[\quad+N(\kappa^{-1}+\kappa^{(d+2)/q_{0}})(\|f\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}+\|Dg\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})})\] for some constant \(N=N(d,\nu,s,q,K_{0})>0\). Choose \(\kappa\geq 8\) large so that \(N\kappa^{-1}\leq 1/6\) and choose \(0<\delta<1\) so that \(N\kappa^{(d+2)/q_{0}}\delta^{1/(q_{0}\mu)}\leq 1/6\). Finally, choose \(R_{1}>0\) so that \(N\kappa^{2(1-1/q_{0})}R_{1}^{2(1-1/q_{0})}\leq 1/6\). Then we get \[\|D^{2}u\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}\leq N(\|f\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}+\|Dg\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})})\] for some constant \(N=N(d,s,q,\nu,K_{0})>0\). This completes the proof of Proposition 5.2. Now we apply a "partition of unity in time" argument to remove the assumption that \(u\) has compact support in time as in [23, Lemma 6.5]. **Theorem 5.3**.: _Let \(0<T<\infty\), \(K_{0}\geq 1\), \(1<q,s<\infty\), \(w=w_{1}(x)w_{2}(t)\), where_ \[w_{1}\in A_{q}(\mathbb{R}^{d},dx),\quad w_{2}\in A_{s}(\mathbb{R},dt),\quad[w]_{A_{s,q}}\leq K_{0}.\] _There exists_ \[\delta=\delta(d,\nu,s,q,K_{0})\in(0,1)\] _such that under Assumption 2.4 \((\delta)\), if \((u,p)\) is a strong solution to (1.1) in \(\mathbb{R}^{d}_{T}\) with \(u(0,\cdot)=0\) on \(\mathbb{R}^{d}\) satisfying \(u\in\mathring{W}^{1,2}_{s,q,w}(\mathbb{R}^{d}_{T})^{d}\), \(f\in L_{s,q,w}(\mathbb{R}^{d}_{T})^{d}\), and \(g\in W^{0,1}_{s,q,w}(\mathbb{R}^{d}_{T})\), then we have_ \[\|D^{2}u\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}\leq N_{1}\left(\|f\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}+\|Dg\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}\right)+N_{2}\|u\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})},\] _where \(N_{1}=N_{1}(d,s,q,K_{0},\nu)>0\) and \(N_{2}=N_{2}(d,s,q,K_{0},\nu,R_{0})>0\). Moreover, if \(\nabla p\in L_{s,q,w}(\mathbb{R}^{d}_{T})^{d}\), \(g\in\mathring{\mathcal{H}}^{1}_{s,q,w}(\mathbb{R}^{d}_{T})\), and \(g_{t}=\operatorname{div}G\) for some \(G\in L_{s,q,w}(\mathbb{R}^{d}_{T})^{d}\) in the sense of (2.1), then there exists a constant \(N=N(d,\nu,s,q,K_{0},R_{0})>0\) such that_ \[\|\partial_{t}u\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}+\|D^{2}u\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}+\|\nabla p\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})} \tag{5.11}\] \[\leq N\left(\|f\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}+\|Dg\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}+\|G\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}+\|u\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}\right).\] Proof.: Take \(\delta>0\) and \(R_{1}>0\) as given in Proposition 5.2. Choose sequences \(t_{k}\in\mathbb{R}\) and \(\{\eta_{k}(t)\}\) so that \(\eta_{k}\geq 0\), \(\eta_{k}\in C^{\infty}_{0}(\mathbb{R})\), \(\operatorname{supp}\eta_{k}\subset(t_{k}-(R_{0}R_{1})^{2},t_{k})\) and \[1\leq\sum_{k=1}^{\infty}|\eta_{k}(t)|^{s}\leq\chi_{0},\quad\sum_{k=1}^{\infty}|\eta^{\prime}_{k}(t)|^{s}\leq\chi_{1}\quad\text{for all }t\in(0,T), \tag{5.12}\] where \(\chi_{0}\) depends only on \(s\), and \(\chi_{1}\) depends only on \(d\), \(s\), \(R_{0}\), and \(R_{1}\).
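One admissible choice is as follows (a sketch; any construction with these properties works): fix \(\zeta\in C^{\infty}_{0}((-1,0))\) with \(0\leq\zeta\leq 1\) and \(\zeta=1\) on \([-3/4,-1/4]\), set \(c=(R_{0}R_{1})^{2}\), \(t_{k}=kc/2\), and \[\eta_{k}(t):=\zeta\left(\frac{t-t_{k}}{c}\right),\quad k\in\mathbb{Z}.\] Then \(\operatorname{supp}\eta_{k}\subset(t_{k}-c,t_{k})\), every \(t\) belongs to \(\{\eta_{k}=1\}\) for some \(k\), at most two of the \(\eta_{k}\) are nonzero at any given \(t\), and \(|\eta^{\prime}_{k}|\leq\|\zeta^{\prime}\|_{L_{\infty}}c^{-1}\); relabeling the finitely many indices that meet \((0,T)\) by \(k=1,2,\ldots\) gives (5.12).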
Note that \(u_{k}(t,x):=u(t,x)\eta_{k}(t)\) and \(p_{k}(t,x):=p(t,x)\eta_{k}(t)\) satisfy \[\left\{\begin{aligned} (u_{k})_{t}-a^{ij}D_{ij}u_{k}+\nabla p_{k}&=\eta_{k}f+\eta^{\prime}_{k}u,\\ \operatorname{div}u_{k}&=\eta_{k}g\end{aligned}\right.\] in \(\mathbb{R}^{d}_{T}\). Then it follows from Proposition 5.2 that \[\|D^{2}u_{k}\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}\leq N\left(\|f\eta_{k}\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}+\|(Dg)\eta_{k}\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}+\|u\eta^{\prime}_{k}\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}\right) \tag{5.13}\] for some constant \(N=N(d,\nu,s,q,K_{0})>0\). By summing (5.13) over \(k\) and using (5.12), we get \[\|D^{2}u\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}\leq N_{1}\left(\|f\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}+\|Dg\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}\right)+N_{2}\|u\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}, \tag{5.14}\] where \(N_{1}=N_{1}(d,s,q,K_{0},\nu)>0\) and \(N_{2}=N_{2}(d,s,q,K_{0},\nu,R_{0})>0\). To show (5.11), since \((u,p)\) satisfies \(u\in\mathring{W}^{1,2}_{s,q,w}(\mathbb{R}^{d}_{T})^{d}\), \(\nabla p\in L_{s,q,w}(\mathbb{R}^{d}_{T})^{d}\), and \[\partial_{t}u-\Delta u+\nabla p=f+[a^{ij}(t,x)-\delta^{ij}]D_{ij}u,\quad\operatorname{div}u=g\quad\text{in }\mathbb{R}^{d}_{T},\] where \(g_{t}=\operatorname{div}G\) in the sense of (2.1) for some \(G\in L_{s,q,w}(\mathbb{R}^{d}_{T})^{d}\), it follows from Theorem 4.1 and (5.14) that \[\|\partial_{t}u\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}+\|D^{2}u\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}+\|\nabla p\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}\] \[\leq N(\|f\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}+\|Dg\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}+\|G\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}+\|D^{2}u\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})})\] \[\leq N(\|f\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}+\|Dg\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}+\|G\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}+\|u\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}),\] where \(N=N(d,s,q,\nu,K_{0},R_{0})>0\). This completes the proof of Theorem 5.3. The following lemma helps us absorb the term \(\|u\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}\) on the right-hand side of (5.11) into the left-hand side. It can be proved easily by using the fundamental theorem of calculus and Lemma 3.4, so the proof is omitted. **Lemma 5.4**.: _Let \(T>0\), \(1<s,q<\infty\), \(K_{0}\geq 1\), and let \(w(t,x)=w_{1}(x)w_{2}(t)\) satisfy_ \[[w]_{A_{s,q}}\leq K_{0}.\] _Then there exists a constant \(N=N(d,s,q,K_{0})>0\) such that_ \[\|u\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}\leq NT\|\partial_{t}u\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}\] _for all \(u\in\mathring{W}^{1,2}_{s,q,w}(\mathbb{R}^{d}_{T})\)._ Now we are ready to prove Theorem 2.5. Proof of Theorem 2.5.: Since \(u\in\mathring{W}^{1,2}_{s,q,w}(\mathbb{R}^{d}_{T})^{d}\), \(f\in L_{s,q,w}(\mathbb{R}^{d}_{T})^{d}\), and \(g\in\mathring{\mathcal{H}}^{1}_{s,q,w}(\mathbb{R}^{d}_{T})\), we can extend \(u\), \(f\), and \(g\) to be zero for \(t<0\) so that \(u\in W^{1,2}_{s,q,w}((-\infty,T)\times\mathbb{R}^{d})^{d}\), \(f\in L_{s,q,w}((-\infty,T)\times\mathbb{R}^{d})^{d}\), and \(g\in\mathcal{H}^{1}_{s,q,w}((-\infty,T)\times\mathbb{R}^{d})\).
Take a positive integer \(m\) to be specified below and set \[s_{j}=\frac{jT}{m},\quad j=-1,0,1,2,\ldots,m-1\] and \(\eta_{j}\in C^{\infty}(\mathbb{R})\) satisfying \[\eta_{j}(t)=1\quad\text{for }t\geq s_{j},\quad\eta_{j}(t)=0\quad\text{for }t\leq s_{j-1}\text{ with }|\eta^{\prime}_{j}|\leq\frac{2m}{T}.\] Note that \[\left\{\begin{aligned} \partial_{t}(\eta_{j}u)-a^{ij}(t,x)D_{ij}(\eta_{j}u)+\nabla(\eta_{j}p)&=\eta_{j}f+\eta^{\prime}_{j}u\\ \operatorname{div}(\eta_{j}u)&=\eta_{j}\operatorname{div}u=\eta_{j}g\end{aligned}\right.\] in \(\mathbb{R}^{d}_{T}\) and \((\eta_{j}u)(s_{j-1},\cdot)=0\) for \(j=0,1,2,\ldots,m-1\). Moreover, \((\eta_{j}g)_{t}=\operatorname{div}\tilde{G}\) for some \(\tilde{G}\in L_{s,q,w}(\mathbb{R}^{d}_{T})^{d}\) satisfying \[\|\tilde{G}\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}\leq(1+2m[w_{2}]^{1/s}_{A_{s}})\|G\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}.\] Indeed, by the compatibility condition (2.1), we have \[\int_{\mathbb{R}^{d}_{T}}g\varphi_{t}\,dxdt=\int_{\mathbb{R}^{d}_{T}}G_{i}D_{i}\varphi\,dxdt \tag{5.15}\] for all \(\varphi\in C^{\infty}_{0}([0,T)\times\mathbb{R}^{d})\). For \(\psi\in C^{\infty}_{0}([0,T)\times\mathbb{R}^{d})\), put \(\varphi(t,x)=\int_{t}^{T}\psi(s,x)\,ds\) in (5.15). Then the Fubini theorem gives \[\int_{\mathbb{R}^{d}_{T}}g\eta_{j}\psi_{t}\,dxdt=\int_{\mathbb{R}^{d}_{T}}\eta_{j}G_{i}D_{i}\psi\,dxdt+\int_{\mathbb{R}^{d}_{T}}\left(\int_{0}^{t}G_{i}(s,x)\,ds\right)(\eta_{j})_{t}D_{i}\psi\,dxdt\] for all \(\psi\in C^{\infty}_{0}([0,T)\times\mathbb{R}^{d})\). Then we can define \[\tilde{G}^{j}_{i}(t,x)=\eta_{j}(t)G_{i}(t,x)+\eta^{\prime}_{j}(t)\left(\int_{0}^{t}G_{i}(s,x)\,ds\right).\] By the Minkowski integral inequality, Holder's inequality, and the definition of \(A_{s}\) weights, we have \[\|\tilde{G}^{j}_{i}\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}\leq\|\eta_{j}G_{i}\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}+\left\|(\eta_{j})_{t}\int_{0}^{t}G_{i}(\tau,\cdot)\,d\tau\right\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})} \tag{5.16}\] \[\leq\|G\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}+\frac{2m}{T}\|G\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}\left(\int_{0}^{T}w_{2}^{-1/(s-1)}\,d\tau\right)^{(s-1)/s}\left(\int_{0}^{T}w_{2}\,d\tau\right)^{1/s}\] for all \(j=0,1,\ldots,m-1\). To proceed further, for simplicity, we write \[\|u\|_{(s_{j},s_{j+1})}:=\|u\|_{L_{s,q,w}((s_{j},s_{j+1})\times\mathbb{R}^{d})}.\] Then by Theorem 5.3 and Lemma 5.4, we have \[\|u\|_{(s_{j},s_{j+1})}\leq\|u\eta_{j}\|_{(s_{j-1},s_{j+1})} \tag{5.17}\] \[\leq N\left(\frac{T}{m}\right)\|\partial_{t}(\eta_{j}u)\|_{(s_{j-1},s_{j+1})}\] \[\leq N\left(\frac{T}{m}\right)\left(\|f\eta_{j}\|_{(s_{j-1},s_{j+1})}+\|\eta_{j}(Dg)\|_{(s_{j-1},s_{j+1})}+\|\tilde{G}\|_{(s_{j-1},s_{j+1})}+\|u\|_{(s_{j},s_{j+1})}\right),\] where \(N=N(d,\nu,s,q,K_{0},R_{0})>0\). Choose a sufficiently large integer \(m\) so that \(N(T/m)\leq 1/2\). Then by (5.16) and (5.17), we have \[\|u\|_{(s_{j},s_{j+1})}\leq N\left(\|f\|_{(0,s_{j+1})}+\|Dg\|_{(0,s_{j+1})}+\|G\|_{(0,s_{j+1})}+\|u\|_{(0,s_{j})}\right),\] where \(N=N(d,\nu,s,q,K_{0},R_{0},T)>0\) and \(j=0,1,\ldots,m-1\). By induction and noting that \(\|u\|_{L_{s,q,w}((0,s_{0})\times\mathbb{R}^{d})}=0\), we get \[\|\partial_{t}u\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}+\|u\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}+\|D^{2}u\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}+\|\nabla p\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}\] \[\leq N\left(\|f\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}+\|Dg\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}+\|G\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}\right),\] where \(N=N(d,\nu,s,q,K_{0},R_{0},T)>0\).
Hence by the method of continuity with Theorem 4.1, we get the desired solvability results for the problem (1.1). This completes the proof of Theorem 2.5. ## 6. Stokes equations in divergence form In this section, we prove Theorem 2.6. The proof of Theorem 2.6 is similar to that of Theorem 2.5 with some modifications. Rather than giving full details of the proof, we highlight the essential differences from the proof of Theorem 2.5. We first obtain a mean oscillation estimate for vorticity \(\omega=\nabla\times u\) of weak solutions \(u\) to (1.3). **Lemma 6.1**.: _Let \(\kappa\geq 8\), \(\delta\in(0,1)\), \(1<q<\infty\), \(\mu,\mu^{\prime}\in(1,\infty)\), \(1/\mu+1/\mu^{\prime}=1\), and \(a^{ij}\) satisfy Assumption 2.4 \((\delta)\). Then for any \(0<r\leq R_{0}/\kappa\), \((t_{0},x_{0})\in\mathbb{R}^{d+1}\), and \(u\in\mathcal{H}^{1}_{q\mu,\mathrm{loc}}(\mathbb{R}^{d+1})^{d}\) satisfying_ \[\partial_{t}u-D_{i}(a^{ij}(t,x)D_{j}u)+\nabla p=\mathrm{div}\,\mathbf{F},\quad\mathrm{div}\,u=g\quad\text{in }Q_{\kappa r}(t_{0},x_{0}),\] _where \(\mathbf{F}\in L_{q,\mathrm{loc}}(\mathbb{R}^{d+1})^{d\times d}\) and \(g\in L_{q,\mathrm{loc}}(\mathbb{R}^{d+1})\), we have_ \[\left(|\omega-(\omega)_{Q_{r}(t_{0},x_{0})}|\right)_{Q_{r}(t_{0},x_{0})}\] \[\leq N\kappa^{-1}\left[(|Du|^{q})_{Q_{\kappa r}(t_{0},x_{0})}^{1/q}+(|\mathbf{F}|^{q})_{Q_{\kappa r}(t_{0},x_{0})}^{1/q}\right]\] \[\quad+N\kappa^{(d+2)/q}\left[(|\mathbf{F}|^{q})_{Q_{\kappa r}(t_{0},x_{0})}^{1/q}+(|g|^{q})_{Q_{\kappa r}(t_{0},x_{0})}^{1/q}+\delta^{1/(q\mu^{\prime})}(|Du|^{q\mu})_{Q_{\kappa r}(t_{0},x_{0})}^{1/(q\mu)}\right]\] _for some constant \(N=N(d,q,\nu)>0\)._ Proof.: The proof is essentially the same as that of Lemma 5.1. The main difference is to apply Theorem 4.2 instead of Theorem 4.1. By translation invariance, we may assume that \((t_{0},x_{0})=(0,0)\). For a locally integrable function \(h\) defined on \(Q_{\kappa r}\), let \(h^{(\varepsilon)}\) denote the standard mollification defined in (5.2). Fix \(0<r^{\prime}<\kappa r\) and let \(\varphi\in C_{0}^{\infty}(Q_{r^{\prime}})\). Then for small \(\varepsilon>0\) satisfying \(\varepsilon^{2}<(\kappa r)^{2}-(r^{\prime})^{2}\), it is easy to verify that \[\int_{Q_{r^{\prime}}}h^{(\varepsilon)}(t,x)\partial_{t}\varphi(t,x)\,dtdx=\int_{Q_{\kappa r}}h(s,y)\partial_{t}\tilde{\varphi}^{(\varepsilon)}(s,y)\,dsdy,\] where \[\tilde{\varphi}^{(\varepsilon)}(s,y)=\int_{Q_{r^{\prime}}}\eta_{\varepsilon}(s-t,y-x)\varphi(t,x)\,dtdx. \tag{6.1}\] Then for \(\varphi\in C_{0}^{\infty}(Q_{r^{\prime}})^{d}\) satisfying \(\mathrm{div}\,\varphi(t,\cdot)=0\) in \(B_{r^{\prime}}\) for all \(t\in(-(r^{\prime})^{2},0)\), using \(\tilde{\varphi}^{(\varepsilon)}\) as a test function in the definition of weak solutions, we get \[-\int_{Q_{r^{\prime}}}u^{(\varepsilon)}\cdot\partial_{t}\varphi\,dz+\int_{Q_{r^{\prime}}}a^{ij}D_{j}u^{(\varepsilon)}\cdot D_{i}\varphi\,dz\] \[=-\int_{Q_{r^{\prime}}}\mathbf{F}^{(\varepsilon)}:\nabla\varphi\,dz-\int_{Q_{r^{\prime}}}[(a^{ij}D_{j}u)^{(\varepsilon)}-a^{ij}D_{j}u^{(\varepsilon)}]\cdot D_{i}\varphi\,dz.\] In other words, \(u^{(\varepsilon)}\) is a weak solution to \[\partial_{t}u^{(\varepsilon)}-D_{i}(a^{ij}D_{j}u^{(\varepsilon)})+\nabla p^{(\varepsilon)}=\mathrm{div}(\mathbf{F}^{(\varepsilon)}+\mathbf{H}^{\varepsilon})\quad\text{in }Q_{r^{\prime}},\] where \(\mathbf{H}^{\varepsilon}=(H_{1}^{\varepsilon},\ldots,H_{d}^{\varepsilon})\) and \(H_{i}^{\varepsilon}:=(a^{ij}D_{j}u)^{(\varepsilon)}-a^{ij}(t,x)D_{j}u^{(\varepsilon)}\) in \(Q_{r^{\prime}}\).
If we prove the estimate in the lemma for \(u^{(\varepsilon)}\), we get the desired result by letting \(\varepsilon\to 0\). Hence we may assume that \(u\) and \(g\) are infinitely differentiable. Let \(\zeta_{r}(x)\) and \(\psi_{r}(t)\) be infinitely differentiable functions defined on \(\mathbb{R}^{d}\) and \(\mathbb{R}\) satisfying \(0\leq\zeta_{r}\leq 1\), \(0\leq\psi_{r}\leq 1\), \[\zeta_{r}(x)=1\quad\text{on }B_{2r/3},\quad\zeta_{r}(x)=0\quad\text{on }\mathbb{R}^{d}\setminus B_{r},\] \[\psi_{r}(t)=1\quad\text{on }t\in(-4r^{2}/9,4r^{2}/9),\quad\psi_{r}(t)=0\quad\text{on }\mathbb{R}\setminus(-r^{2},r^{2}).\] Set \(\phi_{r}(t,x)=\psi_{r}(t)\zeta_{r}(x)\). Then \(\phi_{r}=1\) on \(Q_{2r/3}\) and \(|D\phi_{r}|\leq 4/r\). Since \[\int_{B_{\kappa r}}(g-[g(t,\cdot)]_{\zeta_{\kappa r},B_{\kappa r}})\phi_{\kappa r}\,dx=0,\] for each \(t\in[-(\kappa r)^{2},0)\), it follows from Theorem 3.6 that there exists \(G(t,\cdot)\in W^{1}_{q,0}(B_{\kappa r})^{d}\) satisfying \[\operatorname{div}G=(g-[g(t,\cdot)]_{\zeta_{\kappa r},B_{\kappa r}})\phi_{\kappa r}\quad\text{in }B_{\kappa r},\quad G=0\quad\text{on }\partial B_{\kappa r}\] and \[\|DG(t,\cdot)\|_{L_{q}(B_{\kappa r})}\leq N(d,q)\|(g(t,\cdot)-[g(t,\cdot)]_{\zeta_{\kappa r},B_{\kappa r}})\phi_{\kappa r}(t,\cdot)\|_{L_{q}(B_{\kappa r})} \tag{6.2}\] for \(t\in(-(\kappa r)^{2},0)\) and \(G(-(\kappa r)^{2},\cdot)=0\) on \(B_{\kappa r}\). Hence by (6.2) and Holder's inequality, we have \[\|DG\|_{L_{q}(Q_{\kappa r})}\leq N(d,q)\|g\|_{L_{q}(Q_{\kappa r})}. \tag{6.3}\] Now we choose \(\varphi\in C_{0}^{\infty}(B_{2})\) so that \(\varphi=0\) in \(B_{1}\) and \(\int_{B_{2}}\varphi\,dx=1\). Define \(\overline{G}^{i}(t,x)\) by \[\overline{G}^{i}(t,x):=\begin{cases}G^{i}(t,x)&\text{in }B_{\kappa r},\\ c^{i}(t)\varphi\left(\frac{x}{\kappa r}\right)&\text{in }B_{2\kappa r}\setminus B_{\kappa r},\\ 0&\text{in }B_{2\kappa r}^{c},\end{cases}\] where \(c^{i}(t)=-(\int_{B_{\kappa r}}G^{i}(t,x)\,dx)(\int_{B_{2\kappa r}}\varphi\left(\frac{x}{\kappa r}\right)\,dx)^{-1}\) so that \[\int_{B_{2\kappa r}}\overline{G}^{i}(t,x)\,dx=0.\] Define \[h(t,x):=\operatorname{div}\overline{G}(t,x).\] Note that \(h=(g-[g(t,\cdot)]_{\zeta_{\kappa r},B_{\kappa r}})\phi_{\kappa r}\) in \(Q_{\kappa r}\). Since \(g\) is infinitely differentiable in \(t\), \(h\) is also infinitely differentiable in \(t\). Moreover, since \(\int_{B_{2\kappa r}}\overline{G}^{i}\,dx=0\), it follows that \(\int_{B_{2\kappa r}}\partial_{t}\overline{G}^{i}\,dx=0\). Hence by Theorem 3.6, there exists \(H^{i}\in W^{1}_{q,0}(B_{2\kappa r})\) satisfying \[\operatorname{div}H^{i}=\partial_{t}\overline{G}^{i}\quad\text{in }B_{2\kappa r},\quad H^{i}=0\quad\text{on }\partial B_{2\kappa r}\] for each \(t\). Extend \(H^{i}\) to be zero outside \(B_{2\kappa r}\). Since \(\partial_{t}\overline{G}^{i}\) has compact support in \(B_{2\kappa r}\) for each \(t\in(-(\kappa r)^{2},0)\) and \(\overline{G}^{i}(-(\kappa r)^{2},\cdot)=0\), we have \[\int_{-(\kappa r)^{2}}^{0}\int_{\mathbb{R}^{d}}(\operatorname{div}H^{i})\phi\,dxdt=\int_{-(\kappa r)^{2}}^{0}\int_{\mathbb{R}^{d}}(\partial_{t}\overline{G}^{i})\phi\,dxdt=-\int_{-(\kappa r)^{2}}^{0}\int_{\mathbb{R}^{d}}\overline{G}^{i}\partial_{t}\phi\,dxdt\] for all \(\phi\in C_{0}^{\infty}([-(\kappa r)^{2},0)\times\mathbb{R}^{d})\).
For all \(\psi\in C_{0}^{\infty}([-(\kappa r)^{2},0)\times\mathbb{R}^{d})\), taking \(\phi=D_{i}\psi\), we have \[-\int_{-(\kappa r)^{2}}^{0}\int_{\mathbb{R}^{d}}H^{i}\cdot\nabla(D_{i}\psi)\,dxdt=\int_{-(\kappa r)^{2}}^{0}\int_{\mathbb{R}^{d}}h\partial_{t}\psi\,dxdt.\] Hence \(h\) satisfies the compatibility condition (4.5). By a change of variables, Holder's inequality, and the Poincare inequality, we get \[|c^{i}(t)|\leq\frac{N(d,q)}{(\kappa r)^{d/q}}\|G^{i}(t,\cdot)\|_{L_{q}(B_{\kappa r})}\leq\frac{N(d,q)}{(\kappa r)^{d/q}}(\kappa r)\|DG^{i}(t,\cdot)\|_{L_{q}(B_{\kappa r})}. \tag{6.4}\] Moreover, it follows from (6.3), (6.4), and a change of variables that \[\|h\|_{L_{q}((-(\kappa r)^{2},0)\times\mathbb{R}^{d})}\] \[\leq N(d,q)\left(\|DG\|_{L_{q}(Q_{\kappa r})}+\frac{1}{\kappa r}\|c(t)D\varphi(x/(\kappa r))\|_{L_{q}((-(\kappa r)^{2},0)\times(B_{2\kappa r}\setminus B_{\kappa r}))}\right)\] \[\leq N(d,q)\|g\|_{L_{q}(Q_{\kappa r})}.\] Now consider the following initial-value problem for Stokes equations: for \(l=1,\ldots,d\), \[\begin{cases}\partial_{t}u_{1}^{l}-D_{i}(\overline{a}^{ij}(t)D_{j}u_{1}^{l})+D_{l}p_{1}=D_{i}[1_{Q_{\kappa r}}H^{ij,l}]&\text{in }(-(\kappa r)^{2},0)\times\mathbb{R}^{d},\\ \operatorname{div}u_{1}=h&\text{in }(-(\kappa r)^{2},0)\times\mathbb{R}^{d},\\ u_{1}=0&\text{on }\{t=-(\kappa r)^{2}\}\times\mathbb{R}^{d},\end{cases}\] where \(H^{ij,l}=[F^{li}+(a^{ij}-\overline{a}^{ij}(t))D_{j}u^{l}]\). If we define \(u_{2}=u-u_{1}\) and \(p_{2}=p-p_{1}\), then \((u_{2},p_{2})\) satisfies \[\begin{cases}\partial_{t}u_{2}-D_{i}(\overline{a}^{ij}(t)D_{j}u_{2})+\nabla p_{2}=0&\text{in }Q_{2\kappa r/3},\\ \operatorname{div}u_{2}=[g(t,\cdot)]_{\zeta_{\kappa r},B_{\kappa r}}&\text{in }Q_{2\kappa r/3}.\end{cases}\] Since \(h\) satisfies the compatibility condition, following exactly the same argument as in the proof of Lemma 5.1, we can prove the desired result. Following exactly the same argument as in Proposition 5.2 using Lemma 6.1 instead of Lemma 5.1, one can prove the following proposition, whose proof is omitted. This proposition is necessary to perform a partition of unity in time argument. **Proposition 6.2**.: _Let \(0<T<\infty\), \(K_{0}\geq 1\), \(1<q,s<\infty\), \(t_{1}\in\mathbb{R}\), \(w=w_{1}(x)w_{2}(t)\), where_ \[w_{1}\in A_{q}(\mathbb{R}^{d},dx),\quad w_{2}\in A_{s}(\mathbb{R},dt),\quad[w]_{A_{s,q}}\leq K_{0}.\] _Then there exist \(\delta>0\) and \(R_{1}>0\) such that under Assumption 2.4 \((\delta)\), for any \(u\in\mathring{\mathcal{H}}^{1}_{s,q,w}(\mathbb{R}^{d}_{T})^{d}\) that vanishes outside \((t_{1}-(R_{0}R_{1})^{2},t_{1})\times\mathbb{R}^{d}\) and is a weak solution to_ \[\partial_{t}u-D_{i}(a^{ij}(t,x)D_{j}u)+\nabla p=\operatorname{div}\mathbf{F},\quad\operatorname{div}u=g\quad\text{in }\mathbb{R}^{d}_{T},\] _where \(\mathbf{F}\in L_{s,q,w}(\mathbb{R}^{d}_{T})^{d\times d}\) and \(g\in L_{s,q,w}(\mathbb{R}^{d}_{T})\), there exists a constant \(N=N(d,s,q,K_{0},\nu)>0\) such that_ \[\|Du\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}\leq N\left(\|\mathbf{F}\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}+\|g\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}\right).\] Using Proposition 6.2, we can prove the gradient estimate of weak solutions by following exactly the same argument as in Theorem 5.3.
**Theorem 6.3**.: _Let \(0<T<\infty\), \(K_{0}\geq 1\), \(1<q,s<\infty\), \(w=w_{1}(x)w_{2}(t)\), where_ \[w_{1}\in A_{q}(\mathbb{R}^{d},dx),\quad w_{2}\in A_{s}(\mathbb{R},dt),\quad[w]_{A_{s,q}}\leq K_{0}.\] _There exists \(\delta=\delta(d,\nu,s,q,K_{0})>0\) such that under Assumption 2.4 \((\delta)\), if \(u\in\mathring{\mathcal{H}}^{1}_{s,q,w}(\mathbb{R}^{d}_{T})^{d}\) is a weak solution to (1.3) in \(\mathbb{R}^{d}_{T}\) with \(u(0,\cdot)=0\) on \(\mathbb{R}^{d}\) satisfying \(g\in L_{s,q,w}(\mathbb{R}^{d}_{T})\) and \(\mathbf{F}\in L_{s,q,w}(\mathbb{R}^{d}_{T})^{d\times d}\), then_ \[\|Du\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}\leq N_{1}(\|\mathbf{F}\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}+\|g\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})})+N_{2}\|u\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})},\] _where \(N_{1}=N_{1}(d,s,q,K_{0},\nu)>0\) and \(N_{2}=N_{2}(d,s,q,K_{0},\nu,R_{0})>0\). Moreover, if \(p\in L_{s,q,w}(\mathbb{R}^{d}_{T})\) and \(g_{t}=\operatorname{div}\operatorname{div}\mathbf{G}\) for some \(\mathbf{G}\in L_{s,q,w}(\mathbb{R}^{d}_{T})^{d\times d}\) in the sense of (2.2), then_ \[\begin{split}&\|Du\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}+\|p\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}\\ &\leq N_{1}(\|\mathbf{F}\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}+\|g\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}+\|\mathbf{G}\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})})+N_{2}\|u\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}.\end{split} \tag{6.5}\] The key difference from the nondivergence form case is that it is hard to immediately absorb the term \(\|u\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}\) into the left-hand side of (6.5) by using the time derivative of \(u\) as in Lemma 5.4 (see also the proof of Theorem 5.3). Nevertheless, using a mollification argument, we can absorb the term into the left-hand side. See, for instance, Dong-Liu [25]. For the sake of completeness, we explain it in detail. Choose a radially symmetric \(\zeta\in C^{\infty}_{0}(\mathbb{R}^{d})\) satisfying \(\operatorname{supp}\zeta\subset B_{1}\) and \(\int_{\mathbb{R}^{d}}\zeta\,dx=1\), and for \(\varepsilon>0\), let \(\zeta^{\varepsilon}(x):=\varepsilon^{-d}\zeta(x/\varepsilon)\). The following lemma can be easily proved by the definition of mollification, the fundamental theorem of calculus, and Lemma 3.4 (see [25, Lemma A.2]). **Lemma 6.4**.: _Let \(1<q<\infty\), \(K_{0}\geq 1\), and \(w_{1}\in A_{q}(\mathbb{R}^{d},dx)\) with \([w_{1}]_{A_{q}}\leq K_{0}\). For \(\varepsilon>0\) and \(v\in W^{1}_{q,w_{1}}(\mathbb{R}^{d})\), define_ \[v^{(\varepsilon)}(x):=(v*\zeta^{\varepsilon})(x)=\int_{\mathbb{R}^{d}}\zeta^{\varepsilon}(x-y)v(y)\,dy.\] _Then there exists a constant \(N=N(d,q,K_{0})>0\) such that_ \[\|v^{(\varepsilon)}-v\|_{L_{q,w_{1}}(\mathbb{R}^{d})}\leq N\varepsilon\|Dv\|_{L_{q,w_{1}}(\mathbb{R}^{d})}.\] **Lemma 6.5**.: _Let \(1<s,q<\infty\), \(K_{0}\geq 1\), and let \(w=w_{1}(x)w_{2}(t)\) satisfy \([w]_{A_{s,q}}\leq K_{0}\).
If \(u\in\mathcal{H}^{1}_{s,q,w}(\mathbb{R}^{d}_{T})\) satisfies_ \[\partial_{t}u=f+\operatorname{div}F\quad\text{in }\mathbb{R}^{d}_{T} \tag{6.6}\] _for some \(f\in L_{s,q,w}(\mathbb{R}^{d}_{T})\) and \(F=(F^{1},\ldots,F^{d})\in L_{s,q,w}(\mathbb{R}^{d}_{T})^{d}\), then \(u^{(\varepsilon)}:=u*\zeta^{\varepsilon}\in W^{1,2}_{s,q,w}(\mathbb{R}^{d}_{T})\) and for any \(0<\varepsilon<1\), we have_ \[\|\partial_{t}u^{(\varepsilon)}\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}\leq\frac{N} {\varepsilon}\|F\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}+N\|f\|_{L_{s,q,w}(\mathbb{ R}^{d}_{T})}, \tag{6.7}\] _where \(N=N(d,s,q,K_{0})>0\)._ Proof.: Since \(u\) satisfies (6.6), we have \[-\int_{\mathbb{R}^{d}_{T}}u\partial_{t}\phi\,dxdt=\int_{\mathbb{R}^{d}_{T}}f \phi\,dxdt-\int_{\mathbb{R}^{d}_{T}}F\cdot\nabla\phi\,dxdt \tag{6.8}\] for all \(\phi\in C^{\infty}_{0}(\mathbb{R}^{d}_{T})\). Put \(\phi=\psi*\zeta^{\varepsilon}\) in (6.8), where \(\psi\in C^{\infty}_{0}(\mathbb{R}^{d}_{T})\). Then the Fubini theorem gives \[-\int_{\mathbb{R}^{d}_{T}}u^{(\varepsilon)}\partial_{t}\psi\,dxdt=\int_{ \mathbb{R}^{d}_{T}}f^{(\varepsilon)}\psi\,dxdt-\int_{\mathbb{R}^{d}_{T}}F^{( \varepsilon)}\cdot\nabla\psi\,dxdt. \tag{6.9}\] Note that \[|f*\zeta^{\varepsilon}(t,x)| =\left|\int_{B_{1}}f(t,x-\varepsilon y)\zeta(y)\,dy\right|\] \[\leq\|\zeta\|_{L_{\infty}}|B_{1}|\fint_{B_{1}}|f(t,x-\varepsilon y )|\,dy\] \[\leq\|\zeta\|_{L_{\infty}}|B_{1}|\mathcal{M}^{x}f(t,x),\] for \((t,x)\in\mathbb{R}_{T}^{d}\), where \(\mathcal{M}^{x}\) is the Hardy-Littlewood maximal function in \(x\). By Lemma 3.4, we have \[\|f*\zeta^{\varepsilon}\|_{L_{s,q,w}(\mathbb{R}_{T}^{d})}\leq N(d,s,q,K_{0})\|f \|_{L_{s,q,w}(\mathbb{R}_{T}^{d})}\] for \(0<\varepsilon<1\). Similarly, we have \[\|F^{i}*(D_{i}\zeta^{\varepsilon})\|_{L_{s,q,w}(\mathbb{R}_{T}^{d})}\leq\frac {N(d,s,q,K_{0})}{\varepsilon}\|D_{i}\zeta\|_{L_{\infty}(\mathbb{R}^{d})}\|F\|_ {L_{s,q,w}(\mathbb{R}_{T}^{d})}\] for \(0<\varepsilon<1\). Hence it follows from (6.9) and Holder's inequality that \[\left|\int_{\mathbb{R}_{T}^{d}}u^{(\varepsilon)}\partial_{t}\psi\,dxdt\right| \leq N(d,s,q,K_{0})\left(\|f\|_{L_{s,q,w}(\mathbb{R}_{T}^{d})}+\frac{1}{ \varepsilon}\|F\|_{L_{s,q,w}(\mathbb{R}_{T}^{d})}\right)\|\psi\|_{L_{s^{ \prime},q^{\prime},\tilde{w}}(\mathbb{R}_{T}^{d})},\] where \(\tilde{w}=w_{1}^{-1/(q-1)}w_{2}^{-1/(s-1)}\) and for all \(\psi\in C_{0}^{\infty}(\mathbb{R}_{T}^{d})\). Therefore by duality, \(\partial_{t}u^{(\varepsilon)}\) satisfies (6.7). This completes the proof of Lemma 6.5. Proof of Theorem 2.6.: By Theorem 6.3 and the method of continuity together with Theorem 4.2, it suffices to show that there exists a constant \(N=N(d,s,q,\nu,K_{0},T)>0\) such that \[\|u\|_{L_{s,q,w}(\mathbb{R}_{T}^{d})}\leq N(\|\mathbf{F}\|_{L_{s,q,w}(\mathbb{ R}_{T}^{d})}+\|g\|_{L_{s,q,w}(\mathbb{R}_{T}^{d})}+\|\mathbf{G}\|_{L_{s,q,w}( \mathbb{R}_{T}^{d})}).\] Since \(u\in\mathring{\mathcal{H}}^{1}_{s,q,w}(\mathbb{R}_{T}^{d})\), \(|\mathbf{F}|,g\in L_{s,q,w}(\mathbb{R}_{T}^{d})\), we can extend \(u\), \(\mathbf{F}\), and \(g\) to be zero for \(t<0\). Take a positive integer \(m\) to be specified below and set \[s_{j}=\frac{jT}{m},\quad j=-1,0,1,2,\ldots,m-1\] and \(\eta_{j}\in C^{\infty}(\mathbb{R})\), where \(\eta_{j}\) is defined in the proof of Theorem 2.5. It is easy to see that \(\eta_{k}u\in\mathring{\mathcal{H}}^{1}_{s,q,w}((s_{k-1},T)\times\mathbb{R}^{ d})^{d}\) for \(k=0,\ldots,m-1\). 
Note that \[\left\{\begin{aligned} \partial_{t}(\eta_{k}u)-D_{i}(a^{ij}D_{j}( \eta_{k}u))+\nabla(\eta_{k}p)&=\operatorname{div}(\eta_{k} \mathbf{F})+\eta_{k}^{\prime}u,\\ \operatorname{div}(\eta_{k}u)&=\eta_{k}\operatorname{ div}u=\eta_{k}g\end{aligned}\right. \tag{6.10}\] in \(\mathbb{R}_{T}^{d}\) and \((\eta_{k}u)(s_{k-1},\cdot)=0\) on \(\mathbb{R}^{d}\) for \(k=0,1,\ldots,m-1\). Similar to the non-divergence case, one can show that there exists \(\tilde{\mathbf{G}}^{k}\in L_{s,q,w}(\mathbb{R}_{T}^{d})^{d\times d}\) satisfying \((\eta_{k}g)_{t}=\operatorname{div}\operatorname{div}\tilde{\mathbf{G}}^{k}\) in the sense of (2.2) and \[\|\tilde{\mathbf{G}}^{k}\|_{L_{s,q,w}(\mathbb{R}_{T}^{d})}\leq(1+2m[w_{2}]^{1/ s}_{A_{s}})\|\mathbf{G}\|_{L_{s,q,w}(\mathbb{R}_{T}^{d})}. \tag{6.11}\] For simplicity, we write \[\|u\|_{(s_{k},s_{k+1})}=\|u\|_{L_{s,q,w}((s_{k},s_{k+1})\times\mathbb{R}^{d})}.\] Since \((u,p)\) satisfies (6.10), it follows from Lemma 6.5 that \[\begin{aligned} \|\partial_{t}(\eta_{k}u)^{\varepsilon}\|_{(0,T )}&\leq\frac{N}{\varepsilon}\left(\|\eta_{k}\mathbf{F}\|_{(0,T)}+ \|D_{j}(\eta_{k}u)\|_{(0,T)}+\|\eta_{k}p\|_{(0,T)}\right)\\ &\quad+N\|\eta_{k}^{\prime}u\|_{(0,T)}\end{aligned} \tag{6.12}\] for some \(N=N(d,s,q,K_{0},\nu)>0\). By Theorem 6.3, (6.10), and (6.11), there exists a constant \(N=N(d,s,q,\nu,K_{0},R_{0})>0\) such that \[\begin{split}&\|D(u\eta_{k})\|_{(s_{k},s_{k+1})}+\|\eta_{k}p\|_{(s_{ k},s_{k+1})}\\ &\leq\|Du\|_{(0,s_{k+1})}+\|p\|_{(0,s_{k+1})}\\ &\leq N\left(\|\mathbf{F}\|_{(0,T)}+(1+2m[w_{2}]_{A_{s}}^{1/s}) \|\mathbf{G}\|_{(0,T)}+\|g\|_{(0,T)}+\|u\|_{(0,s_{k+1})}\right)\end{split} \tag{6.13}\] for all \(k=0,\ldots,m-1\). Hence it follows from Lemmas 6.4, 5.4, (6.13), and (6.12) that \[\begin{split}&\|u\|_{(s_{k},s_{k+1})}\\ &\leq\|(u\eta_{k})^{\varepsilon}-u\eta_{k}\|_{(s_{k-1},s_{k+1})}+ \|(u\eta_{k})^{\varepsilon}\|_{(s_{k-1},s_{k+1})}\\ &\leq N\varepsilon\|D(u\eta_{k})\|_{(s_{k-1},s_{k+1})}+N\left( \frac{T}{m}\right)\|\partial_{t}(u\eta_{k})^{\varepsilon}\|_{(s_{k-1},s_{k+1 })}\\ &\leq N\varepsilon\left(\|\mathbf{F}\|_{(0,T)}+(1+2m[w_{2}]_{A_{ s}}^{1/s})\|\mathbf{G}\|_{(0,T)}+\|g\|_{(0,T)}+\|u\|_{(0,s_{k+1})}\right)\\ &\quad+N\left(\frac{T}{m\varepsilon}\right)\left(\|D_{j}(\eta_{k }u)\|_{(s_{k-1},s_{k+1})}+\|\eta_{k}p\|_{(s_{k-1},s_{k+1})}+\|\eta_{k}\mathbf{ F}\|_{(0,T)}\right)\\ &\quad+N\left(\frac{T}{m}\right)\|\eta_{k}^{\prime}u\|_{(0,T)}\\ &\leq N\varepsilon\left(\|\mathbf{F}\|_{(0,T)}+(1+2m[w_{2}]_{A_{ s}}^{1/s})\|\mathbf{G}\|_{(0,T)}+\|g\|_{(0,T)}+\|u\|_{(0,s_{k})}+\|u\|_{(s_{k},s_{k+1 })}\right)\\ &\quad+N\left(\frac{T}{m\varepsilon}\right)\left(\|\mathbf{F}\|_ {(0,T)}+\|u\|_{(0,s_{k})}+\|u\|_{(s_{k},s_{k+1})}\right)+N\|u\|_{(s_{k-1},s_{ k})}\\ &\leq N\left(\varepsilon+\frac{T}{m\varepsilon}\right)\|u\|_{(s_{ k},s_{k+1})}\\ &\quad+N\varepsilon\left(\|\mathbf{F}\|_{(0,T)}+(1+2m[w_{2}]_{A_{ s}}^{1/s})\|\mathbf{G}\|_{(0,T)}+\|g\|_{(0,T)}+\|u\|_{(0,s_{k})}\right)\\ &\quad+N\left(\frac{T}{m\varepsilon}\right)\left(\|\mathbf{F}\|_ {(0,T)}+\|u\|_{(0,s_{k})}\right)+N\|u\|_{(s_{k-1},s_{k})}\end{split}\] for some constant \(N=N(d,s,q,\nu,K_{0},R_{0})>0\). Choose \(\varepsilon>0\) sufficiently small and then choose \(m\) sufficiently large so that \(\|u\|_{(s_{k},s_{k+1})}\) is absorbed into the left-hand side. Then we have \[\|u\|_{(s_{k},s_{k+1})}\leq N\left(\|\mathbf{F}\|_{(0,T)}+\|\mathbf{G}\|_{(0, T)}+\|g\|_{(0,T)}+\|u\|_{(0,s_{k})}\right)+N\|u\|_{(s_{k-1},s_{k})}\] for some constant \(N=N(d,s,q,\nu,K_{0},T,R_{0})>0\) and \(k=0,\ldots,m-1\). 
By induction and noting that \(\|u\|_{(0,s_{0})}=0\), we get \[\|u\|_{(0,T)}\leq N\left(\|\mathbf{F}\|_{(0,T)}+\|\mathbf{G}\|_{(0,T)}+\|g\|_{(0,T)}\right)\] for some constant \(N=N(d,s,q,\nu,K_{0},T,R_{0})>0\). This completes the proof of Theorem 2.6. ## 7. Interior mixed-norm derivative estimates for Stokes equations This section is devoted to proving Theorems 2.7 and 2.8, which will be given in Subsections 7.1 and 7.2, respectively. ### Interior Hessian estimates for Stokes equations in nondivergence form To prove Theorem 2.7, we use the following theorem that was implicitly proved in Dong-Phan [26, Theorem 1.11]. **Theorem 7.1**.: _Let \(1<s,q<\infty\), \(\nu\in(0,1)\), and \(1/2\leq r<R\leq 1\). There exists \(\delta=\delta(d,\nu,s,q)\in(0,1)\) such that under Assumption 2.4 \((\delta)\), if \((u,p)\in\tilde{W}^{1,2}_{s,q}(Q_{R})^{d}\times W^{0,1}_{1}(Q_{R})\) is a strong solution to (1.1) in \(Q_{R}\), \(f\in L_{s,q}(Q_{R})^{d}\) and \(g\in W^{0,1}_{s,q}(Q_{R})\), then there exists a constant \(N=N(d,s,q,\nu,r,R,R_{0})>0\) such that_ \[\|D^{2}u\|_{L_{s,q}(Q_{r})}\leq N\left[\frac{1}{(R-r)^{b}}\|u\|_{L_{s,1}(Q_{R})}+\|f\|_{L_{s,q}(Q_{R})}+\|Dg\|_{L_{s,q}(Q_{R})}\right]\] _for some constant \(b=b(d,q)>2\)._ For the sake of completeness, we give a proof of Theorem 7.1 by using the following lemma. **Lemma 7.2** ([26, Lemma 4.13]).: _Let \(1/2\leq R<1\), \(R_{1}\in(0,R_{0})\), \(\delta\in(0,1)\), \(\kappa\in(0,1/4)\), \(1<s,q<\infty\), \(q_{1}\in(1,\min\{s,q\})\), and \(1<q_{0}<q_{1}\). Suppose that \((u,p)\in\tilde{W}^{1,2}_{s,q}(Q_{R+R_{1}})^{d}\times W^{0,1}_{1}(Q_{R+R_{1}})\) is a strong solution to (1.1) in \(Q_{R+R_{1}}\) for some \(f\in L_{s,q}(Q_{R+R_{1}})^{d}\) and \(g\in W^{0,1}_{s,q}(Q_{R+R_{1}})\). Then_ \[\|D^{2}u\|_{L_{s,q}(Q_{R})}\leq N\kappa^{-(d+2)/q_{0}}\|f\|_{L_{s,q}(Q_{R+R_{1}})}+N\kappa^{-(d+2)/q_{0}}\|Dg\|_{L_{s,q}(Q_{R+R_{1}})}\] \[+N\left(\kappa^{-(d+2)/q_{0}}\delta^{1/q_{0}-1/q_{1}}+\kappa\right)\|D^{2}u\|_{L_{s,q}(Q_{R+R_{1}})}\] \[+N\kappa^{-(d+2)/q_{0}}R_{1}^{-1}\|Du\|_{L_{s,q}(Q_{R+R_{1}})}\] _for some constant \(N=N(d,s,q,\nu)>0\)._ Proof of Theorem 7.1.: Fix \(1/2\leq r<R\leq 1\). For \(k=1,2,\ldots\), we write \[r_{k}=R-\frac{R-r}{2^{k-1}},\quad k=1,2,\ldots.\] Then \(r_{1}=r\) and \(r_{k}\) is increasing satisfying \(\lim_{k\to\infty}r_{k}=R\). Let \(k_{0}\) be the smallest positive integer \(k\) such that \(2^{-k}(R-r)\leq R_{0}\). For \(k\geq k_{0}\), we apply Lemma 7.2 with \(R=r_{k}\) and \(R_{1}=2^{-k}(R-r)\). Since \(r_{k}+R_{1}=r_{k+1}\), we get \[\|D^{2}u\|_{L_{s,q}(Q_{r_{k}})}\leq N\kappa^{-(d+2)/q_{0}}\|f\|_{L_{s,q}(Q_{r_{k+1}})}+N\kappa^{-(d+2)/q_{0}}\|Dg\|_{L_{s,q}(Q_{r_{k+1}})}\] \[+N\left(\kappa^{-(d+2)/q_{0}}\delta^{1/q_{0}-1/q_{1}}+\kappa\right)\|D^{2}u\|_{L_{s,q}(Q_{r_{k+1}})}\] \[+N\kappa^{-(d+2)/q_{0}}\frac{2^{k}}{R-r}\|Du\|_{L_{s,q}(Q_{r_{k+1}})}\] for some constant \(N=N(d,s,q,\nu)>0\).
By the Gagliardo-Nirenberg interpolation inequality, we have \[\|Du\|_{L_{s,q}(Q_{r})}\leq N(d,q)\|D^{2}u\|_{L_{s,q}(Q_{r})}^{\theta}\|u\|_{L_{s,1}(Q_{r})}^{1-\theta}+N(d,q)\|u\|_{L_{s,1}(Q_{r})},\] where \[\frac{1}{2}<\theta=\frac{1+1/d-1/q}{1+2/d-1/q}<1.\] Then by Young's inequality, we get \[\|D^{2}u\|_{L_{s,q}(Q_{r_{k}})}\leq N\kappa^{-(d+2)/q_{0}}\|f\|_{L_{s,q}(Q_{r_{k+1}})}+N\kappa^{-(d+2)/q_{0}}\|Dg\|_{L_{s,q}(Q_{r_{k+1}})}\] \[+N\left(\kappa^{-(d+2)/q_{0}}\delta^{1/q_{0}-1/q_{1}}+\kappa\right)\|D^{2}u\|_{L_{s,q}(Q_{r_{k+1}})}\] \[+N\kappa^{-(d+2)/q_{0}}\left(\frac{2^{k/(1-\theta)}}{(R-r)^{1/(1-\theta)}}+\frac{2^{k}}{R-r}\right)\|u\|_{L_{s,1}(Q_{r_{k+1}})},\] where the constant \(N\) depends only on \(d\) and \(q\). Choose \(\kappa\) sufficiently small and then \(\delta\) sufficiently small so that \[N\left(\kappa^{-(d+2)/q_{0}}\delta^{1/q_{0}-1/q_{1}}+\kappa\right)\leq 3^{-1/(1-\theta)}.\] Then multiply both sides of the inequality by \(3^{-k/(1-\theta)}\) and sum over \(k=k_{0},k_{0}+1,\ldots\) to obtain \[\|D^{2}u\|_{L_{s,q}(Q_{r})}\leq N\left[\left(\frac{1}{(R-r)}+\frac{1}{(R-r)^{b}}\right)\|u\|_{L_{s,1}(Q_{R})}+\|f\|_{L_{s,q}(Q_{R})}+\|Dg\|_{L_{s,q}(Q_{R})}\right],\] where \(N=N(d,s,q,\nu,r,R,R_{0})>0\) and \(b=1/(1-\theta)\). Since \(0<R-r<1/2\), we get the desired estimate. This completes the proof of Theorem 7.1. Another ingredient to prove Theorem 2.7 is the following regularity result for Stokes equations when the exterior force \(f\) is bounded and has compact support. **Lemma 7.3**.: _Let \(1<q_{0},s,q<\infty\) and \(0<T<\infty\). There exists \(\delta>0\) such that under Assumption 2.4 \((\delta)\), if \((u,p)\) is a strong solution to (1.1) in \(\mathbb{R}^{d}_{T}\) with \(u(0,\cdot)=0\) on \(\mathbb{R}^{d}\) satisfying_ \[u\in\mathring{W}^{1,2}_{q_{0}}(\mathbb{R}^{d}_{T})^{d},\quad\nabla p\in L_{q_{0}}(\mathbb{R}^{d}_{T})^{d},\] \(f\in L_{\infty}(\mathbb{R}^{d}_{T})^{d}\) _having compact support in \(\mathbb{R}^{d}_{T}\), and \(g=0\), then \((u,\nabla p)\in W^{1,2}_{s,q}(\mathbb{R}^{d}_{T})^{d}\times L_{s,q}(\mathbb{R}^{d}_{T})^{d}\)._ Proof.: Choose \(s_{1}\in(\max\{s,q_{0}\},\infty)\) and \(q_{1}\in(\max\{q,q_{0}\},\infty)\). Let \(q_{*}=\min\{q,q_{0}\}\) and define \(w(t,x)=w(x)=(1+|x|)^{\alpha}\), where \(d(q_{1}/q_{*}-1)<\alpha<d(q_{1}-1)\). Then by Proposition 3.2 (iv), \(w\in A_{s_{1},q_{1}}\). By Holder's inequality, for a measurable function \(h\) on \(\mathbb{R}^{d}\) we have \[\int_{\mathbb{R}^{d}}|h|^{r}\,dx\leq\left(\int_{\mathbb{R}^{d}}|h|^{q_{1}}(1+|x|)^{\alpha}\,dx\right)^{r/q_{1}}\left(\int_{\mathbb{R}^{d}}(1+|x|)^{-\frac{\alpha r}{q_{1}-r}}\,dx\right)^{1-r/q_{1}}\] for \(r\in\{q,q_{0}\}\). Note that the integral \[\int_{\mathbb{R}^{d}}(1+|x|)^{-\frac{\alpha r}{q_{1}-r}}\,dx\] is finite if and only if \(\alpha>d(q_{1}/r-1)\), which is satisfied by the choice of \(\alpha\) since \(q_{*}\leq r\). Since \((0,T)\) has a finite Lebesgue measure, this implies that \(L_{s_{1},q_{1},w}(\mathbb{R}^{d}_{T})\subset L_{s,q}(\mathbb{R}^{d}_{T})\cap L_{q_{0}}(\mathbb{R}^{d}_{T})\) for our specific weight \(w\). Similarly, \(W^{1,2}_{s_{1},q_{1},w}(\mathbb{R}^{d}_{T})\subset W^{1,2}_{s,q}(\mathbb{R}^{d}_{T})\cap W^{1,2}_{q_{0}}(\mathbb{R}^{d}_{T})\) for our specific weight \(w\).
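For completeness, the membership \(w\in A_{q_{1}}(\mathbb{R}^{d},dx)\) can also be checked directly (a sketch of the standard computation): for a ball \(B=B_{r}(x_{0})\) with \(r\leq(1+|x_{0}|)/2\) we have \(w\approx(1+|x_{0}|)^{\alpha}\) on \(B\), so the product in the definition of \(A_{q_{1}}\) is bounded by an absolute constant, while for \(r>(1+|x_{0}|)/2\), \[\fint_{B}(1+|x|)^{\alpha}\,dx\leq N(1+r)^{\alpha},\quad\fint_{B}(1+|x|)^{-\alpha/(q_{1}-1)}\,dx\leq N(1+r)^{-\alpha/(q_{1}-1)},\] where we used \(0<\alpha<d(q_{1}-1)\), and again the product is bounded.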
By Theorem 2.5, there exist \(\delta_{1}>0\) and strong solutions \((v_{1},p_{1})\), \((v_{2},p_{2})\) to (1.1) in \(\mathbb{R}^{d}_{T}\) with \(v_{1}(0,\cdot)=v_{2}(0,\cdot)=0\) on \(\mathbb{R}^{d}\) satisfying \((v_{1},\nabla p_{1})\in\mathring{W}^{1,2}_{s_{1},q_{1},w}(\mathbb{R}^{d}_{T})^{d}\times L_{s_{1},q_{1},w}(\mathbb{R}^{d}_{T})^{d}\) and \((v_{2},\nabla p_{2})\in\mathring{W}^{1,2}_{s,q}(\mathbb{R}^{d}_{T})^{d}\times L_{s,q}(\mathbb{R}^{d}_{T})^{d}\), where \(a^{ij}\) satisfies Assumption 2.4\((\delta_{1})\). Since \(\mathring{W}^{1,2}_{s_{1},q_{1},w}(\mathbb{R}^{d}_{T})\subset\mathring{W}^{1,2}_{s,q}(\mathbb{R}^{d}_{T})\), by the uniqueness assertion of Theorem 2.5, we conclude that \(v_{1}=v_{2}\). Choose \(0<\delta_{2}<\delta_{1}\) so that the uniqueness assertion in \(\mathring{W}^{1,2}_{q_{0}}(\mathbb{R}^{d}_{T})\) of Theorem 2.5 holds for \(a^{ij}\) satisfying Assumption 2.4\((\delta_{2})\). Since \((v_{1},\nabla p_{1})\in\mathring{W}^{1,2}_{q_{0}}(\mathbb{R}^{d}_{T})^{d}\times L_{q_{0}}(\mathbb{R}^{d}_{T})^{d}\) and \((u,p)\) is a strong solution to (1.1) in \(\mathbb{R}^{d}_{T}\) satisfying \((u,\nabla p)\in\mathring{W}^{1,2}_{q_{0}}(\mathbb{R}^{d}_{T})^{d}\times L_{q_{0}}(\mathbb{R}^{d}_{T})^{d}\), it follows from the uniqueness assertion that \(v_{1}=u\). Therefore, \(v_{1}=v_{2}=u\), and hence, by the equation, \(\nabla p=\nabla p_{2}\), which proves that \((u,\nabla p)\) belongs to \(W^{1,2}_{s,q}(\mathbb{R}^{d}_{T})^{d}\times L_{s,q}(\mathbb{R}^{d}_{T})^{d}\). This completes the proof of Lemma 7.3.

Proof of Theorem 2.7.: By mollifying in \((t,x)\), we have \[\partial_{t}u^{(\varepsilon)}-a^{ij}(t,x)D_{ij}u^{(\varepsilon)}+\nabla p^{(\varepsilon)}=f^{(\varepsilon)}+h^{\varepsilon},\quad\operatorname{div}u^{(\varepsilon)}=g^{(\varepsilon)}\quad\text{in }Q_{3/4}\] with \(0<\varepsilon<1/4\), where \[h^{\varepsilon}(t,x)=[a^{ij}(t,x)D_{ij}u]^{(\varepsilon)}(t,x)-a^{ij}(t,x)D_{ij}u^{(\varepsilon)}(t,x).\] By Theorem 2.5, there exist \(\delta_{1}>0\) and a unique strong solution \((u_{1}^{\varepsilon},p_{1}^{\varepsilon})\) to \[\partial_{t}u_{1}^{\varepsilon}-a^{ij}D_{ij}u_{1}^{\varepsilon}+\nabla p_{1}^{\varepsilon}=h^{\varepsilon}1_{Q_{3/4}},\quad\operatorname{div}u_{1}^{\varepsilon}=0\quad\text{in }(-1,0)\times\mathbb{R}^{d}\] with \(u_{1}^{\varepsilon}(-1,\cdot)=0\) on \(\mathbb{R}^{d}\) satisfying \[u_{1}^{\varepsilon}\in\mathring{W}_{q_{0}}^{1,2}((-1,0)\times\mathbb{R}^{d})^{d}\quad\text{and}\quad\nabla p_{1}^{\varepsilon}\in L_{q_{0}}((-1,0)\times\mathbb{R}^{d})^{d}.\] Moreover, we have \[\|u_{1}^{\varepsilon}\|_{W_{q_{0}}^{1,2}((-1,0)\times\mathbb{R}^{d})}+\|\nabla p_{1}^{\varepsilon}\|_{L_{q_{0}}((-1,0)\times\mathbb{R}^{d})}\leq N\|h^{\varepsilon}\|_{L_{q_{0}}(Q_{3/4})}, \tag{7.1}\] where \(N\) is independent of \(\varepsilon\). By Lemma 7.3, there exists \(\delta_{2}>0\) such that under Assumption 2.4\((\delta_{2})\), \(u_{1}^{\varepsilon}\in W_{s,q}^{1,2}((-1,0)\times\mathbb{R}^{d})^{d}\).
Moreover, if we define \(u_{2}^{\varepsilon}=u^{(\varepsilon)}-u_{1}^{\varepsilon}\) and \(p_{2}^{\varepsilon}=p^{(\varepsilon)}-p_{1}^{\varepsilon}\), then \(u_{2}^{\varepsilon}\in\mathring{W}_{s,q}^{1,2}(Q_{3/4})^{d}\) and \((u_{2}^{\varepsilon},p_{2}^{\varepsilon})\) is a solution to \[\partial_{t}u_{2}^{\varepsilon}-a^{ij}D_{ij}u_{2}^{\varepsilon}+\nabla p_{2}^{\varepsilon}=f^{(\varepsilon)},\quad\operatorname{div}u_{2}^{\varepsilon}=g^{(\varepsilon)}\quad\text{in }Q_{3/4}.\] Hence by Theorem 7.1, there exists \(0<\delta_{3}<\min\{\delta_{1},\delta_{2}\}\) such that under Assumption 2.4\((\delta_{3})\), we have \[\|D^{2}u_{2}^{\varepsilon}\|_{L_{s,q}(Q_{1/2})} \leq N\left(\|u_{2}^{\varepsilon}\|_{L_{s,1}(Q_{3/4})}+\|f^{(\varepsilon)}\|_{L_{s,q}(Q_{3/4})}+\|Dg^{(\varepsilon)}\|_{L_{s,q}(Q_{3/4})}\right)\] \[\leq N\left(\|u_{1}^{\varepsilon}\|_{L_{s,1}(Q_{3/4})}+\|u^{(\varepsilon)}\|_{L_{s,1}(Q_{3/4})}\right.\] \[\qquad\qquad\left.+\|f^{(\varepsilon)}\|_{L_{s,q}(Q_{3/4})}+\|Dg^{(\varepsilon)}\|_{L_{s,q}(Q_{3/4})}\right) \tag{7.2}\] for some constant \(N=N(d,s,q,\nu,R_{0})>0\). Since \(u\in L_{s,1}(Q_{1})^{d}\), \(f\in L_{s,q}(Q_{1})^{d}\), and \(g\in W_{s,q}^{0,1}(Q_{1})\), we have \(u^{(\varepsilon)}\to u\) in \(L_{s,1}(Q_{3/4})\), \(f^{(\varepsilon)}\to f\) in \(L_{s,q}(Q_{3/4})\), and \(Dg^{(\varepsilon)}\to Dg\) in \(L_{s,q}(Q_{3/4})\). Note that \(h^{\varepsilon}\to 0\) in \(L_{q_{0}}(Q_{3/4})\) as \(\varepsilon\to 0\). Hence by (7.1) and the Sobolev embedding theorem, we have \(\|u_{1}^{\varepsilon}\|_{L_{s,1}(Q_{3/4})}\to 0\) as \(\varepsilon\to 0\). This implies that there exists a constant \(N\) independent of \(\varepsilon\) such that \[\sup_{\varepsilon>0}\|D^{2}u_{2}^{\varepsilon}\|_{L_{s,q}(Q_{1/2})}\leq N.\] Hence by weak compactness in \(L_{s,q}(Q_{1/2})\), there exists a subsequence \(\{D^{2}u_{2}^{\varepsilon_{j}}\}\) of \(\{D^{2}u_{2}^{\varepsilon}\}\) which converges weakly to a function \(v\) in \(L_{s,q}(Q_{1/2})\). On the other hand, since \(D^{2}u^{(\varepsilon)}\to D^{2}u\) strongly in \(L_{q_{0}}(Q_{3/4})\) and \(D^{2}u_{1}^{\varepsilon}\to 0\) strongly in \(L_{q_{0}}(Q_{3/4})\) by (7.1) as \(\varepsilon\to 0^{+}\), it follows that \(D^{2}u_{2}^{\varepsilon}\to D^{2}u\) strongly in \(L_{q_{0}}(Q_{3/4})\). Hence it follows that \(D^{2}u=v\) in \(Q_{1/2}\). Therefore, by taking liminf in (7.2), we get the desired result. This completes the proof of Theorem 2.7.

### Interior gradient estimates for Stokes equations in divergence form

The following theorem is an analog of Theorem 7.1, which was implicitly proved in [26, Theorem 1.9].

**Theorem 7.4**.: _Let \(1<s,q<\infty\), \(\nu\in(0,1)\), and \(1/2\leq r<R\leq 1\). There exists \(\delta=\delta(d,\nu,s,q)\in(0,1)\) such that under Assumption 2.4\((\delta)\), if \(u\in W^{0,1}_{s,q}(Q_{R})^{d}\) is a weak solution to (1.1) in \(Q_{R}\), \(\mathbf{F}\in L_{s,q}(Q_{R})^{d\times d}\) and \(g\in L_{s,q}(Q_{R})\), then there exists a constant \(N=N(d,s,q,\nu,r,R,R_{0})>0\) such that_ \[\|Du\|_{L_{s,q}(Q_{r})}\leq N\left[\frac{1}{(R-r)^{b}}\|u\|_{L_{s,1}(Q_{R})}+\|\mathbf{F}\|_{L_{s,q}(Q_{R})}+\|g\|_{L_{s,q}(Q_{R})}\right]\] _for some \(b=b(d,q)>2\)._

We omit the proof of Theorem 7.4 since it is almost identical to that of Theorem 7.1, using [26, Lemma 3.11] stated below.

**Lemma 7.5**.: _Let \(1/2\leq R<1\), \(R_{1}\in(0,R_{0})\), \(\delta\in(0,1)\), \(\kappa\in(0,1/4)\), \(1<s,q<\infty\), \(q_{1}\in(1,\min\{s,q\})\), and \(1<q_{0}<q_{1}\).
Suppose that \(u\in W^{0,1}_{s,q}(Q_{R+R_{1}})^{d}\) is a weak solution to (1.3) in \(Q_{R+R_{1}}\) for some \(\mathbf{F}\in L_{s,q}(Q_{R+R_{1}})^{d\times d}\) and \(g\in L_{s,q}(Q_{R+R_{1}})\). Then_ \[\|Du\|_{L_{s,q}(Q_{R})} \leq N\kappa^{-(d+2)/q_{0}}\|\mathbf{F}\|_{L_{s,q}(Q_{R+R_{1}})}+N\kappa^{-(d+2)/q_{0}}\|g\|_{L_{s,q}(Q_{R+R_{1}})}\] \[+N\left(\kappa^{-(d+2)/q_{0}}\delta^{1/q_{0}-1/q_{1}}+\kappa\right)\|Du\|_{L_{s,q}(Q_{R+R_{1}})}\] \[+N\kappa^{-(d+2)/q_{0}}R_{1}^{-1}\|u\|_{L_{s,q}(Q_{R+R_{1}})}.\]

_Remark_.: The conditions \(p\in L_{1}(Q_{R+R_{1}})\) and \(u_{t}\in\mathbb{H}_{1}^{-1}(Q_{R+R_{1}})\) are not essential to the proof of [26, Lemma 3.11]. It suffices to derive a vorticity equation from Stokes equations with simple coefficients. For simplicity, we assume that \(\mathbf{F}=0\). For \(k,l=1,\ldots,d\) and \(\psi\in C_{0}^{\infty}(Q_{1})\), define \(\phi=(D_{l}\psi)e_{k}-(D_{k}\psi)e_{l}\). Then it is easy to see that \(\operatorname{div}\phi(t,\cdot)=0\) in \(B_{1}\) for \(t\in(-1,0)\), since mixed partial derivatives commute (a short symbolic sanity check is included below). For \(u=(u^{1},\ldots,u^{d})\), define \(\omega_{kl}=D_{k}u^{l}-D_{l}u^{k}\). If we use \(\phi\) as a test function in the definition of weak solutions, then it is easy to check that \(\omega_{kl}\) is a very weak solution of \[\partial_{t}\omega_{kl}-D_{i}(a^{ij}D_{j}\omega_{kl})=0\quad\text{in }Q_{1}.\]

Another ingredient for proving Theorem 2.8 is the following regularity lemma similar to Lemma 7.3, which can be proved by using Theorem 2.6 instead of Theorem 2.5; we omit the proof.

**Lemma 7.6**.: _Let \(1<q_{0},s,q<\infty\). There exists \(\delta>0\) such that under Assumption 2.4\((\delta)\), if \((u,p)\in\hat{\mathcal{H}}_{q_{0}}^{1}(\mathbb{R}_{T}^{d})^{d}\times L_{q_{0}}(\mathbb{R}_{T}^{d})\) is a weak solution of the problem (1.3) with \(u(0,\cdot)=0\) on \(\mathbb{R}^{d}\), \(\mathbf{F}\in L_{\infty}(\mathbb{R}_{T}^{d})^{d\times d}\) having compact support in \(\mathbb{R}_{T}^{d}\) and \(g=0\), then \(u\in\mathcal{H}_{s,q}^{1}(\mathbb{R}_{T}^{d})^{d}\)._

Proof of Theorem 2.8.: We may assume that \(u\in L_{s,1}(Q_{1})^{d}\) because otherwise the desired inequality is trivial. Suppose first that \(s>q_{0}\). Choose \(1<s_{0},s_{1},s_{2},q_{1},q_{2}<\infty\) so that \[\frac{1}{s_{k}}\leq\frac{1}{2}+\frac{1}{s_{k+1}},\quad q_{0}=s_{0}<s_{1}\leq s_{2}=s,\quad\text{and }q_{1}=q_{2}=q.\] We first show that \(Du\in L_{s_{1},q}(Q_{3/4})\). Fix \(\psi\in C_{0}^{\infty}(Q_{7/8})^{d}\) with \(\operatorname{div}\psi(t,\cdot)=0\) in \(B_{7/8}\) for \(t\in(-(7/8)^{2},0)\), and let \(0<\varepsilon<1/8\). If we use \(\phi=\tilde{\psi}^{(\varepsilon)}\) as a test function in the definition of weak solutions, where \(\tilde{\psi}^{(\varepsilon)}\) is defined in (6.1), then one can check that \(u^{(\varepsilon)}\) is a weak solution of \[\partial_{t}u^{(\varepsilon)}-D_{i}(a^{ij}D_{j}u^{(\varepsilon)})+\nabla p^{(\varepsilon)}=\operatorname{div}(\mathbf{F}^{(\varepsilon)}+\mathbf{H}^{\varepsilon})\quad\text{in }Q_{7/8}\] and \[\operatorname{div}u^{(\varepsilon)}=g^{(\varepsilon)}\quad\text{in }Q_{7/8},\] where \(\mathbf{H}^{\varepsilon}=(H^{\varepsilon}_{1},\dots,H^{\varepsilon}_{d})\) and \(H^{\varepsilon}_{i}=(a^{ij}D_{j}u)^{(\varepsilon)}-a^{ij}D_{j}u^{(\varepsilon)}\).
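As referenced in the remark above, here is a quick symbolic sanity check, illustrative only and not part of the argument, that the test field \(\phi=(D_{l}\psi)e_{k}-(D_{k}\psi)e_{l}\) is divergence-free for any smooth \(\psi\), by the symmetry of mixed partial derivatives.

```python
import sympy as sp

# Symbolic check (illustrative): phi = (D_l psi) e_k - (D_k psi) e_l is
# divergence-free for any smooth psi, since mixed partials commute.
x = sp.symbols("x1:4")                  # coordinates x1, x2, x3 (take d = 3)
psi = sp.Function("psi")(*x)

d, k, l = 3, 0, 1                       # test the pair (k, l) = (1, 2)
phi = [sp.S.Zero] * d
phi[k] = sp.diff(psi, x[l])             # (D_l psi) e_k
phi[l] = -sp.diff(psi, x[k])            # -(D_k psi) e_l

div_phi = sum(sp.diff(phi[i], x[i]) for i in range(d))
print(sp.simplify(div_phi))             # prints 0
```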
Then by Theorem 2.6, there exist \(\delta_{1}>0\) and a unique \((u^{\varepsilon}_{1},p^{\varepsilon}_{1})\in\hat{\mathcal{H}}^{1}_{q_{0}}((-1,0)\times\mathbb{R}^{d})^{d}\times L_{q_{0}}((-1,0)\times\mathbb{R}^{d})\) satisfying \[\partial_{t}u^{\varepsilon}_{1}-D_{i}(a^{ij}D_{j}u^{\varepsilon}_{1})+\nabla p^{\varepsilon}_{1}=\operatorname{div}(\mathbf{H}^{\varepsilon}1_{Q_{7/8}}),\quad\operatorname{div}u^{\varepsilon}_{1}=0\quad\text{in }(-1,0)\times\mathbb{R}^{d} \tag{7.3}\] with \(u^{\varepsilon}_{1}(-1,\cdot)=0\) on \(\mathbb{R}^{d}\). Moreover, we have \[\|u^{\varepsilon}_{1}\|_{\mathcal{H}^{1}_{q_{0}}((-1,0)\times\mathbb{R}^{d})}+\|p^{\varepsilon}_{1}\|_{L_{q_{0}}((-1,0)\times\mathbb{R}^{d})}\leq N\|\mathbf{H}^{\varepsilon}\|_{L_{q_{0}}(Q_{7/8})}, \tag{7.4}\] where the constant \(N\) is independent of \(\varepsilon\). Define \(u^{\varepsilon}_{2}=u^{(\varepsilon)}-u^{\varepsilon}_{1}\). By Lemma 7.6, there exists \(\delta_{2}>0\) such that under Assumption 2.4\((\delta_{2})\), \(u^{\varepsilon}_{1}\in\mathcal{H}^{1}_{s_{1},q}((-1,0)\times\mathbb{R}^{d})^{d}\), and hence \(u^{\varepsilon}_{2}\in\mathcal{H}^{1}_{s_{1},q}(Q_{7/8})^{d}\) is a weak solution to \[\partial_{t}u^{\varepsilon}_{2}-D_{i}(a^{ij}D_{j}u^{\varepsilon}_{2})+\nabla p^{\varepsilon}_{2}=\operatorname{div}\mathbf{F}^{(\varepsilon)}\quad\text{and}\quad\operatorname{div}u^{\varepsilon}_{2}=g^{(\varepsilon)}\quad\text{in }Q_{7/8}.\] By Theorem 7.4, there exists \(0<\delta_{3}<\min\{\delta_{1},\delta_{2}\}\) such that under Assumption 2.4\((\delta_{3})\), we have \[\|Du^{\varepsilon}_{2}\|_{L_{s_{1},q}(Q_{3/4})} \leq N\left(\|u^{\varepsilon}_{2}\|_{L_{s_{1},1}(Q_{7/8})}+\|\mathbf{F}^{(\varepsilon)}\|_{L_{s_{1},q}(Q_{7/8})}+\|g^{(\varepsilon)}\|_{L_{s_{1},q}(Q_{7/8})}\right)\] \[\leq N\left(\|u^{(\varepsilon)}\|_{L_{s_{1},1}(Q_{7/8})}+\|u^{\varepsilon}_{1}\|_{L_{s_{1},1}(Q_{7/8})}\right.\] \[\qquad\left.+\|\mathbf{F}^{(\varepsilon)}\|_{L_{s_{1},q}(Q_{7/8})}+\|g^{(\varepsilon)}\|_{L_{s_{1},q}(Q_{7/8})}\right)\] for some constant \(N=N(d,s,q,\nu,R_{0})>0\). Since \(u\in L_{s,1}(Q_{1})^{d}\), we see that \(u^{(\varepsilon)}\to u\) in \(L_{s_{1},1}(Q_{7/8})\), \(\mathbf{F}^{(\varepsilon)}\to\mathbf{F}\), and \(g^{(\varepsilon)}\to g\) in \(L_{s_{1},q}(Q_{7/8})\). Since \(Du\in L_{q_{0}}(Q_{1})\), it follows that \(\mathbf{H}^{\varepsilon}\to 0\) in \(L_{q_{0}}(Q_{7/8})\) as \(\varepsilon\to 0\). By (7.4) and Lemma 3.1, we have \(\|Du^{\varepsilon}_{1}\|_{L_{q_{0}}(Q_{7/8})}\to 0\) and \(\|u^{\varepsilon}_{1}\|_{L_{s_{1},q_{0}}(Q_{7/8})}\to 0\) as \(\varepsilon\to 0^{+}\), and hence \(\|u^{\varepsilon}_{1}\|_{L_{s_{1},1}(Q_{7/8})}\to 0\) as \(\varepsilon\to 0\). This implies that \[\sup_{\varepsilon>0}\|Du^{\varepsilon}_{2}\|_{L_{s_{1},q}(Q_{3/4})}\leq N\] for some constant \(N>0\). Hence by weak compactness in \(L_{s_{1},q}(Q_{3/4})\), there exists a subsequence \(\{Du^{\varepsilon_{j}}_{2}\}\) of \(\{Du^{\varepsilon}_{2}\}\) which converges weakly to a function \(v\) in \(L_{s_{1},q}(Q_{3/4})\). On the other hand, since \(Du^{(\varepsilon)}\to Du\) strongly in \(L_{q_{0}}(Q_{7/8})\) and \(Du^{\varepsilon}_{1}\to 0\) in \(L_{q_{0}}(Q_{7/8})\) by (7.4) as \(\varepsilon\to 0^{+}\), it follows that \(Du^{\varepsilon}_{2}\to Du\) strongly in \(L_{q_{0}}(Q_{7/8})\). Hence we conclude that \(v=Du\) in \(Q_{3/4}\). Therefore, under Assumption 2.4\((\delta_{3})\), \(Du\in L_{s_{1},q}(Q_{3/4})\) and \[\|Du\|_{L_{s_{1},q}(Q_{3/4})}\leq N\left(\|u\|_{L_{s_{1},1}(Q_{7/8})}+\|\mathbf{F}\|_{L_{s_{1},q}(Q_{7/8})}+\|g\|_{L_{s_{1},q}(Q_{7/8})}\right)\] for some constant \(N=N(d,s,q,\nu,R_{0})>0\).
Since \(Du\in L_{s_{1},q}(Q_{3/4})\), we see that \(\mathbf{H}^{\varepsilon}\to 0\) in \(L_{s_{1},q}(Q_{5/8})\) as \(\varepsilon\to 0\). Then by Theorem 2.6, there exist \(0<\delta_{4}<\delta_{3}\) and a unique \((u_{1}^{\varepsilon},p_{1}^{\varepsilon})\in\mathring{\mathcal{H}}_{s_{1},q}^{1}((-1,0)\times\mathbb{R}^{d})^{d}\times L_{s_{1},q}((-1,0)\times\mathbb{R}^{d})\) satisfying \[\partial_{t}u_{1}^{\varepsilon}-D_{i}(a^{ij}D_{j}u_{1}^{\varepsilon})+\nabla p_{1}^{\varepsilon}=\operatorname{div}(\mathbf{H}^{\varepsilon}1_{Q_{5/8}}),\quad\operatorname{div}u_{1}^{\varepsilon}=0\quad\text{in }(-1,0)\times\mathbb{R}^{d},\quad u_{1}^{\varepsilon}(-1,\cdot)=0\quad\text{on }\mathbb{R}^{d},\] where \(a^{ij}\) satisfies Assumption 2.4 (\(\delta_{4}\)). Moreover, we have \[\|u_{1}^{\varepsilon}\|_{\mathcal{H}_{s_{1},q}^{1}((-1,0)\times\mathbb{R}^{d})}+\|p_{1}^{\varepsilon}\|_{L_{s_{1},q}((-1,0)\times\mathbb{R}^{d})}\leq N\|\mathbf{H}^{\varepsilon}\|_{L_{s_{1},q}(Q_{5/8})}, \tag{7.5}\] where the constant \(N\) is independent of \(\varepsilon\). By (7.5) and Lemma 3.1, \(\|u_{1}^{\varepsilon}\|_{L_{s,q}(Q_{5/8})}\to 0\) as \(\varepsilon\to 0\). Hence, following the above compactness argument, we can show that \(Du\in L_{s,q}(Q_{1/2})\). Moreover, we have \[\|Du\|_{L_{s,q}(Q_{1/2})}\leq N\left(\|u\|_{L_{s,1}(Q_{5/8})}+\|\mathbf{F}\|_{L_{s,q}(Q_{5/8})}+\|g\|_{L_{s,q}(Q_{5/8})}\right)\] for some constant \(N=N(d,s,q,\nu,R_{0})>0\). In a similar way, we can also prove the case \(s\leq q_{0}\). This completes the proof of Theorem 2.8.

_Remark 7.7_.: (i) If the viscosity coefficient \(a^{ij}\) depends only on \(t\), then we can show that Theorem 2.8 holds if \(u\in L_{s,1}(Q_{1})^{d}\) is a very weak solution to (1.3) in \(Q_{1}\) for some \(\mathbf{F}\in L_{s,q}(Q_{1})^{d\times d}\) and \(g\in L_{s,q}(Q_{1})\). We say that \(u\in L_{s,1}(Q_{1})^{d}\) is a _very weak solution_ to (1.3) in \(Q_{1}\) if \[\int_{Q_{1}}u\cdot(\partial_{t}\phi+a^{ij}D_{ij}\phi)\,dxdt=-\int_{Q_{1}}\mathbf{F}:\nabla\phi\,dxdt\] for all \(\phi\in C_{0}^{\infty}(Q_{1})^{d}\) with \(\operatorname{div}\phi(t)=0\) in \(B_{1}\) for all \(t\in(-1,0)\), and \[-\int_{B_{1}}u\cdot\nabla\varphi\,dx=\int_{B_{1}}g\varphi\,dx\] for all \(\varphi\in C_{0}^{\infty}(B_{1})\) and a.e. \(t\in(-1,0)\). Let \(\phi\in C_{0}^{\infty}(\mathbb{R})\) and \(\zeta\in C_{0}^{\infty}(B_{1})\), where \(\phi=0\) if \(t\geq 0\), \(\int_{-1}^{0}\phi\,dt=1\), and \(\int_{B_{1}}\zeta\,dx=1\). Define \(\phi_{\eta}(t)=\eta^{-2}\phi(t/\eta^{2})\) and \(\zeta_{\varepsilon}(x)=\varepsilon^{-d}\zeta(x/\varepsilon)\).
For \((t,x)\in(-1+\eta^{2},0)\times B_{1-\varepsilon}\), define \[u^{(\eta,\varepsilon)}(t,x)=(u^{(\varepsilon)})^{(\eta)}(t,x) =\int_{-\eta^{2}}^{0}\int_{B_{\varepsilon}}u(t+s,x+y)\phi_{\eta}(s)\zeta_{\varepsilon}(y)\,dyds\] \[=\int_{Q_{1}}u(s,y)\phi_{\eta}(s-t)\zeta_{\varepsilon}(y-x)\,dsdy.\] Then it is easy to verify that for small \(\varepsilon,\eta>0\), \(u^{(\eta,\varepsilon)}\) is a weak solution to \[\partial_{t}u^{(\eta,\varepsilon)}-D_{i}(a^{ij}(t)D_{j}u^{(\eta,\varepsilon)})+\nabla p^{(\eta,\varepsilon)}=\operatorname{div}(\mathbf{F}^{(\eta,\varepsilon)}+\mathbf{H}^{\eta,\varepsilon})\quad\text{in }Q_{3/4},\] where \(\mathbf{H}^{\eta,\varepsilon}=(H_{1}^{\eta,\varepsilon},\ldots,H_{d}^{\eta,\varepsilon})\), \[H_{i}^{\eta,\varepsilon}(t,x)=(a^{ij}D_{j}u^{(\varepsilon)})^{(\eta)}-a^{ij}D_{j}u^{(\eta,\varepsilon)},\quad i=1,\ldots,d,\] and \[\operatorname{div}u^{(\eta,\varepsilon)}=g^{(\eta,\varepsilon)}\quad\text{in }Q_{3/4}.\] Following the argument as in the proof of Theorem 2.8, we can prove the desired result; we give a sketch of the proof. Since \(u\in L_{s,1}(Q_{1})^{d}\), we have \(\mathbf{H}^{\eta,\varepsilon}\in L_{s,q}(Q_{1})^{d\times d}\). By Theorem 4.2, there exists \((u_{1}^{\eta,\varepsilon},p_{1}^{\eta,\varepsilon})\in\tilde{\mathcal{H}}_{s,q}^{1}((-(3/4)^{2},0)\times\mathbb{R}^{d})^{d}\times L_{s,q}((-(3/4)^{2},0)\times\mathbb{R}^{d})\) satisfying (7.3) with \(u_{1}^{\eta,\varepsilon}(-(3/4)^{2},\cdot)=0\) on \(\mathbb{R}^{d}\), where \(\mathbf{H}^{\varepsilon}\) is replaced with \(\mathbf{H}^{\eta,\varepsilon}\). Moreover, we have \[\|u_{1}^{\eta,\varepsilon}\|_{\mathcal{H}_{s,q}^{1}((-(3/4)^{2},0)\times\mathbb{R}^{d})}\leq N\|\mathbf{H}^{\eta,\varepsilon}\|_{L_{s,q}(Q_{3/4})}\] for some constant \(N\) independent of \(\eta\). Define \(u_{2}^{\eta,\varepsilon}=u^{(\eta,\varepsilon)}-u_{1}^{\eta,\varepsilon}\). Then, following the argument as in the proof of Theorem 2.8, we have \[\|Du_{2}^{\eta,\varepsilon}\|_{L_{s,q}(Q_{1/2})} \leq N\left(\|u_{2}^{\eta,\varepsilon}\|_{L_{s,1}(Q_{3/4})}+\|\mathbf{F}^{(\eta,\varepsilon)}\|_{L_{s,q}(Q_{3/4})}+\|g^{(\eta,\varepsilon)}\|_{L_{s,q}(Q_{3/4})}\right)\] \[\leq N\left(\|u_{1}^{\eta,\varepsilon}\|_{L_{s,1}(Q_{3/4})}+\|u^{(\eta,\varepsilon)}\|_{L_{s,1}(Q_{3/4})}\right.\] \[\qquad\left.+\|\mathbf{F}^{(\eta,\varepsilon)}\|_{L_{s,q}(Q_{3/4})}+\|g^{(\eta,\varepsilon)}\|_{L_{s,q}(Q_{3/4})}\right)\] for some constant \(N=N(d,s,q,\nu)>0\). Note that for fixed \(\varepsilon>0\), \(H_{i}^{\eta,\varepsilon}\to 0\) in \(L_{s,q}(Q_{3/4})\) as \(\eta\to 0\). Hence it follows that \(\|u_{1}^{\eta,\varepsilon}\|_{L_{s,1}(Q_{3/4})}\to 0\) as \(\eta\to 0\) and \[\sup_{\eta}\|Du_{2}^{\eta,\varepsilon}\|_{L_{s,q}(Q_{1/2})}\leq N(\varepsilon),\] where \(N\) is independent of \(\eta\). Note that \(a^{ij}D_{j}u^{(\eta,\varepsilon)}\to a^{ij}D_{j}u^{(\varepsilon)}\) a.e. as \(\eta\to 0\). Also, it follows that \[|D_{j}u^{(\eta,\varepsilon)}(t,x)-D_{j}u^{(\varepsilon)}(t,x)|\leq N\mathcal{M}^{t}(D_{j}u^{(\varepsilon)})(t,x)\] for some constant \(N=N(d)>0\), where \(\mathcal{M}^{t}\) denotes the one-dimensional maximal function in \(t\). Hence by the Hardy-Littlewood maximal function theorem and the dominated convergence theorem, we can show that \(\mathbf{H}^{\eta,\varepsilon}\to 0\) in \(L_{s,q}(Q_{3/4})\) as \(\eta\to 0\) for fixed \(\varepsilon>0\).
By a compactness argument as in the proof of Theorem 2.7, we get \[\|Du^{(\varepsilon)}\|_{L_{s,q}(Q_{1/2})}\leq N\left(\|u^{(\varepsilon)}\|_{L_{s,1}(Q_{3/4})}+\|\mathbf{F}^{(\varepsilon)}\|_{L_{s,q}(Q_{3/4})}+\|g^{(\varepsilon)}\|_{L_{s,q}(Q_{3/4})}\right)\] for some constant \(N=N(d,s,q,\nu)>0\). Since \(u^{(\varepsilon)}\to u\) in \(L_{s,1}(Q_{3/4})\), \(\mathbf{F}^{(\varepsilon)}\to\mathbf{F}\), and \(g^{(\varepsilon)}\to g\) in \(L_{s,q}(Q_{3/4})\), it follows that \[\sup_{\varepsilon>0}\|Du^{(\varepsilon)}\|_{L_{s,q}(Q_{1/2})}\leq N.\] Hence by the previous compactness argument, it follows that \(Du\) exists in \(Q_{1/2}\) and belongs to \(L_{s,q}(Q_{1/2})\). Moreover, we have \[\|Du\|_{L_{s,q}(Q_{1/2})}\leq N\left(\|u\|_{L_{s,1}(Q_{3/4})}+\|\mathbf{F}\|_{L_{s,q}(Q_{3/4})}+\|g\|_{L_{s,q}(Q_{3/4})}\right)\] for some constant \(N=N(d,s,q,\nu)>0\).

(ii) Similarly, if \(u\in L_{s,1}(Q_{1})^{d}\) satisfies \[-\int_{Q_{1}}u\cdot(\partial_{t}\phi+a^{ij}D_{ij}\phi)\,dxdt=-\int_{Q_{1}}f\cdot\phi\,dxdt\] for all \(\phi\in C_{0}^{\infty}(Q_{1})^{d}\) with \(\operatorname{div}\phi(t)=0\) for \(t\in(-1,0)\), and \[-\int_{B_{1}}u\cdot\nabla\varphi\,dx=\int_{B_{1}}g\varphi\,dx\] for all \(\varphi\in C_{0}^{\infty}(B_{1})\) and a.e. \(t\in(-1,0)\), where \(f\in L_{s,q}(Q_{1})^{d}\) and \(g\in W_{s,q}^{0,1}(Q_{1})\), then \(D^{2}u\in L_{s,q}(Q_{1/2})\) and \[\|D^{2}u\|_{L_{s,q}(Q_{1/2})}\leq N\left(\|u\|_{L_{s,1}(Q_{1})}+\|f\|_{L_{s,q}(Q_{1})}+\|g\|_{W_{s,q}^{0,1}(Q_{1})}\right)\] for some constant \(N=N(d,s,q,\nu)>0\). Indeed, by well-known mixed-norm solvability results for heat equations in a bounded cylindrical domain (see e.g. [20]), there exists \(v\in W^{1,2}_{s,q}(Q_{1})^{d}\) such that \(\partial_{t}v-\Delta v=f\) in \(Q_{1}\) and \(v=0\) on \(\partial_{p}Q_{1}\). Moreover, we have \[\|v\|_{W^{1,2}_{s,q}(Q_{1})}\leq N\|f\|_{L_{s,q}(Q_{1})} \tag{7.6}\] for some constant \(N=N(d,s,q)>0\). Define \(w=u-v\). Then it is easy to show that \(w\) is a very weak solution to \[w_{t}-a^{ij}(t)D_{ij}w+\nabla p=D_{i}((a^{ij}-\delta^{ij})D_{j}v)\quad\text{and}\quad\operatorname{div}w=g-\operatorname{div}v\quad\text{in }Q_{1}.\] Hence it follows from the previous result and (7.6) that \(Dw\in L_{s,q}(Q_{3/4})\) and \[\begin{split}\|Dw\|_{L_{s,q}(Q_{3/4})}&\leq N\left(\|w\|_{L_{s,1}(Q_{1})}+\|Dv\|_{L_{s,q}(Q_{1})}+\|g\|_{L_{s,q}(Q_{1})}\right)\\ &\leq N\left(\|w\|_{L_{s,1}(Q_{1})}+\|f\|_{L_{s,q}(Q_{1})}+\|g\|_{L_{s,q}(Q_{1})}\right)\end{split} \tag{7.7}\] for some constant \(N=N(d,s,q,\nu)>0\). Since \(u=v+w\) and \(Dv\in L_{s,q}(Q_{3/4})\), we have \(Du\in L_{s,q}(Q_{3/4})\). Moreover, it follows from (7.6) and (7.7) that \[\begin{split}\|Du\|_{L_{s,q}(Q_{3/4})}&\leq N\left(\|w\|_{L_{s,1}(Q_{1})}+\|v\|_{L_{s,1}(Q_{1})}+\|f\|_{L_{s,q}(Q_{1})}+\|g\|_{L_{s,q}(Q_{1})}\right)\\ &\leq N\left(\|u\|_{L_{s,1}(Q_{1})}+\|f\|_{L_{s,q}(Q_{1})}+\|g\|_{L_{s,q}(Q_{1})}\right)\end{split}\] for some constant \(N=N(d,s,q,\nu)>0\). For \(1\leq k\leq d\), observe that \(D_{k}u\) is a very weak solution to \[\partial_{t}v-D_{i}(a^{ij}D_{j}v)+\nabla p=\operatorname{div}\mathbf{F},\quad\operatorname{div}v=D_{k}g\quad\text{in }Q_{3/4},\] where \(\mathbf{F}^{ij}=f^{i}\delta_{jk}\). Hence it follows from the previous result that \(D(D_{k}u)\in L_{s,q}(Q_{1/2})\) and \[\|D^{2}u\|_{L_{s,q}(Q_{1/2})}\leq N\left(\|D_{k}u\|_{L_{s,q}(Q_{3/4})}+\|f\|_{L_{s,q}(Q_{3/4})}+\|D_{k}g\|_{L_{s,q}(Q_{3/4})}\right)\] for some constant \(N=N(d,s,q,\nu)>0\).
Then, using the interpolation inequality on \(Du\) and a standard iteration argument as in the proof of Theorem 7.1, one can prove that \[\|D^{2}u\|_{L_{s,q}(Q_{1/2})}\leq N\left(\|u\|_{L_{s,1}(Q_{3/4})}+\|f\|_{L_{s,q}(Q_{3/4})}+\|Dg\|_{L_{s,q}(Q_{3/4})}\right)\] for some constant \(N=N(d,s,q,\nu)>0\). We omit the details.

## 8. Boundary mixed-norm Hessian estimates for Stokes equations

In this section, we briefly sketch how to obtain boundary mixed-norm Hessian estimates under the Lions boundary conditions. The details of this proof are omitted for the sake of brevity; essentially, it only involves the same procedures as in Sections 4, 5, and 7. As usual, we may assume that \(a^{ij}\) is symmetric. We first obtain a weighted mixed-norm estimate for Stokes equations in nondivergence form with simple coefficients, a weighted version of Dong-Kim-Phan [24, Theorem 1.4]. This result can be obtained by following the argument in the proof of Theorem 4.1 and the extension argument given in Dong-Kim-Phan [24, Theorem 1.4]. Then we obtain mean oscillation estimates for \(D\omega\) similar to Lemma 5.1, using the Hölder estimate for \(D\omega\) which was proved in [24, Lemma 3.2] and following arguments in Dong-Kim [20, Lemma 5.13] and [24, Lemmas 5.1 and 5.3]. Then under Assumption 2.4 (\(\delta\)), we can prove weighted mixed-norm solvability results for Stokes equations in nondivergence form under the Lions boundary conditions, following the proof of Theorem 2.5 and the method of continuity. In summary, we have the following theorem.

**Theorem 8.1**.: _Let \(1<s,q<\infty\), \(0<T<\infty\), and let \(K_{0}\geq 1\) be a constant, \(w=w_{1}(x)w_{2}(t)\), where_ \[w_{1}\in A_{q}(\mathbb{R}^{d},dx),\quad w_{2}\in A_{s}(\mathbb{R},dt),\quad[w]_{A_{s,q}}\leq K_{0}.\] _There exists \(0<\delta<1\) depending only on \(d\), \(\nu\), \(s\), \(q\), and \(K_{0}\) such that under Assumption 2.4\((\delta)\), for every \(f\in L_{s,q,w}((0,T)\times\mathbb{R}^{d}_{+})^{d}\), \(g\in\hat{\mathcal{H}}^{1}_{s,q,w}((0,T)\times\mathbb{R}^{d}_{+})\), and \(g_{t}=\operatorname{div}G\) for some vector field \(G=(G_{1},\ldots,G_{d})\in L_{s,q,w}((0,T)\times\mathbb{R}^{d}_{+})^{d}\) in the sense that_ \[\int_{(0,T)\times\mathbb{R}^{d}_{+}}g\varphi_{t}\,dxdt=\int_{(0,T)\times\mathbb{R}^{d}_{+}}G\cdot\nabla\varphi\,dxdt\] _for any \(\varphi\in C^{\infty}_{0}([0,T)\times\mathbb{R}^{d}_{+})\), there exists a unique strong solution \((u,p)\) to (1.1) in \((0,T)\times\mathbb{R}^{d}_{+}\) with \(u(0,\cdot)=0\) on \(\mathbb{R}^{d}_{+}\) satisfying_ \[u\in\mathring{W}^{1,2}_{s,q,w}((0,T)\times\mathbb{R}^{d}_{+})^{d},\quad\nabla p\in L_{s,q,w}((0,T)\times\mathbb{R}^{d}_{+})^{d},\] _and_ \[D_{d}u^{k}=u^{d}=0\quad\text{on }[0,T)\times\mathbb{R}^{d-1}\times\{0\},\quad k=1,2,\ldots,d-1.\] _Moreover, we have_ \[\|u\|_{W^{1,2}_{s,q,w}((0,T)\times\mathbb{R}^{d}_{+})}+\|\nabla p\|_{L_{s,q,w}((0,T)\times\mathbb{R}^{d}_{+})}\] \[\leq N\left(\|f\|_{L_{s,q,w}((0,T)\times\mathbb{R}^{d}_{+})}+\|Dg\|_{L_{s,q,w}((0,T)\times\mathbb{R}^{d}_{+})}+\|G\|_{L_{s,q,w}((0,T)\times\mathbb{R}^{d}_{+})}\right),\] _where \(N=N(d,s,q,K_{0},\nu,R_{0},T)>0\)._

To prove Theorem 2.10, let \(\tilde{u}^{k}\) be the even extensions of \(u^{k}\) with respect to \(x_{d}\), \(k=1,\ldots,d-1\), and \(\tilde{u}^{d}\) be the odd extension of \(u^{d}\) with respect to \(x_{d}\). Let \(\tilde{f}^{k}(t,\cdot)\) be the even extension of \(f^{k}(t,\cdot)\) for \(k=1,\ldots,d-1\), and \(\tilde{f}^{d}(t,\cdot)\) be the odd extension of \(f^{d}(t,\cdot)\).
Similarly, let \(\tilde{g}(t,\cdot)\) be the even extension of \(g(t,\cdot)\) with respect to \(x_{d}\), and let \(\tilde{p}\) be the even extension of \(p\) in \(x_{d}\). By (2.3), \(\tilde{u}\in W^{1,2}_{q_{0}}(Q_{1})^{d}\). Also, it is easy to verify that \(\tilde{p}\in W^{0,1}_{1}(Q_{1})\), \(\tilde{f}\in L_{s,q}(Q_{1})\), and \(\tilde{g}\in W^{0,1}_{s,q}(Q_{1})\). Moreover, \(\tilde{u}|_{Q^{+}_{1}}=u\), \(\tilde{p}|_{Q^{+}_{1}}=p\), \(\tilde{f}|_{Q^{+}_{1}}=f\), and \(\tilde{g}|_{Q^{+}_{1}}=g\). Define \(\overline{a}^{ij}(t,x^{\prime},x_{d})=a^{ij}(t,x^{\prime},x_{d})\) if \(x_{d}>0\). For \(x_{d}<0\), define \(\overline{a}^{ij}(t,x^{\prime},x_{d})\) to be \(a^{ij}(t,x^{\prime},-x_{d})\) if \(i,j=1,\ldots,d-1\), and \(\overline{a}^{id}(t,x^{\prime},x_{d})=\overline{a}^{di}(t,x^{\prime},x_{d}):=-a^{id}(t,x^{\prime},-x_{d})\) for \(i=1,\ldots,d-1\). Finally, we define \(\overline{a}^{dd}(t,x)=a^{dd}(t,x^{\prime},-x_{d})\). By a direct computation, \((\tilde{u},\tilde{p})\) satisfies \[\partial_{t}\tilde{u}-\overline{a}^{ij}D_{ij}\tilde{u}+\nabla\tilde{p}=\tilde{f},\quad\operatorname{div}\tilde{u}=\tilde{g}\quad\text{in }Q_{1},\] and \[D_{d}\tilde{u}^{k}=\tilde{u}^{d}=0\quad\text{on }(-1,0]\times B^{\prime}_{1}\times\{0\}.\] Choose a mollifier \(\varphi_{\varepsilon}\) which is symmetric with respect to the \(x_{d}\) variable, i.e., \(\varphi(s,y^{\prime},-y_{d})=\varphi(s,y^{\prime},y_{d})\). Define \[\tilde{u}^{(\varepsilon)}(t,x^{\prime},x_{d})=\int_{Q_{\varepsilon}}\varphi_{\varepsilon}(s,y^{\prime},y_{d})\tilde{u}(t-s,x^{\prime}-y^{\prime},x_{d}-y_{d})\,dyds\] for \((t,x)\in(-1+\varepsilon^{2},0)\times B_{1-\varepsilon}\), \(0<\varepsilon<1\). By a change of variables, we have \((\tilde{u}^{d})^{(\varepsilon)}(t,x^{\prime},0)=0\) since \(\varphi_{\varepsilon}\) is a symmetric mollifier with respect to the \(x_{d}\) variable. Similarly, we have \(D_{d}(\tilde{u}^{(\varepsilon)})^{k}(t,x^{\prime},0)=0\) for \(k=1,2,\ldots,d-1\). Hence \(\tilde{u}^{(\varepsilon)}\) satisfies the Lions boundary conditions (a short one-dimensional illustration of this symmetry argument is given below). Now we mollify the equation and write \[\partial_{t}\tilde{u}^{(\varepsilon)}-\overline{a}^{ij}(t,x)D_{ij}\tilde{u}^{(\varepsilon)}+\nabla\tilde{p}^{(\varepsilon)}=\tilde{f}^{(\varepsilon)}+h^{\varepsilon}\quad\text{in }Q_{3/4},\] where \[h^{\varepsilon}(t,x)=[\overline{a}^{ij}(t,x)D_{ij}\tilde{u}]^{(\varepsilon)}-\overline{a}^{ij}(t,x)D_{ij}\tilde{u}^{(\varepsilon)}.\] To apply Theorem 8.1, we need to extend \(\overline{a}^{ij}\) to the whole space so that the extended coefficients satisfy Assumption 2.4\((\delta)\). Since \(a^{ij}\) satisfies Assumption 2.9\((\delta)\), there exists \(0<R_{0}<1/4\) such that for each \((t_{0},x_{0})\in\overline{Q_{2}^{+}}\), there exists \(\hat{a}^{ij}(t)\) satisfying the uniform ellipticity conditions (1.2) such that \[\fint_{Q_{r}^{+}(t_{0},x_{0})}|\overline{a}^{ij}(t,x)-\hat{a}^{ij}(t)|\,dxdt\leq\delta\] for all \(0<r<R_{0}\). Choose \(\eta\in C_{0}^{\infty}(B_{7/4})\) satisfying \(\eta=1\) in \(B_{5/4}\) and define \[\tilde{a}^{ij}(t,x)=\overline{a}^{ij}(t,x)\eta(x)+\delta^{ij}(1-\eta(x)).\] Then \(\tilde{a}^{ij}\) is bounded and uniformly elliptic. By extending \(\tilde{a}^{ij}\) periodically in \(t\) if necessary, a direct computation shows that there exists \(0<R_{1}<R_{0}\) depending only on \(d\), \(\delta\), \(\nu\), and \(R_{0}\) such that for any \((t_{0},x_{0})\in\mathbb{R}^{d+1}\), we have \[\fint_{Q_{r}(t_{0},x_{0})}|\tilde{a}^{ij}-(\tilde{a}^{ij})_{B_{r}(x_{0})}(t)|\,dxdt\leq 4\delta\] for all \(0<r<R_{1}\).
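As a one-dimensional toy illustration of the symmetry argument above (not part of the proof; the grid and kernel below are arbitrary choices), convolving an odd function of \(x_{d}\) with a mollifier that is symmetric in \(x_{d}\) yields again an odd function, so the mollified trace at \(x_{d}=0\) vanishes; this is why \(\tilde{u}^{(\varepsilon)}\) inherits the Lions boundary conditions.

```python
import numpy as np

# 1D check: (odd function) * (symmetric kernel) is odd, so its value at 0 is 0.
x = np.linspace(-2, 2, 4001)                # x plays the role of x_d; x[2000] = 0
dx = x[1] - x[0]
u_odd = np.sin(3 * x) * np.exp(-x**2)       # odd model for the extension of u^d
kern = np.exp(-(x / 0.1) ** 2)              # mollifier symmetric in x_d
kern /= kern.sum() * dx                     # normalize to unit mass
u_moll = np.convolve(u_odd, kern, mode="same") * dx
print(abs(u_moll[len(x) // 2]))             # ~0 (machine precision): trace at x_d = 0
```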
By Theorem 8.1, there exists \((u_{1}^{\varepsilon},p_{1}^{\varepsilon})\) satisfying \((u_{1}^{\varepsilon},\nabla p_{1}^{\varepsilon})\in\hat{W}_{q_{0}}^{1,2}((-1,0)\times\mathbb{R}_{+}^{d})^{d}\times L_{q_{0}}((-1,0)\times\mathbb{R}_{+}^{d})^{d}\), with \(u_{1}^{\varepsilon}(-1,\cdot)=0\) on \(\mathbb{R}_{+}^{d}\), \[\partial_{t}u_{1}^{\varepsilon}-\tilde{a}^{ij}D_{ij}u_{1}^{\varepsilon}+\nabla p_{1}^{\varepsilon}=h^{\varepsilon}1_{Q_{3/4}^{+}},\quad\operatorname{div}u_{1}^{\varepsilon}=0\quad\text{in }(-1,0)\times\mathbb{R}_{+}^{d},\] and \[D_{d}(u_{1}^{\varepsilon})^{k}=(u_{1}^{\varepsilon})^{d}=0\quad\text{on }(-1,0)\times\mathbb{R}^{d-1}\times\{0\}\] for \(k=1,\ldots,d-1\). Moreover, we have \[\|u_{1}^{\varepsilon}\|_{W_{q_{0}}^{1,2}((-1,0)\times\mathbb{R}_{+}^{d})}\leq N\|h^{\varepsilon}\|_{L_{q_{0}}(Q_{3/4}^{+})}\] for some constant \(N\) independent of \(\varepsilon\). Define \(u_{2}^{\varepsilon}=\tilde{u}^{(\varepsilon)}-u_{1}^{\varepsilon}\) and \(p_{2}^{\varepsilon}=\tilde{p}^{(\varepsilon)}-p_{1}^{\varepsilon}\). Then, using Theorem 8.1 as in the proof of Lemma 7.3, one can show that \((u_{2}^{\varepsilon},p_{2}^{\varepsilon})\in\tilde{W}_{s,q}^{1,2}(Q_{3/4}^{+})^{d}\times W_{1}^{0,1}(Q_{3/4}^{+})\) is a strong solution to (1.1) in \(Q_{3/4}^{+}\), with \(\tilde{f}^{(\varepsilon)}\) and \(\tilde{g}^{(\varepsilon)}\) in place of \(f\) and \(g\), satisfying the Lions boundary conditions on \((-(3/4)^{2},0]\times B_{3/4}^{\prime}\times\{0\}\). Then, using a similar idea as in the proof of Theorem 2.7, we can prove the desired estimates by using Dong-Kim-Phan [24, Theorem 1.2] instead of Dong-Phan [26] (Theorem 7.1). We leave the details to interested readers.

## Appendix A Proof of Theorem 4.1

This section is devoted to a proof of Theorem 4.1, which concerns the solvability of Stokes equations in nondivergence form with simple coefficients.

Proof of Theorem 4.1.: The proof is almost identical to that of Theorem 4.1 in [24]. The proof of the existence part is split into five steps. From Step 1 to Step 3, the key differences are the use of Lemma 3.7 instead of Lemma 4.1 in [24] and the use of Theorem 3.8 when we construct the vorticity from a given external force. Following the construction of the pressure in Step 4 of the proof of Theorem 4.1 in [24], if we define \(h=f-\partial_{t}u+a^{ij}D_{ij}u\), then one can show that \[|\nabla p^{\varepsilon}(t,x)|\leq N(Mh)(t,x)\] for some constant \(N=N(d)>0\) and for all \(\varepsilon>0\) and \((t,x)\in\mathbb{R}_{T}^{d}\). Hence it follows from Lemma 3.4 that \[\|\nabla p^{\varepsilon}\|_{L_{s,q,w}(\mathbb{R}_{T}^{d})}\leq N(d,s,q,K_{0})\|h\|_{L_{s,q,w}(\mathbb{R}_{T}^{d})}\] is bounded uniformly in \(\varepsilon>0\). By subtracting a function of \(t\), we may assume that \((p^{\varepsilon}(t,\cdot))_{B_{1}}=0\) for \(t\in(0,T)\). Then for each \(R>1\), it follows from the Poincaré inequality (Lemma 3.3) that \[\|p^{\varepsilon}(t,\cdot)\|_{L_{q,w_{1}}(B_{R})}\leq N(d,q,K_{0},R)\|\nabla p^{\varepsilon}(t,\cdot)\|_{L_{q,w_{1}}(B_{R})}\] for each \(t\in[0,T]\). Taking the \(L_{s,w_{2}}\)-norm in \(t\), we get \[\|p^{\varepsilon}\|_{L_{s,q,w}((0,T)\times B_{R})}\leq N(d,s,q,K_{0},R)\|h\|_{L_{s,q,w}((0,T)\times B_{R})},\] which is uniformly bounded in \(\varepsilon\).
Hence by weak compactness results in weighted \(L_{q}\)-spaces, one can conclude that there exists a locally integrable function \(p:\mathbb{R}_{T}^{d}\to\mathbb{R}\) such that \(\nabla p\in L_{s,q,w}(\mathbb{R}_{T}^{d})^{d}\) and \((u,p)\) satisfies equation (4.1) in \(\mathbb{R}_{T}^{d}\). Although Step 5 is also similar to Step 5 in the proof of Theorem 4.1 in [24], we give a detailed proof for the reader's convenience.

_Step 5_. Since \(C_{0}^{\infty}(\mathbb{R}_{T}^{d})\) is dense in \(L_{s,q,w}(\mathbb{R}_{T}^{d})\), we need to show that there exist \(g^{m}\) and \(G^{m}\), vanishing for large \(|x|\) uniformly in \(t\in[0,T]\), such that \[g^{m}\in\mathring{\mathcal{H}}^{1}_{s,q,w}(\mathbb{R}_{T}^{d}),\quad G^{m}\in L_{s,q,w}(\mathbb{R}_{T}^{d})^{d},\quad\partial_{t}g^{m}=\operatorname{div}G^{m}\quad\text{in }\mathbb{R}_{T}^{d},\] and \[\|g-g^{m}\|_{L_{s,q,w}(\mathbb{R}_{T}^{d})}+\|Dg-Dg^{m}\|_{L_{s,q,w}(\mathbb{R}_{T}^{d})}+\|G-G^{m}\|_{L_{s,q,w}(\mathbb{R}_{T}^{d})}\to 0\] as \(m\to\infty\). Choose a sequence of smooth functions \(\{\chi_{m}\}\) on \(\mathbb{R}^{d}\) such that \(\chi_{m}=1\) on \(B_{m/2}\) and \(\chi_{m}=0\) outside \(B_{m}\), \(m=1,2,3,\dots\). Define \[c_{m}(t):=\frac{\int_{B_{m}}\nabla\chi_{m}(y)\cdot G(t,y)dy}{\int_{B_{m}}\chi_{m}(y)dy}.\] Note that \[\int_{B_{m}}(-\nabla\chi_{m}\cdot G+c_{m}(t)\chi_{m}(x))\,dx=0\] for a.e. \(t\in(0,T)\). Hence by Theorem 3.6, using the integral representation of solutions, we can find \(H^{m}\) in \((0,T)\times B_{m}\) such that \[\left\{\begin{aligned} \operatorname{div}H^{m}&=-\nabla\chi_{m}\cdot G+c_{m}(t)\chi_{m}(x)&\text{in }(0,T)\times B_{m},\\ H^{m}&=0&\text{on }(0,T)\times\partial B_{m},\end{aligned}\right.\] and \[\begin{split}&\|DH^{m}\|_{L_{s,q,w}((0,T)\times B_{m})}\\ &\leq N(d,s,q,K_{0})\left(\|\nabla\chi_{m}\cdot G\|_{L_{s,q,w}((0,T)\times B_{m})}+\|c_{m}(t)\chi_{m}(x)\|_{L_{s,q,w}((0,T)\times B_{m})}\right).\end{split}\] (A.1) By Hölder's inequality and the \(A_{q}\)-condition, we have \[\begin{split}&|c_{m}(t)|\|\chi_{m}\|_{L_{q,w_{1}}(B_{m})}\\ &\leq\frac{1}{\left|\int_{B_{m}}\chi_{m}(y)dy\right|}\left(\int_{B_{m}}|\nabla\chi_{m}(y)||G(t,y)|dy\right)\left(\int_{B_{m}}|\chi_{m}(x)|^{q}w_{1}(x)\,dx\right)^{1/q}\\ &\leq\frac{\|(\nabla\chi_{m})G(t)\|_{L_{q,w_{1}}(B_{m})}}{\left|\int_{B_{m}}\chi_{m}(y)dy\right|}\left(\int_{B_{m}}w_{1}(y)^{-1/(q-1)}dy\right)^{(q-1)/q}\left(\int_{B_{m}}|\chi_{m}(x)|^{q}w_{1}(x)\,dx\right)^{1/q}\\ &\leq\frac{|B_{m}|}{\left|\int_{B_{m}}\chi_{m}(y)dy\right|}[w_{1}]_{A_{q}}^{1/q}\|(\nabla\chi_{m})G(t)\|_{L_{q,w_{1}}(B_{m})}\leq N[w_{1}]_{A_{q}}^{1/q}\|(\nabla\chi_{m})G(t)\|_{L_{q,w_{1}}(B_{m})},\end{split}\] where \(N\) is independent of \(m\). This implies that \[\begin{split}&\|\nabla\chi_{m}\cdot G\|_{L_{s,q,w}((0,T)\times B_{m})}+\|c_{m}(t)\chi_{m}(x)\|_{L_{s,q,w}((0,T)\times B_{m})}\\ &\leq\|\nabla\chi_{m}\cdot G\|_{L_{s,q,w}((0,T)\times B_{m})}+N[w_{1}]_{A_{q}}^{1/q}\|\nabla\chi_{m}G\|_{L_{s,q,w}((0,T)\times B_{m})}\\ &\leq Nm^{-1}\|1_{B_{m}\setminus B_{m/2}}G\|_{L_{s,q,w}(\mathbb{R}_{T}^{d})},\end{split}\] (A.2) where \(N\) is independent of \(m\). Hence it follows from (A.1), (A.2), the weighted Poincaré inequality (Lemma 3.3) on \(B_{m}\), and the fact that \(\|1_{B_{m}\setminus B_{m/2}}G\|_{L_{s,q,w}(\mathbb{R}_{T}^{d})}\to 0\) as \(m\to\infty\) that \[\begin{split}\|H^{m}\|_{L_{s,q,w}((0,T)\times B_{m})}&\leq Nm\|DH^{m}\|_{L_{s,q,w}((0,T)\times B_{m})}\\ &\leq N\|1_{B_{m}\setminus B_{m/2}}G\|_{L_{s,q,w}(\mathbb{R}_{T}^{d})}\to 0\end{split}\] as \(m\to\infty\).
Define \[g^{m}(t,x):=\chi_{m}(x)g(t,x)+\chi_{m}(x)\int_{0}^{t}c_{m}(s)ds\] and \[G^{m}(t,x):=\begin{cases}\chi_{m}(x)G(t,x)+H^{m}(t,x)&\text{in }(0,T)\times B_{m},\\ 0&\text{in }(0,T)\times(\mathbb{R}^{d}\setminus B_{m}).\end{cases}\] Then it is easy to see that \(g^{m}\in\mathring{\mathcal{H}}_{s,q,w}^{1}(\mathbb{R}_{T}^{d})\) and \(\partial_{t}g^{m}(t,x)=\operatorname{div}G^{m}(t,x)\) in the sense of (2.1), and the rest of the result follows from the dominated convergence theorem. This completes the proof of the existence part of Theorem 4.1.

It remains to show the uniqueness part. We first apply the curl operator to the equation in the weak sense. Then \(\omega_{kl}=D_{k}u^{l}-D_{l}u^{k}\in L_{s,q,w}(\mathbb{R}_{T}^{d})\) is a very weak solution to the heat equation with simple coefficients, i.e., \[\int_{\mathbb{R}_{T}^{d}}\omega_{kl}(\partial_{t}\psi+a^{ij}(t)D_{ij}\psi)\,dxdt=0\] (A.3) for all \(\psi\in C_{0}^{\infty}([0,T)\times\mathbb{R}^{d})\). By a standard density argument, the identity holds for all \(\psi\in W^{1,2}_{s^{\prime},q^{\prime},\tilde{w}}(\mathbb{R}_{T}^{d})\) with \(\psi(T,x)=0\), where \(\tilde{w}=w_{1}^{-1/(q-1)}w_{2}^{-1/(s-1)}\). By Theorem 3.8 (i), given \(\varphi\in C_{0}^{\infty}(\mathbb{R}_{T}^{d})\), there exists a unique \(\psi\in W^{1,2}_{s^{\prime},q^{\prime},\tilde{w}}(\mathbb{R}_{T}^{d})\) with \(\psi(T,x)=0\) such that \[\partial_{t}\psi+a^{ij}(t)D_{ij}\psi=\varphi\quad\text{in }\mathbb{R}_{T}^{d}.\] If we put this \(\psi\) in (A.3), then we have \[\int_{\mathbb{R}_{T}^{d}}\varphi\omega_{kl}\,dxdt=0\] for all \(\varphi\in C_{0}^{\infty}(\mathbb{R}_{T}^{d})\). Hence \(\omega_{kl}\) is identically zero in \(\mathbb{R}_{T}^{d}\). Since \(\operatorname{div}u=0\) and \(\omega_{kl}\equiv 0\), the function \(u\in\hat{W}^{1,2}_{s,q,w}(\mathbb{R}_{T}^{d})^{d}\) satisfies \[\Delta u^{l}=\sum_{k\neq l}D_{k}(D_{k}u^{l}-D_{l}u^{k})=0\quad\text{in }\mathbb{R}_{T}^{d}\] for all \(l=1,\ldots,d\), so it follows from the mean value property of harmonic functions, Hölder's inequality, and the \(A_{q}\)-condition that \[|u(t,x)| \leq\fint_{B_{R}(x)}|u(t,y)|\,dy\] \[\leq\frac{1}{|B_{R}(x)|}|B_{R}(x)|^{1-1/q}\|u(t,\cdot)\|_{L_{q,w_{1}}(\mathbb{R}^{d})}\left(\fint_{B_{R}(x)}w_{1}^{-\frac{1}{q-1}}\,dy\right)^{1-1/q}\] \[\leq\frac{[w_{1}]_{A_{q}}^{1/q}}{w_{1}(B_{R})^{1/q}}\|u(t,\cdot)\|_{L_{q,w_{1}}(\mathbb{R}^{d})}\] for a.e. \(t\in(0,T)\), for all \(x\in\mathbb{R}^{d}\), and for all \(R>0\). Since \(w_{1}(B_{R})\to\infty\) as \(R\to\infty\) (Proposition 3.2 (vi)), it follows that \(u=0\) a.e. on \(\mathbb{R}_{T}^{d}\) and hence \(\nabla p=0\). This completes the proof of Theorem 4.1.

## Appendix B Proof of Theorem 4.2

This section is devoted to a proof of Theorem 4.2, which concerns the solvability of Stokes equations in divergence form with simple coefficients.

Proof of Theorem 4.2.: We first show the existence of weak solutions. Consider \[\left\{\begin{aligned} \partial_{t}u_{1}-D_{i}(a^{ij}D_{j}u_{1})+\nabla\pi&=\operatorname{div}\mathbf{F}\quad\text{in }\mathbb{R}_{T}^{d},\\ \operatorname{div}u_{1}&=0&\text{in }\mathbb{R}_{T}^{d},\\ u_{1}&=0&\text{on }\{t=0\}\times\mathbb{R}^{d}\end{aligned}\right.\] (B.1) and \[\left\{\begin{aligned} \partial_{t}u_{2}-D_{i}(a^{ij}D_{j}u_{2})+\nabla\tilde{\pi}&=0&\text{in }\mathbb{R}_{T}^{d},\\ \operatorname{div}u_{2}&=g&\text{in }\mathbb{R}_{T}^{d},\\ u_{2}&=0&\text{on }\{t=0\}\times\mathbb{R}^{d}.\end{aligned}\right.\] (B.2) Write \(\mathbf{F}=(f^{1},f^{2},\ldots,f^{d})\), where \(f^{i}\) is a vector field.
Then by Theorem 4.1, there exists a strong solution \((v_{k},\pi_{k})\) satisfying \[v_{k}\in\hat{W}^{1,2}_{s,q,w}(\mathbb{R}_{T}^{d})^{d},\quad\nabla\pi_{k}\in L_{s,q,w}(\mathbb{R}_{T}^{d})^{d},\] and \[\partial_{t}v_{k}-D_{i}(a^{ij}(t)D_{j}v_{k})+\nabla\pi_{k}=f^{k},\quad\operatorname{div}v_{k}=0\] (B.3) for \(k=1,\ldots,d\). Moreover, we have \[\|D^{2}v_{k}\|_{L_{s,q,w}(\mathbb{R}_{T}^{d})}+\|\nabla\pi_{k}\|_{L_{s,q,w}(\mathbb{R}_{T}^{d})}\leq N_{1}\|f^{k}\|_{L_{s,q,w}(\mathbb{R}_{T}^{d})}\] and \[\|v_{k}\|_{W^{1,2}_{s,q,w}(\mathbb{R}^{d}_{T})}+\|\nabla\pi_{k}\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}\leq N_{2}\|f^{k}\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}\] for some constants \(N_{1}=N_{1}(d,s,q,K_{0},\nu)>0\), \(N_{2}=N_{2}(d,s,q,K_{0},\nu,T)>0\) and for all \(k=1,\ldots,d\). Define \(u_{1}=\sum_{k=1}^{d}D_{k}v_{k}\) and \(\pi=\sum_{k=1}^{d}D_{k}\pi_{k}\). Then \((u_{1},\pi)\in\mathring{\mathcal{H}}^{1}_{s,q,w}(\mathbb{R}^{d}_{T})^{d}\times L_{s,q,w}(\mathbb{R}^{d}_{T})\) is a weak solution of (B.1). Indeed, since \((v_{k},\pi_{k})\) is a strong solution of (B.3), we have \[-\int_{\mathbb{R}^{d}_{T}}v_{k}\cdot\partial_{t}\phi\,dxdt+\int_{\mathbb{R}^{d}_{T}}(a^{ij}(t)D_{j}v_{k})\cdot D_{i}\phi-\pi_{k}\operatorname{div}\phi\,dxdt=\int_{\mathbb{R}^{d}_{T}}f^{k}\cdot\phi\,dxdt\] for all \(\phi\in C^{\infty}_{0}([0,T)\times\mathbb{R}^{d})^{d}\). For \(\psi\in C^{\infty}_{0}([0,T)\times\mathbb{R}^{d})^{d}\), put \(\phi=D_{k}\psi\) in the identity. Then we have \[-\int_{\mathbb{R}^{d}_{T}}(D_{k}v_{k})\partial_{t}\psi\,dxdt+\int_{\mathbb{R}^{d}_{T}}(a^{ij}(t)D_{jk}v_{k})\cdot D_{i}\psi-(D_{k}\pi_{k})\operatorname{div}\psi\,dxdt\] \[=-\int_{\mathbb{R}^{d}_{T}}f^{k}\cdot D_{k}\psi\,dxdt\] for \(k=1,\ldots,d\). Hence by summing it over \(k\), we get \[-\int_{\mathbb{R}^{d}_{T}}u_{1}\cdot\partial_{t}\psi\,dxdt+\int_{\mathbb{R}^{d}_{T}}(a^{ij}D_{j}u_{1})\cdot D_{i}\psi-\pi\operatorname{div}\psi\,dxdt=-\int_{\mathbb{R}^{d}_{T}}\mathbf{F}:\nabla\psi\,dxdt\] for all \(\psi\in C^{\infty}_{0}([0,T)\times\mathbb{R}^{d})^{d}\). Moreover, it follows from (4.3) and (4.4) that \[\|Du_{1}\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}+\|\pi\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})} \leq\sum_{k=1}^{d}\left(\|D^{2}v_{k}\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}+\|\nabla\pi_{k}\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}\right)\] (B.4) \[\leq N_{1}\sum_{k=1}^{d}\|f^{k}\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}\leq N_{1}\|\mathbf{F}\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}\] and \[\|u_{1}\|_{\mathcal{H}^{1}_{s,q,w}(\mathbb{R}^{d}_{T})}+\|\pi\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})} \leq N_{2}\sum_{k=1}^{d}\left(\|v_{k}\|_{W^{1,2}_{s,q,w}(\mathbb{R}^{d}_{T})}+\|\nabla\pi_{k}\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}\right)\] (B.5) \[\leq N_{2}\|\mathbf{F}\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}\] for some constants \(N_{1}=N_{1}(d,s,q,K_{0},\nu)>0\) and \(N_{2}=N_{2}(d,s,q,K_{0},\nu,T)>0\). To find a solution \((u_{2},\tilde{\pi})\) to (B.2), define \[\tilde{\pi}=\sum_{i,j=1}^{d}\mathcal{R}_{i}\mathcal{R}_{j}(G^{ij}-ga^{ij}(t)),\] where \(\mathcal{R}_{j}\) denotes the \(j\)th Riesz transform. Then by the \(L_{q,w_{1}}\)-boundedness of Riesz transforms (see e.g. [64, §4.2, Chapter V]), we have \(\tilde{\pi}\in L_{s,q,w}(\mathbb{R}^{d}_{T})\) and \[\|\tilde{\pi}\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}\leq N(\|\mathbf{G}\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}+\|g\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})})\] for some constant \(N=N(d,s,q,K_{0},\nu)>0\).
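As a numerical illustration (not part of the paper) of the Fourier-multiplier calculus behind this construction: \(\mathcal{R}_{j}\) acts by the symbol \(-i\xi_{j}/|\xi|\), so \(\mathcal{R}_{i}\mathcal{R}_{j}\) acts by \(-\xi_{i}\xi_{j}/|\xi|^{2}\), and the identity \(-\mathcal{R}_{i}\mathcal{R}_{j}(\Delta\psi)=D_{ij}\psi\) used below can be checked spectrally on a large periodic box (the grid parameters here are arbitrary choices).

```python
import numpy as np

# Spectral sanity check: R_j has symbol -i k_j/|k|, so R_i R_j has symbol
# -k_i k_j/|k|^2, and -R_1 R_2 (Delta psi) = D_12 psi for nice psi.
n, L = 256, 20.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
psi = np.exp(-(X**2 + Y**2))                   # smooth, rapidly decaying test function

k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
K1, K2 = np.meshgrid(k, k, indexing="ij")
Ksq = K1**2 + K2**2
Ksq_safe = Ksq.copy()
Ksq_safe[0, 0] = 1.0                           # the Riesz symbol vanishes at k = 0 anyway

lap = np.fft.ifft2(-Ksq * np.fft.fft2(psi)).real                        # Delta psi
R1R2_lap = np.fft.ifft2(-(K1 * K2 / Ksq_safe) * np.fft.fft2(lap)).real  # R_1 R_2 Delta psi
D12_psi = 4 * X * Y * psi                      # D_1 D_2 exp(-|x|^2), computed by hand

print(np.max(np.abs(-R1R2_lap - D12_psi)))     # ~1e-12, i.e. roundoff level
```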
Since \[-\mathcal{R}_{i}\mathcal{R}_{j}(\Delta\psi)=D_{ij}\psi\] for all \(\psi\in C_{0}^{\infty}(\mathbb{R}^{d})\) and \(\mathcal{R}_{i}\mathcal{R}_{j}\) is self-adjoint on \(L_{2}\), it follows that \[-\int_{\mathbb{R}^{d}}\tilde{\pi}(t,x)\Delta\psi(x)\,dx=\int_{\mathbb{R}^{d}}(G^{ij}(t,x)-a^{ij}(t)g(t,x))D_{ij}\psi(x)\,dx\] (B.6) for all \(\psi\in C_{0}^{\infty}(\mathbb{R}^{d})\) and for a.e. \(t\in(0,T)\). By (B.6) and the compatibility condition (4.5), the identity \[-\int_{\mathbb{R}^{d}_{T}}\tilde{\pi}\Delta\psi\,dxdt=-\int_{\mathbb{R}^{d}_{T}}g(\psi_{t}+a^{ij}(t)D_{ij}\psi)\,dxdt\] (B.7) holds for all \(\psi\in C_{0}^{\infty}([0,T)\times\mathbb{R}^{d})\). On the other hand, it follows from Theorem 3.8 (i) that there exists a unique \(\Phi\in\mathring{W}^{1,2}_{s,q,w}(\mathbb{R}^{d}_{T})\) satisfying \[\partial_{t}\Phi-a^{ij}(t)D_{ij}\Phi=\tilde{\pi}\quad\text{in }\mathbb{R}^{d}_{T}.\] (B.8) Moreover, we have \[\|D^{2}\Phi\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}\leq N_{1}\|\tilde{\pi}\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}\] and \[\|\Phi\|_{W^{1,2}_{s,q,w}(\mathbb{R}^{d}_{T})}\leq N_{2}\|\tilde{\pi}\|_{L_{s,q,w}(\mathbb{R}^{d}_{T})}\] for some constants \(N_{1}=N(d,s,q,\nu,K_{0})>0\) and \(N_{2}=N(d,s,q,\nu,K_{0},T)>0\). We show that \(-\Delta\Phi=g\). Since \(\tilde{\pi}\) satisfies (B.7) and \(\Phi\) satisfies (B.8), we have \[\int_{\mathbb{R}^{d}_{T}}(\partial_{t}\Phi-a^{ij}(t)D_{ij}\Phi)\Delta\psi\,dxdt=\int_{\mathbb{R}^{d}_{T}}g(\partial_{t}\psi+a^{ij}(t)D_{ij}\psi)\,dxdt\] for all \(\psi\in C_{0}^{\infty}([0,T)\times\mathbb{R}^{d})\). Integration by parts gives \[\int_{\mathbb{R}^{d}_{T}}(\partial_{t}\Phi-a^{ij}(t)D_{ij}\Phi)\Delta\psi\,dxdt =\int_{\mathbb{R}^{d}_{T}}\Phi\Delta(-\partial_{t}\psi-a^{ij}(t)D_{ij}\psi)\,dxdt\] \[=-\int_{\mathbb{R}^{d}_{T}}\Delta\Phi(\partial_{t}\psi+a^{ij}(t)D_{ij}\psi)\,dxdt\] for all \(\psi\in C_{0}^{\infty}([0,T)\times\mathbb{R}^{d})\). Hence \[-\int_{\mathbb{R}^{d}_{T}}(\Delta\Phi)(\partial_{t}\psi+a^{ij}(t)D_{ij}\psi)\,dxdt=\int_{\mathbb{R}^{d}_{T}}g(\partial_{t}\psi+a^{ij}(t)D_{ij}\psi)\,dxdt\] (B.9) for all \(\psi\in C_{0}^{\infty}([0,T)\times\mathbb{R}^{d})\). Then by a standard density argument, we see that the identity holds for all \(\psi\in W^{1,2}_{s^{\prime},q^{\prime},\tilde{w}}(\mathbb{R}^{d}_{T})\) with \(\psi(T,x)=0\), where \(\tilde{w}=w_{1}^{-1/(q-1)}w_{2}^{-1/(s-1)}\). Given \(\varphi\in C_{0}^{\infty}(\mathbb{R}^{d}_{T})\), it follows from Theorem 3.8 (i) that there exists a unique \(\psi\in W^{1,2}_{s^{\prime},q^{\prime},\tilde{w}}(\mathbb{R}^{d}_{T})\) satisfying \(\psi(T,x)=0\) and \[\partial_{t}\psi+a^{ij}(t)D_{ij}\psi=\varphi\quad\text{in }\mathbb{R}^{d}_{T}.\] Hence by (B.9), we have \[-\int_{\mathbb{R}^{d}_{T}}(\Delta\Phi)\varphi\,dxdt=\int_{\mathbb{R}^{d}_{T}}g\varphi\,dxdt\] for all \(\varphi\in C_{0}^{\infty}(\mathbb{R}^{d}_{T})\), which implies that \(-\Delta\Phi=g\) in \(\mathbb{R}_{T}^{d}\). Put \(u_{2}=-\nabla\Phi\).
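The divergence correction just constructed admits a simple spectral sketch (illustrative only, on a periodic box, ignoring the time variable and the weights): given a mean-zero \(g\), solve \(-\Delta\Phi=g\) by dividing by \(|k|^{2}\) in Fourier space and set \(u_{2}=-\nabla\Phi\); then \(\operatorname{div}u_{2}=g\) by construction.

```python
import numpy as np

# Toy divergence correction: Phi = (-Delta)^{-1} g, u2 = -grad Phi, div u2 = g.
n, L = 256, 20.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
g = -2 * X * np.exp(-(X**2 + Y**2))            # a smooth, mean-zero target divergence

k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
K1, K2 = np.meshgrid(k, k, indexing="ij")
Ksq = K1**2 + K2**2
Ksq_safe = Ksq.copy()
Ksq_safe[0, 0] = 1.0                           # zero mode: g has mean zero

Phi_hat = np.fft.fft2(g) / Ksq_safe            # (-Delta Phi)^ = |k|^2 Phi^ = g^
u2_1 = -np.fft.ifft2(1j * K1 * Phi_hat).real   # u2 = -grad Phi
u2_2 = -np.fft.ifft2(1j * K2 * Phi_hat).real

div_u2 = np.fft.ifft2(1j * (K1 * np.fft.fft2(u2_1) + K2 * np.fft.fft2(u2_2))).real
print(np.max(np.abs(div_u2 - g)))              # ~1e-15: div u2 recovers g
```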
Then by (B.8), it is easy to show that \((u_{2},\tilde{\pi})\) is a weak solution to (B.2) satisfying \[\begin{split}\|Du_{2}\|_{L_{s,q,w}(\mathbb{R}_{T}^{d})}+\|\tilde{\pi}\|_{L_{s,q,w}(\mathbb{R}_{T}^{d})}&\leq N_{1}\|\tilde{\pi}\|_{L_{s,q,w}(\mathbb{R}_{T}^{d})}\\ &\leq N_{1}\left(\|\mathbf{G}\|_{L_{s,q,w}(\mathbb{R}_{T}^{d})}+\|g\|_{L_{s,q,w}(\mathbb{R}_{T}^{d})}\right)\end{split}\] (B.10) and \[\|u_{2}\|_{\mathcal{H}_{s,q,w}^{1}(\mathbb{R}_{T}^{d})}+\|\tilde{\pi}\|_{L_{s,q,w}(\mathbb{R}_{T}^{d})}\leq N_{2}\left(\|\mathbf{G}\|_{L_{s,q,w}(\mathbb{R}_{T}^{d})}+\|g\|_{L_{s,q,w}(\mathbb{R}_{T}^{d})}\right)\] (B.11) for some constants \(N_{1}=N_{1}(d,s,q,\nu,K_{0})>0\) and \(N_{2}=N_{2}(d,s,q,\nu,K_{0},T)>0\). Since \(u_{2}=-\nabla\Phi\) and \(-\Delta\Phi=g\), it follows that \(u_{2}\in\mathring{\mathcal{H}}_{s,q,w}^{1}(\mathbb{R}_{T}^{d})^{d}\) and \[\begin{split}\int_{\mathbb{R}^{d}}\nabla u_{2}^{l}\cdot\nabla\phi\,dx&=-\int_{\mathbb{R}^{d}}\nabla(D_{l}\Phi)\cdot\nabla\phi\,dx\\ &=\int_{\mathbb{R}^{d}}\nabla\Phi\cdot\nabla(D_{l}\phi)\,dx=\int_{\mathbb{R}^{d}}g(D_{l}\phi)\,dx\end{split}\] for all \(\phi\in C_{0}^{\infty}(\mathbb{R}^{d})\) and for a.e. \(t\in(0,T)\). Hence it follows from Corollary 3.9 that \[\|Du_{2}\|_{L_{s,q,w}(\mathbb{R}_{T}^{d})}\leq N_{1}\|g\|_{L_{s,q,w}(\mathbb{R}_{T}^{d})}\] (B.12) for some constant \(N_{1}=N_{1}(d,s,q,\nu,K_{0})>0\). Define \(u=u_{1}+u_{2}\) and \(p=\pi+\tilde{\pi}\). Then \((u,p)\) is a weak solution to (4.2) in \(\mathbb{R}_{T}^{d}\) satisfying \(u\in\mathring{\mathcal{H}}_{s,q,w}^{1}(\mathbb{R}_{T}^{d})^{d}\) and \(p\in L_{s,q,w}(\mathbb{R}_{T}^{d})\). By (B.4) and (B.12), we have \[\begin{split}\|Du\|_{L_{s,q,w}(\mathbb{R}_{T}^{d})}&\leq\|Du_{1}\|_{L_{s,q,w}(\mathbb{R}_{T}^{d})}+\|Du_{2}\|_{L_{s,q,w}(\mathbb{R}_{T}^{d})}\\ &\leq N_{1}\left(\|\mathbf{F}\|_{L_{s,q,w}(\mathbb{R}_{T}^{d})}+\|g\|_{L_{s,q,w}(\mathbb{R}_{T}^{d})}\right)\end{split}\] for some constant \(N_{1}=N_{1}(d,s,q,\nu,K_{0})>0\). Similarly, by (B.4) and (B.10), we have \[\|p\|_{L_{s,q,w}(\mathbb{R}_{T}^{d})}\leq\|\pi\|_{L_{s,q,w}(\mathbb{R}_{T}^{d})}+\|\tilde{\pi}\|_{L_{s,q,w}(\mathbb{R}_{T}^{d})}\leq N_{1}\left(\|\mathbf{F}\|_{L_{s,q,w}(\mathbb{R}_{T}^{d})}+\|\mathbf{G}\|_{L_{s,q,w}(\mathbb{R}_{T}^{d})}+\|g\|_{L_{s,q,w}(\mathbb{R}_{T}^{d})}\right).\] Moreover, it follows from (B.5) and (B.11) that \[\|u\|_{\mathcal{H}_{s,q,w}^{1}(\mathbb{R}_{T}^{d})}+\|p\|_{L_{s,q,w}(\mathbb{R}_{T}^{d})}\leq N_{2}\left(\|\mathbf{F}\|_{L_{s,q,w}(\mathbb{R}_{T}^{d})}+\|\mathbf{G}\|_{L_{s,q,w}(\mathbb{R}_{T}^{d})}+\|g\|_{L_{s,q,w}(\mathbb{R}_{T}^{d})}\right)\] for some constants \(N_{1}=N_{1}(d,s,q,\nu,K_{0})>0\) and \(N_{2}=N_{2}(d,s,q,\nu,K_{0},T)>0\). It remains to show the uniqueness of weak solutions. Suppose that \((u,p)\) satisfies \[u\in\mathring{\mathcal{H}}_{s,q,w}^{1}(\mathbb{R}_{T}^{d})^{d},\quad p\in L_{s,q,w}(\mathbb{R}_{T}^{d}),\] and \[\int_{\mathbb{R}_{T}^{d}}u\cdot(\partial_{t}\phi+a^{ij}(t)D_{ij}\phi)+p\,\mathrm{div}\,\phi\,dxdt=0\] (B.13) for all \(\phi\in C_{0}^{\infty}([0,T)\times\mathbb{R}^{d})^{d}\). For \(\psi\in C_{0}^{\infty}(\mathbb{R}_{T}^{d})\), put \(\phi=\nabla\psi\) in (B.13). Since \(\mathrm{div}\,u=0\) in \(\mathbb{R}_{T}^{d}\), we get \[\int_{\mathbb{R}_{T}^{d}}p\Delta\psi\,dxdt=0\] for all \(\psi\in C_{0}^{\infty}(\mathbb{R}_{T}^{d})\). This implies that \(p\) is harmonic in \(\mathbb{R}^{d}\) for a.e. \(t\in(0,T)\).
Then, following the same mean value property argument as in the proof of the uniqueness part of Theorem 4.1, one can show that \(p\) is identically zero. By (B.13), \(u\in\hat{\mathcal{H}}^{1}_{s,q,w}(\mathbb{R}^{d}_{T})^{d}\) is then a weak solution to \[\partial_{t}u-D_{i}(a^{ij}D_{j}u)=0\quad\text{in }\mathbb{R}^{d}_{T}.\] Therefore, it follows from Theorem 3.8 (ii) that \(u\) is identically zero, which completes the proof of Theorem 4.2.
2302.13223
Optimal free-surface pumping by an undulating carpet
Examples of fluid flows driven by undulating boundaries are found in nature across many different length scales. Even though different driving mechanisms have evolved in distinct environments, they perform essentially the same function: directional transport of liquid. Nature-inspired strategies have been adopted in engineered devices to manipulate and direct flow. Here, we demonstrate how an undulating boundary generates large-scale pumping of a thin liquid near the liquid-air interface. Two dimensional traveling waves on the undulator, a canonical strategy to transport fluid at low Reynolds numbers, surprisingly lead to flow rates that depend non-monotonically on the wave speed. Through an asymptotic analysis of the thin-film equations that account for gravity and surface tension, we predict the observed optimal speed that maximizes pumping. Our findings reveal a novel mode of pumping with less energy dissipation near a free surface compared to a rigid boundary.
Anupam Pandey, Zih-Yin Chen, Jisoo Yuk, Yuming Sun, Chris Roh, Daisuke Takagi, Sungyon Lee, Sunghwan Jung
2023-02-26T03:32:22Z
http://arxiv.org/abs/2302.13223v1
# Optimal free-surface pumping by an undulating carpet

###### Abstract

Examples of fluid flows driven by undulating boundaries are found in nature across many different length scales. Even though different driving mechanisms have evolved in distinct environments, they perform essentially the same function: directional transport of liquid. Nature-inspired strategies have been adopted in engineered devices to manipulate and direct flow. Here, we demonstrate how an undulating boundary generates large-scale pumping of a thin liquid near the liquid-air interface. Two dimensional traveling waves on the undulator, a canonical strategy to transport fluid at low Reynolds numbers, surprisingly lead to flow rates that depend non-monotonically on the wave speed. Through an asymptotic analysis of the thin-film equations that account for gravity and surface tension, we predict the observed optimal speed that maximizes pumping. Our findings reveal a novel mode of pumping with less energy dissipation near a free surface compared to a rigid boundary.

## I Introduction

The necessity to manipulate flow and transport liquids is primitive to many biophysical processes such as embryonic growth and development [1; 2], mucus transport in the bronchial tree [3; 4; 5], motion of food within the intestine [6; 7], and animal drinking [8; 9]. Engineered systems also rely on efficient liquid transport, such as in heat sinks and exchangers for integrated circuits [10; 11], micropumps [12; 13], and lab-on-a-chip devices [14]. Transporting liquids at small scales requires non-reciprocal motion to overcome the time reversibility of low Reynolds number flows. Deformable boundaries in the form of rhythmic undulation of cilia beds and peristaltic waves are nature's resolutions to overrule this reversibility and achieve directional liquid transport. While peristaltic pumps have become an integral component of biomedical devices, artificial ciliary metasurfaces that can actuate, pump, and mix flow have been realized only recently [15; 16; 17; 18; 19]. The design strategy of valveless micropumps essentially relies on a similar working principle as cilia-lined walls; sequential actuation of a channel wall by electrical or magnetic fields creates a travelling wave which drags the liquid along with it [20; 21]. While the primary focus of micropumps has been on the transport of liquids enclosed within a channel, numerous technological applications require handling liquids near fluid-fluid interfaces. In particular, processes such as self-assembly, encapsulation, and emulsification involving micron-sized particles critically rely on the liquid flow near interfaces [22; 23]. Thus the ability to maneuver interfacial flows will open up new avenues for micro-particle sensing and actuating at interfaces. Interestingly, the apple snail _Pomacea canaliculata_ leverages its flexible foot to create large-scale surface flows that fetch floating food particles from afar while feeding underwater, in a process called _pedal surface collection_ [24; 25]; the physics of which is yet to be fully understood [26; 27]. Here we reveal how a rhythmically undulating solid boundary pumps viscous liquid at the interface, and transports floating objects from distances much larger than its size. Surprisingly, pumping does not increase proportionally to the speed of the traveling wave, and we observe non-monotonicity in the average motion of surface floaters as the wave speed is gradually increased.
Detailed measurements of the velocity field in combination with an analysis of the lubrication theory unravel the interfacial hydrodynamics of the problem that emerges from a coupling between capillary, gravity, and viscous forces. We find that the non-monotonic flow is a direct consequence of whether the interface remains flat or conforms to the phase of the undulator. Through the theoretical analysis, we are able to predict the optimal wave speed that maximizes pumping, and this prediction is in excellent agreement with experiments. Finally, we show how pumping near an interface is a less dissipative strategy to transport liquid compared to pumping near a rigid boundary.

## II Results

### Experiments

A 3D printed undulator capable of generating travelling waves is attached to the bottom of an acrylic tank. The tank is filled with a viscous liquid (silicone oil or glycerin-water mixture) such that the mean depth of liquid above the undulator (\(H\)) remains much smaller than the undulator wavelength (\(\lambda\)), i.e. \(H/\lambda\ll 1\). The undulator is driven by a servo motor attached to a DC power source. Millimetric styrofoam spheres are sprinkled on the liquid surface and their motion is tracked during the experiment to estimate the large scale flow of liquid. Additionally, we characterize the flow within the thin film of liquid directly in contact with the undulator by performing 2D particle image velocimetry (PIV) measurements. Our experimental design is essentially a mesoscale realization of Taylor's sheet [28] placed near a free surface [29; 30]; the crucial difference, however, is that the sheet or undulator is held stationary here, in contrast to free swimming. Images of the undulator are shown in fig. 1a and 1b. The primary component of this design is a helical spine encased with a series of hollow, rectangular links that are interconnected through a thin top surface [31] (see SI and supplementary movies 1 and 2 for details). The links along with the top surface form an outer shell that transforms the helix rotation into a planar travelling wave of the form \(\delta\sin[(x-V_{w}t)/\lambda]\). The pitch and radius of the helix determine the wavelength \(\lambda\) and amplitude \(\delta\) of the undulations, respectively. By modulating the angular frequency of the helix, we are able to vary the wave speed \(V_{w}=\omega\lambda\) from 15 to 120 mm/s (\(\lambda\) is fixed at 50 mm). We perform experiments with undulators of length \(\lambda\) and \(2\lambda\), and the results remain invariant of the undulator size. For a given \(V_{w}\), shapes of the undulator surface are shown in fig. 1c for one period of oscillation.

**Large-scale flow** - Figure 1d shows the trajectories of floating styrofoam particles generated by 30 mins of continuous oscillations in 1000 cSt silicone oil contained in an acrylic tank of dimensions 61 cm \(\times\) 46 cm (supplementary movie 3 shows the motion of surface floaters for different \(V_{w}\)). Traveling waves on the actuator move in the downward direction as shown by the direction of \(V_{w}\) in fig. 1d. Thus the floaters are dragged towards the undulator, forming the large-scale flow. The color codes on the trajectories represent time: blue and yellow colors represent the initial and final positions, respectively. Placing the undulator near a side wall of the tank, we measure the floaters' motion over a decade in distance. However, some particles are recirculated back due to the nearby wall. We disregard these trajectories in our analysis.
Fluid motion at the interface is traced by the styrofoam floaters because of their low density (\(\rho_{p}\simeq 50\) kg/m\({}^{3}\)), which ensures that the Stokes number, \(St=\rho_{p}R_{p}V_{w}/\eta\), remains very small (\(\simeq 10^{-2}\)), based on a typical wave speed of \(V_{w}=100\) mm/s, a particle radius of \(R_{p}=1\) mm, and a silicone-oil viscosity of \(\eta\simeq 1\) Pa\(\cdot\)s. Figure 1: **Large-scale transport of floaters by the undulating carpet**. The actuator, shown in panels a) and b), is comprised of a helix rotating inside a blue shell. Rotation of the helix causes an oscillatory motion of the shell forming a traveling wave on the surface. It is placed at a mean depth \(H\) below the liquid surface. c) Shape of the undulations over a period of oscillation. These shapes are captured by a traveling sine wave \(\delta\sin[(x-V_{w}t)/\lambda]\). d) Trajectories of floating styrofoam particles after 30 mins of continuous oscillation in 1000 cSt silicone oil for a fixed actuation speed \(V_{w}\). This panel is a top view image with the actuator position marked at the bottom of the frame. The color coding of dark to light indicates the arrow of time. e) Magnified trajectories of particles located straight ahead of the actuator. The filled circles represent initial positions of the styrofoam particles. f) Particle velocity as a function of distance for increasing wave speeds (\(V_{w}\)). Different wave speeds are marked by the color coding. Distances are measured from the edge of the actuator, as shown in panel e). Each of the curves is an average over 20 trajectories. Particle velocity exhibits a non-monotonic behavior with \(V_{w}\), with maximum velocities measured at intermediate wave speeds. The inset confirms this behavior by showing particle velocity at a fixed location, \(x=50\) mm, for different \(V_{w}\). Error bars in this plot represent the standard deviation in velocity magnitude. The gray line is the prediction from eq. (8). Next, we focus on the floaters that are initially located straight ahead of the actuator to analyze the variation of velocity with distance. These trajectories are shown in fig. 1e with black circles representing the initial positions. For a given \(V_{w}\), we interpolate 20 trajectories to construct a velocity-distance curve, which is shown in fig. 1f (see SI for details of these measurements). Here, \(|\bar{V}|=(V_{x}^{2}+V_{y}^{2})^{1/2}\) is the magnitude of the velocity at the liquid-air interface and \(x\) is the distance from the edge of the actuator. We disregard the first 20 mm of data to avoid edge effects. The color code on the curves represents the magnitude of \(V_{w}\). Interestingly, we observe a non-monotonic response, with the particle velocity reaching its maximum value at an intermediate \(V_{w}\). Once \(|\bar{V}|\) at a given location (\(x=50\) mm) is plotted against the wave speeds (inset of fig. 1f), it becomes apparent that the maximum surface flow is achieved for \(V_{w}\simeq 80\) mm/s. Since the overall flow in the liquid is driven by the hydrodynamics within the thin film of liquid atop the undulator, we focus on quantifying the velocity field and flow rate in this region. **Dimensionless groups** - Before we discuss the experimental results further, it is instructive to identify the relevant dimensionless groups which dictate the response of the system.
The system has eight dimensional parameters: three length scales given by the film thickness (\(H\)), amplitude (\(\delta\)), and wavelength (\(\lambda\)) of the undulator, the velocity scale \(V_{w}\), the gravitational constant \(g\), and three fluid properties set by surface tension (\(\gamma\)), density (\(\rho\)), and dynamic viscosity (\(\eta\)). These parameters lead to 5 dimensionless groups, namely, \(\epsilon=\delta/H\), \(a=H/\lambda\), the Reynolds number \(Re=\rho V_{w}\lambda a^{2}/\eta\), the capillary number \(Ca=\eta V_{w}/(\gamma a^{3})\), and the Bond number \(Bo=\rho g\lambda^{2}/\gamma\). Here both \(Re\) and \(Ca\) are defined for the thin-film limit, \(a\ll 1\). We choose two working liquids, silicone oil (\(\eta_{s}=0.97\) Pa\(\cdot\)s) and a glycerin-water mixture, GW (\(\eta_{GW}=0.133\) Pa\(\cdot\)s). For each of the liquids, the thickness \(H\) (maintaining \(a\ll 1\)) and the wave speed \(V_{w}\) are varied independently. Across all experiments, \(Re\) remains lower than 1 (\(0.01-0.35\)). Thus inertial effects are subdominant and the problem is fully described by \(\epsilon\), \(Ca\), and \(Bo\). We vary \(Ca\), the ratio of viscous to capillary forces, over three orders of magnitude, \(1.4-1140\). The value of \(Bo\), representing the strength of gravitational forces relative to surface tension, is 1133 and 426 for silicone oil and the glycerin-water mixture, respectively. As we will demonstrate in the next sections, \(Ca/Bo\), which represents the ratio of viscous force to gravitational force, turns out to be the key governing parameter. **Non-monotonic flow rate** - The flow field within the thin film of liquid above the undulator is characterized by performing 2D PIV at a longitudinal plane in the middle of the undulator (see Materials & Methods section for details). Figure 2a shows a long-exposure image of illuminated tracer particles, giving a qualitative picture of the flow. The particles essentially oscillate up-down with the actuator, but exhibit a net horizontal displacement over a period due to the traveling wave. The presence of the shear-free interface is also crucial to the transport mechanism; the interfacial curvature induces a capillary pressure that modifies the local flow field. The coupling between the two deforming boundaries determines the flow within the gap. Snapshots of typical velocity fields for the two liquids are shown in fig. 2b. The top panel is a silicone oil flow field with \(V_{w}=23\) mm/s and \(H=10\) mm (\(Ca=132\), \(Bo=1133\)), while the bottom panel represents the flow field of the glycerin-water mixture with \(V_{w}=17\) mm/s and \(H=11\) mm (\(Ca=3\), \(Bo=426\)). Higher \(Ca\) leads to larger deformation of the free surface. Colors in the plot represent the magnitude of the horizontal velocity component, \(V_{x}\); the portion of the liquid that follows the wave is shown in red, whereas a blue region represents the part of the liquid that moves in the opposite direction to the wave. Figure 2: **Thin-film flow atop the undulator**. a) A sketch of the actuator and a long exposure image of a typical flow-field measurement, showing motion of the tracer particles in the thin film. The free surface deforms in response to the flow. b) Results of PIV for two different capillary numbers, \(Ca=132\) (top panel), \(Ca=3\) (bottom panel). In both these panels the bottom boundary is the actuator surface, while the top boundary is the liquid interface. The color coding represents the horizontal component of the velocity field, \(V_{x}\); red signifies flow along \(V_{w}\) while blue signifies flow opposite to \(V_{w}\).
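The dimensionless groups defined above are straightforward to evaluate for any run. As a minimal sketch in Python, the following reproduces the values quoted for the silicone-oil panel of fig. 2b (\(Ca\approx 132\), \(Bo\approx 1133\)); note that the surface tension \(\gamma\approx 0.021\) N/m and the amplitude \(\delta\) used here are assumptions, since the text does not quote their values.

```python
import numpy as np

def dimensionless_groups(H, lam, delta, V_w, eta, rho, gamma, g=9.81):
    """Compute the five dimensionless groups of the thin-film problem."""
    a = H / lam                          # aspect ratio, must stay << 1
    eps = delta / H                      # scaled wave amplitude
    Re = rho * V_w * lam * a**2 / eta    # thin-film Reynolds number
    Ca = eta * V_w / (gamma * a**3)      # thin-film capillary number
    Bo = rho * g * lam**2 / gamma        # Bond number
    return dict(eps=eps, a=a, Re=Re, Ca=Ca, Bo=Bo, Ca_over_Bo=Ca / Bo)

# Silicone-oil case of fig. 2b (top panel): V_w = 23 mm/s, H = 10 mm.
# gamma (~0.021 N/m) and delta are assumed values, not quoted in the text.
print(dimensionless_groups(H=0.010, lam=0.050, delta=0.003, V_w=0.023,
                           eta=0.97, rho=970.0, gamma=0.021))
```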
In fact, the velocity vectors at a given location switch directions depending on the phase of the actuator (see supplementary movies 4 and 5). Thus, to estimate the net horizontal transport of liquid across a section, we first integrate \(V_{x}\) across the film thickness in the middle of the undulator (marked by the black dashed line in fig. 2b i), which yields an instantaneous flow rate \[Q=\int_{h_{a}}^{h_{f}}V_{x}\,dz. \tag{1}\] Here, \(h_{a}\) and \(h_{f}\) are the positions of the bottom and top boundaries from the reference point, respectively. Figure 3a plots \(Q\) as a function of time, measured in silicone oil for three distinct wave speeds. It shows that \(Q\) oscillates with the same time period as the undulator (\(\tau=\lambda/V_{w}\)), but there is a net flow of liquid along the traveling wave. Thus a time-averaged flow rate, \[\langle Q\rangle=\frac{1}{\tau}\int_{0}^{\tau}Q\,\mathrm{d}t, \tag{2}\] gives a measure of liquid transport by the undulator. Figure 3b gives a comprehensive picture of the flow rate measured across all the experiments. \(\langle Q\rangle\) is plotted against the characteristic flow rate \(V_{w}H\). The geometric prefactor of \(\epsilon^{2}\) is a direct consequence of the thin geometry of the flow [32]. Two interesting observations are in order. Regardless of the fluid properties, the flow rates at first increase linearly with \(\epsilon^{2}V_{w}H\). All the data sets other than the GW exhibit a non-monotonic behavior, with flow rates reaching maximum values at intermediate \(\epsilon^{2}V_{w}H\). Thus, we find that the non-monotonic surface flow observed in fig. 1f is a direct consequence of the flow within the thin film above the undulator. It is important to note that these measurements remain invariant of the undulator size, as shown in the SI where we compare the time-averaged flow rates measured in single- and double-wave undulators. In the next section, we develop a theoretical model to explain how the geometrical and material parameters combine to give the optimal wave speed that maximizes flow rate.
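The averaging in eqs. (1) and (2) maps directly onto the discrete PIV output. Below is a minimal sketch, assuming the velocity profile is sampled on a vertical line whose end points track the moving boundaries \(h_{a}(t)\) and \(h_{f}(t)\); the array layout is an assumption for illustration.

```python
import numpy as np
from scipy.integrate import trapezoid

def mean_flow_rate(Vx, z, t):
    """Time-averaged flux per eqs. (1)-(2) from a PIV record.

    Vx : (n_t, n_z) horizontal velocity on the sampled vertical line,
    z  : (n_t, n_z) corresponding z positions (the integration limits
         h_a(t) and h_f(t) move with the two boundaries),
    t  : (n_t,) sample times spanning an integer number of periods.
    """
    # eq. (1): instantaneous flux across the section at each time step
    Q = np.array([trapezoid(Vx[i], z[i]) for i in range(len(t))])
    # eq. (2): average over the record
    Q_mean = trapezoid(Q, t) / (t[-1] - t[0])
    return Q, Q_mean
```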
### Theoretical framework **Thin-film equation** - We consider the two dimensional geometry depicted in fig. 4a for the theoretical model. An infinite train of periodic undulations of the form \(h_{a}=\delta\sin[(x-V_{w}t)/\lambda]\) propagates on the actuator located at a mean depth of \(H\) from the free surface. We analyze the flow in the thin-film limit, such that \(a=H/\lambda\ll 1\). A key aspect of the problem is that the shape of the interface, \(h_{f}\), is unknown along with the flow field. The explicit time dependence in this problem is a direct manifestation of the traveling wave on the boundary. Thus in a coordinate system (\(X,\,Z\)) moving with the wave, the flow becomes steady. A simple Galilean transformation relates these coordinates to the laboratory coordinates (\(x,\,z\)): \(X=x-V_{w}t\), and \(Z=z\). Thus, we first solve the problem in the wave frame and then transform the solution to the lab frame. Leaving the details of the derivation to Materials & Methods, here we present the key results. In the thin-film limit, the separation of vertical and horizontal scales leads to a predominantly horizontal flow field, and both the mass and momentum conservation equations are integrated across the film thickness to reach an ordinary differential equation involving the free-surface shape \(h_{f}\) and the volume flow rate \(q\). Introducing dimensionless variables \(\bar{X}=X/\lambda\), \(\bar{h}_{f}=h_{f}/H\), \(\bar{h}_{a}=h_{a}/H\), and \(\bar{q}=q/(V_{w}H)\), we get \[\bar{q}=\frac{1}{3}\left(\frac{1}{Ca}\bar{h}^{\prime\prime\prime}_{f}-\frac{Bo}{Ca}\bar{h}^{\prime}_{f}\right)(\bar{h}_{f}-\bar{h}_{a})^{3}-(\bar{h}_{f}-\bar{h}_{a}), \tag{3}\] where both \(\bar{q}\) and \(\bar{h}_{f}\) are unknowns, and \(\bar{h}_{a}=\epsilon\sin\bar{X}\) is known. We close the problem by imposing the following constraint on \(\bar{h}_{f}\): \[\int_{0}^{2\pi}\bar{h}_{f}\,\text{d}\bar{X}=2\pi, \tag{4}\] which states that the mean film thickness over one wavelength does not change due to deformation. Along with periodic boundary conditions, equations (3) and (4) form a set of nonlinear coupled equations whose solutions depend on the three parameters, \(Ca\), \(Bo\), and \(\epsilon\). For chosen \(Bo\) and \(\epsilon\), these equations are solved by a shooting method for a wide range of \(Ca\). To be able to compare the numerical results with the experimental data of fig. 3, we transform the results to the lab frame using the relation \(\bar{Q}=\bar{q}+\left(\bar{h}_{f}(\bar{x},\bar{t})-\bar{h}_{a}(\bar{x},\bar{t})\right)\). Owing to the periodic nature of \(\bar{h}_{f}\) and \(\bar{h}_{a}\), the time-averaged flow rate simplifies to \(\left\langle\bar{Q}\right\rangle=\bar{q}+1\). Figure 3: **Non-monotonic flow rate**. a) Instantaneous flow rate in silicone oil over multiple periods of oscillation. The data sets represent increasing \(V_{w}\), from white to black. These measurements are taken at a cross section marked by the dashed line in fig. 2b i. b) Time-averaged flow rate, \(\langle Q\rangle\), is plotted against the flux scale \(\epsilon^{2}V_{w}H\) of the problem. The circles represent the experiment in silicone oil with bigger markers denoting larger \(H\): 7.5 mm (red), 9.5 mm (green), 11.5 mm (blue), 14 mm (orange). The squares represent GW experiments with \(H=11\) mm. The dashed line is the theoretical prediction, given by \(\langle Q\rangle=3\epsilon^{2}V_{w}H/2\). Figure 4: **Theoretical & numerical solutions of thin-film flow**. a) The thin-film geometry with relevant quantities. We consider an infinite train of traveling undulations of amplitude \(\delta\) and wavelength \(\lambda\) moving at a speed of \(V_{w}\). The coordinate frame \((X,Z)\) travels with the undulations. The red curve represents the bottom boundary in motion. A liquid layer of mean thickness \(H\) resides on top of the deformable bottom boundary. The shape of the free surface is given by \(h_{f}\), while the bottom surface is given by \(h_{a}\). b) Numerical solution of the thin-film equation is plotted in terms of the flow rate as a function of \(Ca\), for different \(Bo\). The two largest Bond numbers correspond to the experimental values. c) The rescaled experimental data of fig. 3b are in excellent agreement with the theoretical prediction of (7), plotted as the solid black line. The small \(V_{w}\) (\(Ca/Bo\ll 1\)) limit is given by \(\left\langle\bar{Q}\right\rangle=3\epsilon^{2}/2\), while the large \(V_{w}\) (\(Ca/Bo\gg 1\)) limit is given by \(\left\langle\bar{Q}\right\rangle=\epsilon^{2}(Ca/Bo)^{-2}/6\).
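The paper solves eqs. (3) and (4) with a shooting method; an equally minimal alternative is a collocation solver that treats \(\bar{q}\) as a free parameter and enforces eq. (4) through an auxiliary cumulative-integral variable. The following is a sketch under those assumptions (grid size and initial guess are choices, not prescriptions), using the asymptotic result of eq. (6) as a starting value for \(\bar{q}\).

```python
import numpy as np
from scipy.integrate import solve_bvp

def solve_film(Ca, Bo, eps, n=200):
    """Solve eqs. (3)-(4) for the periodic free-surface shape and the
    wave-frame flux q-bar. State y = (h, h', h'', s) where
    s(X) = int_0^X h dX enforces the mean-thickness constraint, eq. (4)."""
    X = np.linspace(0.0, 2 * np.pi, n)
    h_a = lambda x: eps * np.sin(x)            # undulator shape

    def rhs(x, y, p):
        q = p[0]
        gap = y[0] - h_a(x)                    # film thickness h_f - h_a
        h3 = Bo * y[1] + 3 * Ca * (q + gap) / gap**3   # eq. (3) for h'''
        return np.vstack([y[1], y[2], h3, y[0]])

    def bc(ya, yb, p):
        # periodicity of h, h', h'' plus s(0)=0 and s(2*pi)=2*pi (eq. (4))
        return np.array([ya[0] - yb[0], ya[1] - yb[1], ya[2] - yb[2],
                         ya[3], yb[3] - 2 * np.pi])

    y0 = np.vstack([np.ones_like(X), np.zeros_like(X), np.zeros_like(X), X])
    # initial guess for q from the small-amplitude result, eq. (6)
    sol = solve_bvp(rhs, bc, X, y0, p=[-1.0 + 1.5 * eps**2])
    return sol.x, sol.y[0], sol.p[0]           # grid, h_f(X), q-bar

X, hf, q = solve_film(Ca=10.0, Bo=1133.0, eps=0.2)
print("lab-frame mean flow rate <Q-bar> =", q + 1)   # <Q-bar> = q-bar + 1
```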
Figure 4b shows the numerical solution of \(\left\langle\bar{Q}\right\rangle\) as a function of \(Ca\) for different \(Bo\). All curves exhibit the same qualitative behavior; at low \(Ca\), the scaled flow rate reaches a constant value as \(\left\langle Q\right\rangle\sim V_{w}H\), which is analogous to what we observe in fig. 3b. At large \(Ca\), however, we recover a decreasing flow rate as \(\left\langle Q\right\rangle\sim(V_{w}H)^{-\alpha}\) with \(\alpha>0\). The transition between the two regimes scales with \(Bo\). Thus the thin-film equation captures the qualitative behavior found in the experiments. **Asymptotic solution** - For \(Bo\gg 1\), the third-order term in Eq. (3) can be neglected, which simplifies the governing equation to \[\bar{q}=-\frac{1}{3}\frac{Bo}{Ca}\bar{h}_{f}^{\prime}(\bar{h}_{f}-\bar{h}_{a})^{3}-(\bar{h}_{f}-\bar{h}_{a}). \tag{5}\] Indeed, the \(Bo\) values in the experiments are large (1133 and 426), justifying the above simplification. Furthermore, we assume that the amplitude of the wave, \(\delta\), is much smaller than \(H\), i.e. \(\epsilon\ll 1\). Interestingly, \(Ca/Bo\), the single parameter dictating the solution of eq. (5), does not contain surface tension. This ratio is the reciprocal of the Galileo number, which plays a crucial role in the stability of thin films driven by gravity [33]. Here we look for asymptotic solutions of the form \(\bar{h}_{f}=1+\epsilon\bar{h}_{f1}+\epsilon^{2}\bar{h}_{f2}+\mathcal{O}(\epsilon^{3})\) and \(\bar{q}=\bar{q}_{0}+\epsilon\bar{q}_{1}+\epsilon^{2}\bar{q}_{2}+\mathcal{O}(\epsilon^{3})\) [34]. We insert these expansions in eqns. (5) and (4), and solve the equations in orders of \(\epsilon\). Leaving the solution of \(\bar{h}_{f}\) to the SI, here we present the solution of \(\bar{q}\), which becomes \[\bar{q}=-1+\frac{3\epsilon^{2}}{2\left(1+9(Ca/Bo)^{2}\right)}. \tag{6}\] Thus the time-averaged flow rate in the lab frame is given by \[\frac{\left\langle\bar{Q}\right\rangle}{\epsilon^{2}}=\frac{3}{2\left(1+9(Ca/Bo)^{2}\right)}. \tag{7}\] This is the key result of the theoretical model. It demonstrates that the flow rate is quadratic in the amplitude of the traveling wave, which is why we incorporated \(\epsilon^{2}\) in the horizontal scale of fig. 3b. Importantly, eq. (7) captures the non-monotonic behavior of the experiments. Once the data in fig. 3b are rescaled, all the different cases collapse onto a master curve which is in excellent agreement with the black solid line representing eq. (7), as shown in fig. 4c. **Optimal wave speed** - The physical picture behind the non-monotonic nature of the flow rate becomes clear once the free surface shapes are found. For a given \(Bo\) with a low \(Ca\), the liquid-air interface behaves as an infinitely taut membrane with minimal deformations. Thus, a liquid parcel moves primarily in the horizontal direction, and the flow rate is given purely by the kinematics \(\left(V_{w},H,\delta\right)\). Indeed, for \(Ca/Bo\ll 1\), eq. (7) simplifies to give \(\left\langle\bar{Q}\right\rangle/\epsilon^{2}=3/2\). In the dimensional form, this relation explains the increase in the flow rate with the wave speed, \(\left\langle Q\right\rangle=3\delta^{2}V_{w}/2H\). Thus, the flow is independent of the liquid properties, which we have noted in fig. 3b. As \(Ca\) increases, the interface starts to deform up and down by conforming with the undulating actuator. In this limit, the translational velocity of tracer particles decreases, thereby lowering the flow rate.
Indeed, in the limit of \(Ca/Bo\gg 1\), we find a decreasing flow rate given by \(\left\langle\bar{Q}\right\rangle=\epsilon^{2}(Ca/Bo)^{-2}/6\). These two asymptotic limits are shown as dashed lines in fig. 4c. The flow rate attains a maximum at the intersection of these two lines, where \(Ca/Bo=1/3\). In the dimensional form, this particular value of \(Ca/Bo\) gives the optimal wave speed at which the flow rate peaks, \[V_{w}^{\rm(max)}=\frac{\rho gH^{3}}{3\eta\lambda}. \tag{8}\] The optimal wave speed emerges from a competition between hydrostatic pressure (\(\sim\rho gH\)) and lubrication pressure (\(\sim\eta V_{w}\lambda/H^{2}\)); surface tension drops out in the above expression. Eq. (8) gives the optimal speed at which the undulator maximizes pumping. Now we are in a position to examine whether eq. (8) captures the peak surface velocities observed in fig. 1f. Plugging in the density (\(\rho=970\) kg/m\({}^{3}\)), viscosity (\(\eta_{s}=0.97\) Pa\(\cdot\)s), \(H=10.8\) mm, and \(\lambda=50\) mm, we find \(V_{w}^{\rm(max)}=82.3\) mm/s, which matches very well with the observation (shown as the gray line in the fig. 1f inset). **Pumping Efficacy** - The flow rate achieved by this mechanism comes at the expense of the power needed to drive the undulator. This power expenditure equals the viscous dissipation within the flow. To this end, we estimate the efficacy of the mechanism by comparing the output, \(\left\langle\bar{Q}\right\rangle\), to the input, the viscous dissipation \(\bar{\mathcal{E}}\) (see Materials & Methods for a derivation of \(\bar{\mathcal{E}}\)). To demonstrate the benefit of having an interface on the pumping capability of this mechanism, we compare with the flux and dissipation for a rigid top boundary. These results are shown in fig. 5. The data points represent the dimensionless flux plotted against dissipation for a wide range of \(Ca/Bo\). The \(\epsilon\ll 1\) asymptotic result, shown as the black dashed line in fig. 5, captures these results perfectly, giving the following algebraic relation between the two \[\left\langle\bar{Q}\right\rangle=\frac{\bar{\mathcal{E}}}{2\pi}. \tag{9}\] The importance of the free surface becomes apparent when the above result is compared to the scenario of the thin film bounded by a rigid, solid wall on top. As shown in the SI, for a rigid top boundary, both the flow rate and the dissipation are given purely by the ratio \(\delta/H\). We find that the flow dissipates 4 times more energy to achieve the same amount of flow, \[\left\langle\bar{Q}\right\rangle\simeq\frac{\bar{\mathcal{E}}}{8\pi}. \tag{10}\] This is plotted as the solid black line in fig. 5. Thus it is clear that the liquid-air interface facilitates pumping by promoting horizontal transport of fluid parcels at a lower power consumption. Figure 5: **Pumping efficacy of the undulator**. The dimensionless flow rate is plotted against the dimensionless dissipation for a wide range of \(Ca/Bo\) values. The data points represent numerical results, which are obtained for \(\epsilon=0.3\). The dashed line is the asymptotic prediction of eq. (9). The solid line is the result for a top rigid boundary and represents eq. (10). ## Discussion In summary, we have demonstrated the pumping capability of a sub-surface undulating carpet; the travelling wave triggers a large-scale flow beyond its body size. A direct observation of the liquid motion above the undulator in combination with a quantitative analysis of the thin-film equations yields the optimal speed at which this device transports the maximum amount of liquid for given geometric and fluid properties.
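As a quick arithmetic check of eq. (8) against the quoted numbers (the value of \(g\) used by the authors is assumed here):

```python
rho, g = 970.0, 9.8            # silicone oil density (kg/m^3); g assumed
eta, lam, H = 0.97, 0.050, 0.0108   # viscosity (Pa s), wavelength, depth (m)

V_w_max = rho * g * H**3 / (3 * eta * lam)   # eq. (8)
print(f"optimal wave speed: {V_w_max * 1e3:.1f} mm/s")  # ~82 mm/s
```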
This optimal wave speed scales inversely with the wavelength of the undulations and linearly with the cube of the film thickness. It is interesting to note that the key governing parameter, \(Ca/Bo\), can be interpreted as a ratio of two velocities - the wave speed (\(V_{w}\)) to a characteristic relaxation or leveling speed (\(V_{r}=\rho gH^{3}/\eta\lambda\)) at which surface undulations flatten out. This leveling process is dominated by gravity since the scale of the undulations (\(\sim\lambda\)) is much larger than the capillary length (\(\sqrt{\gamma/\rho g}\)). Thus for \(V_{r}\gg V_{w}\), the undulator essentially works against a relaxed, flat interface, and liquid parcels primarily exhibit horizontal displacement over a period. In the other limit of \(V_{r}\ll V_{w}\), the free surface tends to beat in phase with the travelling boundary, amplifying the vertical displacement and subsequently reducing the net transport. Our study demonstrates that the large-scale surface flow is a direct manifestation of the thin-film hydrodynamics above the undulator by showing how the optimal pumping speed captures the peak velocities of the surface floaters. However, a quantitative analysis connecting the above two aspects of the flow field is necessary to exploit the full potential of this mechanism. Additionally, in the unexplored inertial regime, we expect the mechanism to showcase interesting dynamics due to the coupling between surface waves and finite-size particles [35]. We believe that this work opens up new pathways for self-assembly and patterning at the mesoscale [36; 37], bio-inspired strategies for remote sensing and actuation within liquids [38; 39], and control of interfacial flows using active boundaries [40; 41]. ## Materials & Methods **Modeling & printing of the undulator** - The models are designed in Fusion 360 (Autodesk). The helix is 3D printed on a Formlabs Form 2 SLA printer by photo-crosslinking resin, whereas the outer shell comprising the top surface and rectangular links is printed on an Ultimaker S5 (Ultimaker Ltd.) using a blue TPE (thermoplastic elastomer). Due to the relative flexibility of TPE, the outer shell conforms to the helix. The helix is connected to a mini servo motor which is driven by a DC power supply. All other parts (base, undulator holders, etc.) are printed using PLA (polylactic acid) filaments on an Ultimaker S5 (Ultimaker Ltd.) printer. **Measurement of the flow-field** - We perform particle image velocimetry measurements on the thin liquid layer above the undulator. The viscous liquid is seeded with 10 \(\mu\)m glass microspheres (LaVision). A 520 nm 1W laser sheet (Laserland) illuminates a longitudinal plane in the middle of the undulator. Images are recorded by a Photron Fastcam SAZ camera at 500 frames per second. Image cross-correlation is performed in the open-source PIVlab [42] to construct the velocity field. **Theoretical Modeling** - The separation of scales (\(H\ll\lambda\)) in the thin-film geometry leads to a set of reduced momentum equations and a flow field that is predominantly horizontal. Thus integration of the \(X\)-momentum equation with the no-slip boundary condition on the undulator (\(Z=h_{a}\)) and the no-shear-stress condition at the free surface (\(Z=h_{f}\)) results in \[v_{X}=\frac{1}{2\eta}\frac{\mathrm{d}p}{\mathrm{d}X}\left[(Z^{2}-h_{a}^{2})-2h_{f}(Z-h_{a})\right]-V_{w}. \tag{11}\]
Similarly, we integrate the \(Z\)-momentum equation and apply the Young-Laplace equation at the free surface, which yields the following expression for the pressure \(p\), \[p=-\gamma h_{f}^{\prime\prime}+\rho g(h_{f}-Z). \tag{12}\] Integration of the continuity equation gives the volume flow rate (per unit depth in this two-dimensional case), \(q=\int_{h_{a}}^{h_{f}}v_{X}\,\mathrm{d}Z\). Plugging eqs. (11) and (12) into the expression of the flow rate gives the following ODE, \[q=\frac{1}{3\eta}(\gamma h_{f}^{\prime\prime\prime}-\rho gh_{f}^{\prime})(h_{f}-h_{a})^{3}-V_{w}(h_{f}-h_{a}). \tag{13}\] This equation relates the yet unknown constant \(q\) and the unknown free surface shape \(h_{f}\). We close the problem by imposing the following additional constraint on \(h_{f}\): \[\int_{0}^{2\pi\lambda}h_{f}\,\mathrm{d}X=2\pi H\lambda, \tag{14}\] which states that the mean film thickness over one wavelength remains the same as that of the unperturbed interface, \(H\). In dimensionless form the above set of equations takes the form of eqs. (3) and (4) of the main text. For a direct comparison with experiments, we transform the flow rate \(q\) back to the lab frame, where it is an explicit function of time, \(Q(x,t)=q+V_{w}\left(h_{f}(x,t)-\delta\sin[(x-V_{w}t)/\lambda]\right)\). We seek periodic free-surface shapes, such that \(h_{f}=H+\text{periodic terms}\). The time-averaged flow rate thus simplifies to \[\langle Q\rangle=q+V_{w}H, \tag{15}\] where the averaging is performed at a fixed spatial location. In dimensionless form this equation becomes \(\langle\bar{Q}\rangle=\bar{q}+1\), as mentioned in the main text. To estimate the efficacy of the pumping mechanism, we compare the output, the flow rate, to the energy dissipation within the flow, which is given by \[\mathcal{E}=\eta\int_{h_{a}}^{h_{f}}\int_{0}^{2\pi\lambda}\left(\frac{\partial v_{X}}{\partial Z}\right)^{2}\mathrm{d}X\mathrm{d}Z. \tag{16}\] Using the expression for the velocity field of eq. (11), we integrate over \(Z\) to find that the free-surface shape \(h_{f}\) fully determines the amount of dissipation in the flow. In dimensionless form, the dissipation becomes \[\bar{\mathcal{E}}=\frac{1}{3}\left(\frac{Bo}{Ca}\right)^{2}\int_{0}^{2\pi}\bar{h}_{f}^{\prime 2}\left(\bar{h}_{f}-\bar{h}_{a}\right)^{3}\mathrm{d}\bar{X}, \tag{17}\] where \(\bar{\mathcal{E}}=\mathcal{E}H/(\eta\lambda V_{w}^{2})\). ## Acknowledgements We thank Yohan Sequeira and Sarah MacGregor for initial contributions. **Funding:** C.R., D.T., S.L., and S.J. acknowledge the support of NSF through grant no. CMMI-2042740. A.P. acknowledges startup funding from Syracuse University. **Author contributions:** A.P. and S.J. conceived the idea. A.P., J.Y., Y.S., C.R., and S.J. designed and performed experiments and analyzed data. Z.C., D.T., and S.L. developed the theoretical and numerical models. All authors wrote the paper collectively. **Competing interests:** The authors declare that they have no competing interests. **Data and materials availability:** All data needed to evaluate the conclusions in the paper are present in the paper and/or the Supplementary Materials. All data and Mathematica scripts are available on the Open Science Framework (DOI 10.17605/OSF.IO/ERZ79).
2304.05917
A Phoneme-Informed Neural Network Model for Note-Level Singing Transcription
Note-level automatic music transcription is one of the most representative music information retrieval (MIR) tasks and has been studied for various instruments to understand music. However, due to the lack of high-quality labeled data, transcription of many instruments is still a challenging task. In particular, in the case of singing, it is difficult to find accurate notes due to its expressiveness in pitch, timbre, and dynamics. In this paper, we propose a method of finding note onsets of singing voice more accurately by leveraging the linguistic characteristics of singing, which are not seen in other instruments. The proposed model uses mel-scaled spectrogram and phonetic posteriorgram (PPG), a frame-wise likelihood of phoneme, as an input of the onset detection network while PPG is generated by the pre-trained network with singing and speech data. To verify how linguistic features affect onset detection, we compare the evaluation results through the dataset with different languages and divide onset types for detailed analysis. Our approach substantially improves the performance of singing transcription and therefore emphasizes the importance of linguistic features in singing analysis.
Sangeon Yong, Li Su, Juhan Nam
2023-04-12T15:36:01Z
http://arxiv.org/abs/2304.05917v1
# A Phoneme-Informed Neural Network Model for Note-Level Singing Transcription ###### Abstract Note-level automatic music transcription is one of the most representative music information retrieval (MIR) tasks and has been studied for various instruments to understand music. However, due to the lack of high-quality labeled data, transcription of many instruments is still a challenging task. In particular, in the case of singing, it is difficult to find accurate notes due to its expressiveness in pitch, timbre, and dynamics. In this paper, we propose a method of finding note onsets of singing voice more accurately by leveraging the linguistic characteristics of singing, which are not seen in other instruments. The proposed model uses mel-scaled spectrogram and phonetic posteriorgram (PPG), a frame-wise likelihood of phoneme, as an input of the onset detection network while PPG is generated by the pre-trained network with singing and speech data. To verify how linguistic features affect onset detection, we compare the evaluation results through the dataset with different languages and divide onset types for detailed analysis. Our approach substantially improves the performance of singing transcription and therefore emphasizes the importance of linguistic features in singing analysis. \({}^{1}\)Graduate School of Culture Technology, KAIST, Daejeon, Republic of Korea \({}^{2}\)Institute of Information Science, Academia Sinica, Taipei, Taiwan Keywords: singing transcription, onset detection, phoneme classification, music information retrieval ## 1 Introduction Note-level singing transcription is a music information retrieval (MIR) task that predicts attributes of note events (i.e., onset time, offset time, and pitch value) from audio recordings of singing voice. Although this task has been studied for a long time, the performance of singing transcription is generally inferior to that of other musical instruments such as polyphonic piano music [1, 2]. The lack of large-scale labeled datasets is one of the major technical barriers. In addition, singing voice has highly diverse expressiveness in terms of pitch, timbre, dynamics, as well as the phonation of lyrics. For example, singing techniques such as vibrato, bending, and portamento make it difficult to find note boundaries and note-level pitches. This variability makes even manual note transcription by human experts difficult [3]. This in turn has resulted in the lack of high-quality labeled datasets. Another important characteristic of singing voice which is well distinguished from other instruments is that it conveys linguistic information through lyrics, and this influences note segmentation. Given that most singing notes are syllabic (i.e., one syllable of text is set to one note of music) or melismatic (i.e., one syllable is sung with multiple notes), the relationship between the change of syllables and the change of notes is intricate. This produces certain kinds of note patterns in singing voice that are not seen in any other instruments. Therefore, we need to consider such linguistic characteristics in automatic singing transcription models. In this paper, we propose a neural network model that incorporates linguistic information into the input to improve note-level singing transcription for singing voice. Similar to earlier research, we use the log-scaled mel-spectrogram as a primary input. In addition to that, we take the phonetic posteriorgram (PPG) from a pre-trained phoneme classifier as the second input.
As shown in Figure 1, the PPG shows a pattern distinct from that of the mel-spectrogram, and it can be noted that the transition pattern of the PPG can better describe the onset events at 1.2 and 2.8 seconds. We propose a two-branch neural network model based on a convolutional recurrent neural network (CRNN) backbone to represent both of the input features effectively. In the experiment, we conduct an ablation study to examine the effectiveness of the model design, mel-spectrogram, and PPG. Also, we compare the effects of mel-spectrogram and PPG on transition and re-onset, the two types of challenging onset events in singing transcription. Finally, we demonstrate that our proposed model outperforms a few state-of-the-art note-level singing transcription models, especially in terms of onset detection. Figure 1: An example of singing voice: mel-spectrogram (top), piano roll with onsets and pitches of notes (middle), and phonetic posteriorgram (PPG) (bottom) from singing (phonemes with probability under 0.5 in this example were omitted). ## 2 Related Works Traditional studies mainly used various types of spectral difference for onset detection of audio signals [4]. The spectral difference is particularly successful at finding percussive onsets, but it performs poorly on expressive instruments that have soft onsets. Deep neural networks have been actively applied to singing voice as well. Nishikimi _et al_. [5] suggested an attention-based encoder-decoder network with long short-term memory (LSTM) modules. Fu _et al_. [6] proposed a hierarchical structure of note change states to segment singing notes and used multi-channel features to increase the performance. Hsu _et al_. [7] suggested a semi-supervised automatic singing transcription (AST) framework. More recently, [8] proposed an object detection-based approach that significantly improved the performance of singing voice onset/offset detection. While the majority of them relied on note onset and offset information from melody labels, one recent work attempted to use phoneme information as part of the input features for note segmentation [9]. However, the performance was not convincing. In this work, we present a neural network architecture that makes effective use of the phoneme information. ## 3 Proposed Method ### Model Architecture Our proposed model architecture consists of two branch networks and a single RNN with a dense layer, as illustrated in Figure 2. One branch network takes the log-scaled mel-spectrogram \(X\) and the other branch network takes the phonetic posteriorgram (PPG) \(\hat{P}\) from a pre-trained phoneme classifier. Both branches are CRNNs whose CNN parts are a modified version of _ConvNet_ proposed in [10], which is commonly used in the piano transcription task [1, 11]. To obtain a wider receptive field along the time axis, we replaced the first convolution layer with a dilated convolution with a dilation factor of 2 on the time-frame axis. To predict the note events, we combined the two branch networks by concatenating their outputs and connecting them to an additional RNN layer and a dense layer. The output layer is represented with a 3-dimensional sigmoid vector where each element detects onset, offset, and activation as binary states. The activation indicates whether the note is on or off at each frame. ### Framewise Phoneme Classifier We extracted the phonetic information using a phoneme classifier which returns its output as a PPG. We implemented it using a single CRNN network with a dense layer. We used the original _ConvNet_ architecture for the CNN part.
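A simplified sketch of the two-branch architecture of Section 3.1 is given below, with the 48/48/96 convolutional channels and 768-unit layer sizes taken from the training details in Section 4.3. The exact _ConvNet_ kernel sizes and pooling of [10] are not reproduced here, so this is an illustration of the topology in Figure 2 rather than the reference implementation.

```python
import torch
import torch.nn as nn

class Branch(nn.Module):
    """One CRNN branch: a small ConvNet-style stack plus a biLSTM."""
    def __init__(self, n_bins):
        super().__init__()
        self.cnn = nn.Sequential(
            # first layer dilated (factor 2) along the time axis
            nn.Conv2d(1, 48, 3, padding=(2, 1), dilation=(2, 1)), nn.ReLU(),
            nn.Conv2d(48, 48, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d((1, 2)),                 # pool frequency only
            nn.Conv2d(48, 96, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d((1, 2)),
        )
        self.fc = nn.Linear(96 * (n_bins // 4), 768)
        self.rnn = nn.LSTM(768, 384, batch_first=True, bidirectional=True)

    def forward(self, x):                  # x: (batch, time, n_bins)
        z = self.cnn(x.unsqueeze(1))       # (batch, 96, time, n_bins//4)
        z = z.permute(0, 2, 1, 3).flatten(2)
        z, _ = self.rnn(torch.relu(self.fc(z)))
        return z                           # (batch, time, 768)

class NoteNet(nn.Module):
    """Two branches (mel + PPG), concatenated into one more biLSTM and a
    3-way sigmoid head (onset / offset / activation)."""
    def __init__(self, n_mels=80, n_phonemes=39):
        super().__init__()
        self.mel_branch = Branch(n_mels)
        self.ppg_branch = Branch(n_phonemes)
        self.rnn = nn.LSTM(2 * 768, 384, batch_first=True, bidirectional=True)
        self.head = nn.Linear(768, 3)

    def forward(self, mel, ppg):
        z = torch.cat([self.mel_branch(mel), self.ppg_branch(ppg)], dim=-1)
        z, _ = self.rnn(z)
        return torch.sigmoid(self.head(z))
```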
We tried two loss functions to train the phoneme classifier network. One is the framewise cross-entropy loss, which is possible when we have time-aligned phoneme labels. Since it is difficult to obtain time-aligned phoneme labels at the frame level, especially for singing voice, we also used the connectionist temporal classification (CTC) loss function [12], which can handle the alignment between the predicted phoneme sequence (\(\hat{p}\)) and the ground truth phoneme sequence (\(p\)), which have unequal lengths. The CTC algorithm predicts phoneme sequences with inserted blank labels along the possible prediction paths, where \(\mathcal{B}\) denotes the mapping that collapses a prediction path into a phoneme sequence by removing blanks and repetitions. Since the CTC loss function is optimized for predicting the entire sequence, the prediction pattern tends to be spiky and sparse, and thus it does not find the boundaries of phonemes well [12, 13]. To solve this problem, we used two bidirectional LSTM layers and a single dense layer that reconstruct the input log-scaled mel-spectrogram (\(\hat{X}\)). This was proposed to enhance the time alignment when the CTC loss is used [14]. For the reconstruction loss (\(\mathcal{L}_{\text{recon}}\)), we normalized the log-scaled mel-spectrogram from \(-1\) to \(1\) (denoted \(\tilde{X}\)), applied the \(\tanh\) function for the activation, and used the \(L_{2}\) loss function. These loss functions are defined as: \[\mathcal{L}_{\text{CTC}} =-\log\sum_{\hat{p}:\,\mathcal{B}(\hat{p})=p}\prod_{t=0}^{T-1}\mathbb{P}(\hat{p}_{t}|X)\,, \tag{1}\] \[\mathcal{L}_{\text{recon}} =\left\|\tilde{X}-\hat{X}\right\|^{2},\] \[\mathcal{L}_{\text{PPG}} =\mathcal{L}_{\text{CTC}}+\mathcal{L}_{\text{recon}}\,,\] where \(T\) is the total number of time steps, \(p\) is the ground truth phoneme sequence, and \(\mathbb{P}(\hat{p}_{t}|X)\) is the PPG at time \(t\). ### Label Smoothing Unlike other instruments, synthesized or auto-aligned onset/offset labels are hardly available in the case of singing datasets [15]. In addition, since singing onsets are temporally soft, locating the exact onset positions of singing from a waveform or mel-spectrogram is by no means straightforward. Such softness of the onset is one of the factors that makes onset detection for singing voice more challenging to train. Previous frame-wise onset detection studies [6, 7] extended the duration of the onset label to solve this problem. Following these previous studies, we also used a smoothing method to increase the length of the onset and offset labels. Specifically, we smoothed the 1-D one-hot onset label sequence \(y_{\text{on}}:=y_{\text{on}}[n]\) (\(n\) denotes the time index) and the offset label sequence \(y_{\text{off}}:=y_{\text{off}}[n]\) through linear convolution with a scaled triangular window function \(w[n]\). The scale factor of the triangular function, \(N\), stands for the number of frames with nonzero values. To keep the center of the label at \(1\) after the smoothing, we only used odd numbers for the scale factor \(N\). The convolution process is represented as \[w[n] =\begin{cases}1-\left|\frac{n}{(N+1)/2}\right|&\text{if }|n|\leq\frac{(N+1)}{2}\\ 0&\text{otherwise,}\end{cases} \tag{2}\] \[y_{\text{on},s}[n] =y_{\text{on}}[n]*w[n]\] \[y_{\text{off},s}[n] =y_{\text{off}}[n]*w[n]\] where the operation \(*\) represents the linear convolution and \(n\) is the frame index.
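In practice, the smoothing in eq. (2) is a one-line convolution; a minimal sketch with \(N=5\) (100 ms at the 20 ms hop used later) is shown below.

```python
import numpy as np

def smooth_labels(y, N=5):
    """Triangular label smoothing of eq. (2); N (odd) is the number of
    nonzero window samples (N=5 frames ~ 100 ms at a 20 ms hop)."""
    half = (N + 1) / 2
    n = np.arange(-(N - 1) // 2, (N - 1) // 2 + 1)
    w = 1.0 - np.abs(n / half)          # peak value 1 at the label frame
    return np.convolve(y, w, mode="same")

y_on = np.zeros(20); y_on[10] = 1.0
print(smooth_labels(y_on)[8:13])        # [1/3, 2/3, 1, 2/3, 1/3]
```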
### Note Decoding To find the positions of onsets from the prediction output, we set a constant threshold and take the frames with local maximum values above the threshold as the onset positions. When finding the offset of a note, we first find the offset candidates between the current onset time and the next onset time. An offset candidate is either the highest peak of the offset prediction or the time frame at which the activation prediction goes lower than 0.5. Figure 2: The proposed model architecture. If multiple offset candidates exist, we set the offset to the latest offset candidate. If no offset candidate is found, the offset of the note is set to the time frame of the next onset. The threshold of onset and offset is set to 0.2. In order to determine the threshold, we evaluated the validation set using thresholds ranging from 0.1 to 0.9 in increments of 0.1 to identify the optimal threshold. For note-level singing transcription, we estimated the note-level pitch from the frame-wise F0s of the note segment to find the pitch of the note, following [6]. We extracted F0s with the PYIN algorithm [16], which is one of the most accurate pitch trackers. To compress the F0 contour to the note-level pitch, we used the weighted median algorithm, which finds the 50th percentile in the ordered elements with given weights. In this experiment, we use a normalized Hann window function with the same length as the note segment as the weight of the weighted median, to reduce the influence of the F0s near the boundaries, which are the most expressive parts. Since the sum of all weight values should be one, the Hann window function is normalized by dividing by the sum of the window elements. ## 4 Experiments ### Datasets We used SSVD v2.0 as the primary dataset [8]. It contains multiple sight-singing recordings, consisting of 67 singing audio files for the train and validation sets, and 127 audio files for the test set. The human-labeled annotations include onset, offset, and averaged note pitch. To use both phoneme and note labels given the audio, we also used the 50 songs in Korean from the CSD dataset [17], which have both note and phoneme labels of a female professional singer. Since the original note annotations of CSD were targeted for singing voice synthesis, we found they need some refinement for the note transcription task. Thus, we re-annotated 50 songs of CSD for our experiment, following the rule suggested by [3]. The re-annotated labels of CSD can be found on our GitHub page 1. Footnote 1: [https://github.com/seyong92/CSD_reannotation](https://github.com/seyong92/CSD_reannotation) The refined CSD is split into 35, 5, and 10 songs for the train, validation, and test sets, respectively. To train the phoneme classifier for the model with SSVD v2.0, we used TIMIT [18], which contains 5.4 hours of English speech with time-aligned phoneme labels. While training the phoneme classifier network, we reduced the phoneme types to 39 following the CMU pronouncing dictionary [19]. For the model with CSD, we used the unaligned phoneme labels in CSD for training. To compare the transcription performance of the proposed model with previous work, we also used the ISMIR2014 [3] dataset, which contains 38 songs sung by both adults and children, as a test set. ### Evaluation Metrics We evaluated the models with the mir_eval library [20] for onset/offset detection and note-level transcription.
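The metrics and tolerances described next correspond one-to-one to calls in mir_eval's transcription submodule; the sketch below illustrates them on toy note lists (the intervals and pitches are made up for illustration).

```python
import numpy as np
import mir_eval.transcription as mt

# notes as (onset, offset) intervals in seconds and pitches in Hz
ref_int = np.array([[0.10, 0.55], [0.55, 1.00]]); ref_hz = np.array([220.0, 246.9])
est_int = np.array([[0.12, 0.57], [0.56, 1.02]]); est_hz = np.array([220.0, 247.0])

# COn with the 50 ms onset tolerance
p, r, f_on = mt.onset_precision_recall_f1(ref_int, est_int, onset_tolerance=0.05)

# COnPOff: onset within 50 ms, pitch within 50 cents, offset within
# max(50 ms, 20% of the reference note duration)
P, R, F, _ = mt.precision_recall_f1_overlap(
    ref_int, ref_hz, est_int, est_hz,
    onset_tolerance=0.05, pitch_tolerance=50.0,
    offset_ratio=0.2, offset_min_tolerance=0.05)
print(f_on, F)
```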
We used the metrics proposed in [3]: the F1-measures of COn (correct onset), COff (correct offset), COnOff (correct onset and offset), COnP (correct onset and pitch), and COnPOff (correct onset, offset, and pitch). We used the default parameters of mir_eval, which set the onset tolerance to 50 ms, the offset tolerance to the larger value between 50 ms and 0.2 of the note duration, and the pitch tolerance to 50 cents. Also, we report the results when the onset/offset tolerances are 100 ms, considering the softness of singing onsets. ### Training Details We computed an 80-bin mel-spectrogram \(X\) with a hop size of 320 samples (20 ms) and an FFT size of 1024 samples after resampling the audio files to 16 kHz. For the modified _ConvNet_ module, we set 48/48/96 nodes for the convolutional layers and 768 nodes for the dense layer. We used 768 nodes in all bidirectional LSTM layers and set the last FC layer in the note onset/activation detector to have two separate nodes for onset and activation detection, respectively. For the label smoothing, we used a scale factor of 5 to extend the label length to 100 ms, which showed the best results in our experiment. To train the note onset/offset detection network, we used the AdamW optimizer [21] with a batch size of 8 and a learning rate of 1e-6. We reduced the learning rate by a factor of 0.98 every 1000 steps. During training, we used random 5-second audio segments. The validation set was evaluated every 500 steps, and we stopped training when there was no improvement in the model for 10 validation steps. To train the phoneme classifier, we used the Adam optimizer with a batch size of 16 and a learning rate of 2e-4. We reduced the learning rate by a factor of 0.98 every 900 steps. We validated the phoneme classifier every 500 steps and stopped training when there was no improvement in the model for 5 validation steps. ## 5 Results and Discussions ### Ablation Study We conducted an ablation study to see the effect of the input features and model architectures.
The proposed model shown in Figure 2 corresponds to "Dual CRNNs + one RNN" in (g). \begin{table} \begin{tabular}{l|c|c c|c c|c c|c c|c c} \hline \hline Training dataset & & \multicolumn{4}{c|}{SSVD v2.0} & \multicolumn{6}{c}{CSD-refined} \\ Evaluation dataset & & \multicolumn{2}{c}{ISMIR2014} & \multicolumn{2}{c|}{SSVD v2.0} & \multicolumn{2}{c}{CSD-refined} & \multicolumn{2}{c}{ISMIR2014} & \multicolumn{2}{c}{SSVD v2.0} \\ \hline & Feature & COn & COff & COn & COff & COn & COff & COn & COff & COn & COff \\ \hline (a) Single CRNN & \(X\) & 0.8244 & 0.7751 & 0.8956 & 0.8983 & 0.9797 & 0.9719 & 0.8812 & 0.7524 & 0.8866 & 0.8007 \\ (b) Dual CRNNs + one RNN & \(X,X\) & 0.9133 & 0.8513 & 0.9486 & 0.9566 & 0.9888 & 0.9838 & 0.9904 & 0.7636 & 0.8988 & 0.8089 \\ \hline (c) Single CRNN & \(\hat{P}\) & 0.8655 & 0.7776 & 0.9223 & 0.9105 & 0.9890 & 0.9660 & 0.9048 & 0.7685 & 0.9063 & 0.8296 \\ (d) Dual CRNNs + one RNN & \(\hat{P},\hat{P}\) & 0.9094 & 0.8310 & 0.9342 & 0.9470 & 0.9907 & 0.9638 & 0.9090 & 0.7733 & 0.9142 & 0.8336 \\ \hline (e) Dual CNNs + one RNN & \(X,\hat{P}\) & 0.9024 & 0.8349 & 0.9439 & 0.9420 & 0.9877 & 0.9791 & 0.9016 & 0.7852 & 0.9098 & **0.8340** \\ (f) Dual CNNs + two RNNs & \(X,\hat{P}\) & 0.9230 & 0.8538 & 0.9496 & 0.9531 & 0.9914 & 0.9839 & **0.9150** & **0.7804** & **0.9199** & 0.8328 \\ (g) Dual CRNNs + one RNN & \(X,\hat{P}\) & **0.9305** & **0.8576** & **0.9569** & **0.9692** & **0.9923** & **0.9864** & 0.9145 & 0.7723 & 0.9166 & 0.8257 \\ \hline \hline \end{tabular} \end{table} Table 1: Onset/Offset detection results from various neural network architectures with two input features. \(X\) and \(\hat{P}\) denote mel-spectrogram and PPG, respectively. (g) corresponds to the neural network architecture in Figure 2. We first compare it to a single CRNN model with only one type of feature (either the mel-spectrogram in (a) or the PPG in (c)). Considering that the model architecture can affect the performance, we also compared the proposed model to the same "Dual CRNNs + one RNN" but with one type of input feature for both inputs (either the mel-spectrogram in (b) or the PPG in (d)). Given the proposed model, we also removed the RNN module in each CRNN branch in (e), and then stacked another RNN module on top of (e) in (f). Table 1 shows the onset/offset detection results of all the compared models. Single CRNNs with only one input feature in (a) and (c) have significantly lower accuracy than the proposed model in (g). The gap is relatively smaller when the model was trained with CSD. Interestingly, the single CRNN model with PPG consistently outperformed the one with the mel-spectrogram. The results from the same model architecture with different input features in (b), (d), and (g) show that using both the mel-spectrogram and PPG is more effective than using either one of them. However, the gaps are less significant than those in the comparison with the single CRNNs in (a) and (c). This indicates that the model architecture is also important for improving the performance. Likewise, the results in (e), (f), and (g) show that the design choice of the neural network affects the performance. Since CSD is a small dataset, the proposed model has a tendency to overfit it. Overall, the proposed model in (g) shows the best performance. We further investigated the effect of the input features by looking into the recall accuracy for two special types of onsets: re-onset and transition. They are note onsets that are 20 ms or less apart from the offset of the previous note.
The difference between the two types is whether the pitch changes (transition) or not (re-onset). A re-onset usually occurs when the syllable in the lyrics or the energy changes while the same pitch continues. Note that, since our model does not predict the onset types, only the recall accuracy can be computed. As shown in Figure 3, the models with the mel-spectrogram (in red) tend to detect more transitions, indicating that they are more sensitive to pitch changes. On the other hand, the models with the PPG (in blue) tend to detect more re-onsets, showing that they capture phonetic changes well. Lastly, the models with both features have more balanced accuracy in both transition and re-onset. The demo examples, more analysis, and pre-trained models are available on the companion website. 2 Footnote 2: [https://seyong92.github.io/phoneme-informed-transcription-blog/](https://seyong92.github.io/phoneme-informed-transcription-blog/) ### Comparison with Prior Work Table 2 shows the comparison with prior work on the ISMIR2014 dataset, which has been widely used for singing voice onset/offset detection (or note segmentation). For a fair comparison, we retrained a recent state-of-the-art model [8] with the same dataset we used for the proposed model. Our proposed model outperforms the state-of-the-art model in onset F-score at both tolerances, while it is slightly worse in offset F-score at the 50 ms tolerance. The publicly available note transcription software (TONY) and model package (Omnizart) have significantly lower accuracy than the two models. Finally, to see the performance for singing note transcription including pitch information, we measured COnP and COnPOff on ISMIR2014 and SSVD v2.0 in Table 3. The results show that the proposed model achieves consistently better performances than TONY and Omnizart. ## 6 Conclusion We presented a neural network architecture for note-level singing transcription that takes advantage of PPG on top of mel-spectrogram. Through the ablation study, we examined various architectures along with the two input features, showing that the additional phonetic information is effective in singing onset/offset detection. Also, we showed that the proposed model outperforms the compared models on ISMIR2014 and SSVD v2.0. For future work, we plan to explore models that effectively handle weak supervision from noisy melody and lyrics labels on a large-scale dataset [24]. \begin{table} \begin{tabular}{c c c c c c c c c c c c c} \hline \hline Model & \multicolumn{3}{c}{COn (50ms)} & \multicolumn{3}{c}{COn (100ms)} & \multicolumn{3}{c}{COff (50ms)} & \multicolumn{3}{c}{COff (100ms)} \\ & P & R & F & P & R & F & P & R & F & P & R & F \\ \hline TONY [22] & 0.7068 & 0.6326 & 0.6645 & 0.8402 & 0.7486 & 0.7877 & 0.7862 & 0.6981 & 0.7358 & 0.8405 & 0.7471 & 0.7870 \\ Omnizart [7, 23] & 0.7797 & 0.8229 & 0.7951 & 0.8667 & 0.9153 & 0.8843 & 0.7698 & 0.8132 & 0.7852 & 0.8394 & 0.8842 & 0.8554 \\ MusicYOLO (retrained) [8] & 0.9427 & 0.8970 & 0.9176 & **0.9711** & 0.9247 & 0.9456 & **0.8924** & **0.8504** & **0.8693** & **0.9476** & 0.9024 & 0.9227 \\ **Proposed** & **0.9448** & **0.9188** & **0.9305** & 0.9652 & **0.9387** & **0.9506** & 0.8701 & 0.8473 & 0.8576 & 0.9429 & **0.9176** & **0.9290** \\ \hline \hline \end{tabular} \end{table} Table 2: Onset/Offset detection results on ISMIR2014. Both MusicYOLO and the proposed model were trained with SSVD v2.0. Omnizart is a pretrained note transcription model package (not trained with SSVD v2.0). TONY is a free, open-source application for pitch and note transcription.
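For reference, the transition/re-onset split analyzed in Figure 3 can be reproduced from a note list with a few lines of code; the pitch-change threshold below is an assumption, since the paper only distinguishes changed versus unchanged pitch.

```python
def split_onset_types(notes, gap=0.02, pitch_eps=0.5):
    """Classify onsets as 'transition' or 're-onset' (Sec. 5.1).
    `notes` is a time-sorted list of (onset_sec, offset_sec, midi_pitch);
    an onset <= `gap` seconds after the previous offset is a transition
    if the pitch changes by more than `pitch_eps` semitones (assumed
    threshold), and a re-onset otherwise."""
    types = {}
    for prev, cur in zip(notes, notes[1:]):
        if cur[0] - prev[1] <= gap:
            changed = abs(cur[2] - prev[2]) > pitch_eps
            types[cur[0]] = "transition" if changed else "re-onset"
    return types

notes = [(0.0, 0.50, 60), (0.51, 1.00, 62), (1.01, 1.50, 62)]
print(split_onset_types(notes))   # {0.51: 'transition', 1.01: 're-onset'}
```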
\begin{table} \begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{ISMIR2014} & \multicolumn{2}{c}{SSVD v2.0} \\ \hline Model & COnP & COnPOff & COnP & COnPOff \\ \hline TONY [22] & 0.6009 & 0.4621 & 0.7311 & 0.6794 \\ Omnizart [7, 23] & 0.6174 & 0.4992 & 0.6047 & 0.5151 \\ **Proposed** & **0.8975** & **0.7728** & **0.8558** & **0.8303** \\ \hline \hline \end{tabular} \end{table} Table 3: Note transcription results on ISMIR2014 and SSVD v2.0. The proposed model was trained with SSVD v2.0. Figure 3: Transition and re-onset recall of the models in the ablation study on ISMIR2014. The red triangle is the model with mel-spectrogram, the blue square is the model with PPG, and the green circle is the model with both features.
2305.03097
Federated Ensemble-Directed Offline Reinforcement Learning
We consider the problem of federated offline reinforcement learning (RL), a scenario under which distributed learning agents must collaboratively learn a high-quality control policy only using small pre-collected datasets generated according to different unknown behavior policies. Naïvely combining a standard offline RL approach with a standard federated learning approach to solve this problem can lead to poorly performing policies. In response, we develop the Federated Ensemble-Directed Offline Reinforcement Learning Algorithm (FEDORA), which distills the collective wisdom of the clients using an ensemble learning approach. We develop the FEDORA codebase to utilize distributed compute resources on a federated learning platform. We show that FEDORA significantly outperforms other approaches, including offline RL over the combined data pool, in various complex continuous control environments and real-world datasets. Finally, we demonstrate the performance of FEDORA in the real-world on a mobile robot. We provide our code and a video of our experiments at https://github.com/DesikRengarajan/FEDORA.
Desik Rengarajan, Nitin Ragothaman, Dileep Kalathil, Srinivas Shakkottai
2023-05-04T18:25:34Z
http://arxiv.org/abs/2305.03097v2
# Federated Ensemble-Directed Offline Reinforcement Learning ###### Abstract We consider the problem of federated offline reinforcement learning (RL), a scenario under which distributed learning agents must collaboratively learn a high-quality control policy only using small pre-collected datasets generated according to different unknown behavior policies. Naively combining a standard offline RL approach with a standard federated learning approach to solve this problem can lead to poorly performing policies. In response, we develop the Federated Ensemble-Directed Offline Reinforcement Learning Algorithm (FEDORA), which distills the collective wisdom of the clients using an ensemble learning approach. We develop the FEDORA codebase to utilize distributed compute resources on a federated learning platform. We show that FEDORA significantly outperforms other approaches, including offline RL over the combined data pool, in various complex continuous control environments and real-world datasets. Finally, we demonstrate the performance of FEDORA in the real world on a mobile robot. ## 1 Introduction Federated learning is an approach wherein clients learn collaboratively by sharing their locally trained models (not their data) with a federating agent, which periodically combines their models and returns the federated model to the clients for further refinement (Kairouz et al., 2021; Wang et al., 2021). Federated learning has seen much success in supervised learning applications due to its ability to generate well-trained models using small amounts of data at each client while preserving privacy and reducing the usage of communication resources. Recently, there has been growing interest in employing federated learning for _online_ RL problems, where each client collects data online by following its own Markovian trajectory while simultaneously updating the model parameters (Khodadadian et al., 2022; Nadiger et al., 2019; Qi et al., 2021). However, such an online learning approach requires sequential interactions with the environment or a simulator, which may not be feasible in many real-world applications. Instead, each client may have pre-collected operational data generated according to a client-specific behavior policy. The _federated offline reinforcement learning_ problem is to learn the optimal policy using these heterogeneous offline data sets distributed across the clients and collected by different unknown behavior policies, without sharing the data explicitly. The framework of offline reinforcement learning (Levine et al., 2020) offers a way to learn the policy using only the offline data collected according to a behavior policy, without any direct interactions with the environment. However, naively combining an off-the-shelf offline RL algorithm such as TD3-BC (Fujimoto and Gu, 2021) with an off-the-shelf federated supervised learning approach such as FedAvg (McMahan et al., 2017) will lead to a poorly performing policy, as we show later (see Fig. 1-3). Federated offline RL is significantly more challenging than its supervised learning counterpart and centralized offline RL for the following reasons. (i) _Ensemble heterogeneity:_ the client data sets are collected according to different behavior policies with different performance levels. It is vital to capture the collective insight of these policies rather than merely average them. (ii) _Pessimistic value computation:_ popular offline RL algorithms involve a pessimistic term with respect to the offline data for minimizing the distribution shift.
However, in a federated setting, it is important to capture the information from the data of other clients. (iii) _Data heterogeneity:_ Client drift due to multiple local updates is a known problem in federated learning. Moreover, training the offline policies at each client also suffers from the data heterogeneity problem inherent to federated learning.

In this work, we propose the federated ensemble-directed offline RL algorithm (FEDORA), which can overcome these challenges to obtain high-quality control policies collaboratively. Our proposed approach recognizes that individual client policies are of varying levels of expertise and refines the federated policy based on the relative merits of each client policy in an ensemble learning manner. The algorithm combines the insight across critics similarly and ensures optimism across the federated and local critics, thereby setting ambitious targets for training. FEDORA addresses the issue of data heterogeneity by regularizing the client policies with respect to both the federated policy and the local dataset. Finally, FEDORA prunes the influence of irrelevant data by gradually decaying the reliance on a dataset based on the quality of the policy it can generate. Federated offline RL has not been well studied in the literature, and to the best of our knowledge, no other work presents an approach that addresses the general problem. The primary contribution of this work is to systematically identify the fundamental challenges of this problem and design methods that explicitly tackle each of them. The idea of learning over an ensemble of policies of different qualities to leverage their collective insight is unique to our method. We develop a framework for implementing FEDORA on a single system and over distributed compute resources. We evaluate FEDORA on standard MuJoCo environments and real-world datasets and show that it outperforms several other approaches, including pooling all the data and performing offline RL centrally. We also demonstrate FEDORA's excellent performance via real-world experiments on a TurtleBot robot (Amsters and Slaets, 2020). We provide the code for our implementation and the videos of the robot experiments at [https://github.com/DesikRengarajan/FEDORA](https://github.com/DesikRengarajan/FEDORA).

## 2 Related Work

**Offline RL:** The goal of offline RL is to learn a policy from a fixed dataset generated according to a behavior policy (Levine et al., 2020). One of the key challenges of the offline RL approach is the distribution shift problem, where the state-action visitation distribution of the learned policy may differ from that of the behavior policy which generated the offline data. It is known that this distribution shift may lead to poor performance of the learned policy (Levine et al., 2020). A common method used by offline RL algorithms to tackle this problem is to learn a policy that is close to the behavior policy that generated the data. This is achieved by using some kind of regularization either on the actor or the critic (Fujimoto and Gu, 2021; Fujimoto et al., 2019; Kumar et al., 2020, 2019; Wu et al., 2019). Some offline RL algorithms perform weighted versions of behavior cloning or imitation learning on either the whole dataset or a subset of it (Wang et al., 2018; Peng et al., 2019; Chen et al., 2020). With the success of large language models, many recent works have framed offline RL as a sequence modeling problem and have used transformers to solve it (Chen et al., 2021; Janner et al., 2021).
**Federated Learning:** McMahan et al. (2017) introduced FedAvg, a federation strategy where clients collaboratively learn a joint model without sharing data. Reddi et al. (2021) generalized FedAvg by separating the client learning and server learning processes. A key problem in federated learning is data heterogeneity, where clients have non-identically distributed data, which causes unstable and slow convergence (Wang et al., 2021; Karimireddy et al., 2020; Li et al., 2020). To tackle the issue of data heterogeneity, Li et al. (2020) propose FedProx, a variant of FedAvg, where a proximal term is introduced to prevent the local model from deviating too much from the server model. Karimireddy et al. (2020) take a different approach to tackling data heterogeneity and the resultant client drift by using control variates.

**Federated Reinforcement Learning:** Federated learning has been extended to the online RL setting recently. Khodadadian et al. (2022) analyzed the performance of federated tabular Q-learning. Qi et al. (2021) combined traditional online RL algorithms with FedAvg for various applications. Some works propose methods to vary the weighting scheme of FedAvg according to performance metrics such as the length of a rally in the game of Pong (Nadiger et al., 2019) or the average return in the past 10 training episodes (Lim et al., 2021) to achieve better performance or personalization. Wang et al. (2020) propose a method to compute weights using attention over performance metrics of clients such as average reward, average loss, and hit rate for an edge caching application. Hebert et al. (2022) use a transformer encoder to learn contextual relationships between agents in the online RL setting. Hu et al. (2021) propose an alternative approach to federation where reward shaping is used to share information among clients. Xie and Song (2023) propose a KL divergence-based regularization between the local and global policy to address the issue of data heterogeneity in an online RL setting. In the offline RL setting, Zhou et al. (2022) propose a federated dynamic treatment regime algorithm by formulating offline federated learning using a multi-site MDP model constructed using linear MDPs. However, this approach relies on running the local training to completion followed by just one step of federated averaging. Unlike this work, our proposed method does not assume linear MDPs, which is a limiting assumption in many real-world problems. Moreover, we use the standard federated learning philosophy of periodic federation followed by multiple local updates. _To the best of our knowledge, ours is the first work that proposes a general federated offline RL algorithm for clients with heterogeneous data._

## 3 Preliminaries

**Federated Learning:** The goal of federated learning is to minimize the following objective, \[F(\theta)=\mathbb{E}_{i\sim\mathcal{P}}\left[F_{i}(\theta)\right], \tag{1}\] where \(\theta\) represents the parameter of the federated (server) model, \(F_{i}\) denotes the local objective function of client \(i\), and \(\mathcal{P}\) is the distribution over the set of clients \(\mathcal{N}\). The FedAvg algorithm (McMahan et al., 2017) is a popular method to solve Eq. (1) in a federated way. FedAvg divides the training process into rounds, where at the beginning of each round \(t\), the server broadcasts its current model \(\theta^{t}\) to all the clients, and each client initializes its current local model to the current server model. Clients perform multiple local updates on their own dataset \(\mathcal{D}_{i}\) to obtain an updated local model \(\theta^{t}_{i}\). The server then averages these local models in proportion to the sizes of their local datasets to obtain the server model \(\theta^{t+1}\) for the next round of federation, as \[\theta^{t+1}=\sum_{i=1}^{|\mathcal{N}|}w_{i}\theta^{t}_{i},\quad w_{i}=\frac{ |\mathcal{D}_{i}|}{|\mathcal{D}|},\quad|\mathcal{D}|=\sum_{i=1}^{|\mathcal{N} |}|\mathcal{D}_{i}|. \tag{2}\]
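As a minimal illustration of the server update in Eq. (2), the following sketch averages flattened client parameter vectors weighted by dataset size; representing each model as a single flat array is our own simplification for illustration, not a specific library's API.

```
import numpy as np

def fedavg(client_params, dataset_sizes):
    # Weighted average of client models, Eq. (2): w_i = |D_i| / |D|.
    sizes = np.asarray(dataset_sizes, dtype=float)
    weights = sizes / sizes.sum()
    stacked = np.stack(client_params)        # (num_clients, num_params)
    return np.einsum("i,ij->j", weights, stacked)

# Toy usage: three clients with different amounts of local data.
models = [np.full(4, c) for c in (1.0, 2.0, 3.0)]
theta_next = fedavg(models, dataset_sizes=[100, 300, 600])
```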
**Reinforcement Learning:** We model RL using the Markov Decision Process (MDP) framework denoted as a tuple \((\mathcal{S},\mathcal{A},R,P,\gamma,\mu)\), where \(\mathcal{S}\) is the state space, \(\mathcal{A}\) is the action space, \(R:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}\) is the reward function, \(P:\mathcal{S}\times\mathcal{A}\times\mathcal{S}\rightarrow[0,1]\) denotes the transition probability function that gives the probability of transitioning to a state \(s^{\prime}\) by taking action \(a\) in state \(s\), \(\gamma\) is the discount factor, and \(\mu\) is the distribution of the initial state \(s_{0}\). A policy \(\pi\) is a function that maps states to actions (deterministic policy) or states to a distribution over actions (stochastic policy). The goal of RL is to maximize the infinite horizon discounted reward of policy \(\pi\), defined as \(J(\pi)=\mathbb{E}_{\pi,P,\mu}\left[\sum_{t=0}^{\infty}\gamma^{t}R(s_{t},a_{t})\right]\), which is the expected cumulative discounted reward obtained by executing policy \(\pi\). The state-action value function (or Q-function) of a policy \(\pi\) at state \(s\) with action \(a\) is the expected cumulative discounted reward obtained by taking action \(a\) in state \(s\) and following policy \(\pi\) thereafter: \(Q^{\pi}(s,a)=\mathbb{E}_{\pi,P}\left[\sum_{t=0}^{\infty}\gamma^{t}R(s_{t},a_{t })|s_{0}=s,a_{0}=a\right]\).

**Offline Reinforcement Learning:** The goal of offline RL is to learn a policy \(\pi\) only using a static dataset \(\mathcal{D}\) of transitions \((s,a,r,s^{\prime})\) collected using a behavior policy \(\pi_{\mathrm{b}}\), without any additional interactions with the environment. Offline RL algorithms typically aim to learn a policy that maximizes the value, with some kind of regularization with respect to the behavior policy to ensure that the learned policy does not deviate from the behavior policy. This regularization is done to prevent distribution shift, a significant problem in offline RL, where the difference between the learned policy and the behavior policy can lead to erroneous Q-value estimation of state-action pairs not seen in the dataset (Kumar et al., 2020; Levine et al., 2020). In this work, we use TD3-BC (Fujimoto and Gu, 2021) as our candidate offline RL algorithm for federation. Our choice is motivated by the simplicity of the TD3-BC approach and its superior empirical performance in benchmark problems. The TD3-BC algorithm is a behavior cloning (BC) regularized version of the TD3 algorithm (Fujimoto et al., 2018). The policy in TD3-BC is updated using a linear combination of the TD3 objective and a behavior cloning loss, where the TD3 objective ensures policy improvement and the BC loss prevents distribution shift.
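As a rough code-level illustration, ahead of the formal statement in Eq. (3)-(4) below, one gradient step on the TD3-BC actor might look as follows; the network architectures, dimensions, and the value of `lam` are placeholders we introduce for illustration, not those of the original implementation.

```
import torch
import torch.nn as nn

state_dim, action_dim, lam = 3, 2, 2.5
actor = nn.Sequential(nn.Linear(state_dim, action_dim), nn.Tanh())
q_net = nn.Linear(state_dim + action_dim, 1)   # stand-in critic

def td3_bc_actor_loss(states, actions):
    # Negative of U_D(pi): maximize lam * Q(s, pi(s)) while keeping
    # pi(s) close to the dataset action a (behavior-cloning term).
    pi_s = actor(states)
    q = q_net(torch.cat([states, pi_s], dim=-1)).squeeze(-1)
    bc = ((pi_s - actions) ** 2).sum(dim=-1)
    return (-lam * q + bc).mean()

s, a = torch.randn(8, state_dim), torch.randn(8, action_dim)
td3_bc_actor_loss(s, a).backward()   # one actor gradient step
```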
More precisely, the TD3-BC objective can be written as \[\pi =\operatorname*{arg\,max}_{\pi}\ U_{\mathcal{D}}(\pi), \tag{3}\] \[\text{where}\quad U_{\mathcal{D}}(\pi) =\mathbb{E}_{s,a\sim\mathcal{D}}\left[\lambda Q^{\pi}(s,\pi(s))-( \pi(s)-a)^{2}\right], \tag{4}\] and \(\lambda\) is a hyperparameter that determines the relative weight of the BC term.

## 4 Federated Offline Reinforcement Learning

In real-world offline RL applications, the data is typically obtained from the operational policies of multiple agents (clients) with different levels of expertise. Often, clients prefer to keep their operational data local without sharing it with others. A federated offline RL algorithm aims to learn the optimal policy for the underlying RL problem using the available offline data without the clients sharing their data with one another or the server. We denote the set of clients as \(\mathcal{N}\). Each client \(i\in\mathcal{N}\) has an offline dataset \(\mathcal{D}_{i}=\{(s_{j},a_{j},r_{j},s^{\prime}_{j})\}_{j=1}^{m_{i}}\) generated according to a behavior policy \(\pi_{i}^{b}\). We assume that the underlying MDP model \(P\) and reward function \(R(\cdot,\cdot)\) are identical for all the clients, and the statistical differences between the offline datasets \(\mathcal{D}_{i}\) are only due to the difference in the behavior policies \(\pi_{i}^{b}\) used for collecting the data. In a standard federated learning algorithm such as FedAvg, each client performs multiple parameter updates before sending its parameters to the server. It is known that performing multiple local updates in federated learning can reduce the communication cost significantly without compromising on the optimality of the converged solution (Kairouz et al., 2021; Wang et al., 2021). In federated offline RL, since each client has to perform multiple steps of policy evaluation and policy update using its local offline data \(\mathcal{D}_{i}\), it is reasonable to consider a client objective function that is consistent with a standard offline RL algorithm objective. We choose the objective function used in the TD3-BC algorithm (Fujimoto and Gu, 2021), i.e., \(U_{\mathcal{D}_{i}}\) given in Eq. (4), as the client objective function. Our choice is motivated by the simplicity of the TD3-BC objective function and its empirical success in a variety of environments. Similar to the standard federated learning objective given in Eq. (1), we can now define the federated offline RL objective as \[U(\pi_{\text{fed}})=\sum_{i=1}^{|\mathcal{N}|}w_{i}U_{\mathcal{D}_{i}}(\pi_{ \text{fed}}), \tag{5}\] where \(w_{i}\) are weights to be determined.

One approach to leveraging experiences across users without sharing data would be to combine existing federated learning techniques with offline RL algorithms. _Is such a naive federation strategy sufficient to learn an excellent federated policy collaboratively? Furthermore, is federation even necessary?_ In this section, we aim to understand the challenges of federated offline RL with the goal of designing an algorithmic framework to address these challenges. We start with an example illustrating the issues in designing a federated offline RL algorithm. We consider the Hopper environment from MuJoCo (Todorov et al., 2012), with \(|\mathcal{N}|=10\), \(|\mathcal{D}_{i}|=5000\), and we use the data from the D4RL dataset (Fu et al., 2020).
However, instead of using the data generated by the same policy for all clients, we consider the setting where five clients use the data from the hopper-expert-v2 dataset (which was generated using a completely trained (expert) SAC policy) and five clients use the data from the hopper-medium-v2 dataset (which was generated using a partially trained (medium) policy achieving only a third of the expert performance). The clients and the server are unaware of the quality (expert or medium) of the data. Fig. 1 shows the performance comparison of multiple algorithms, where the mean and the standard deviation are calculated over 4 seeds.

**Combining All Data (Centralized):** Combining the data and learning centrally is the ideal scenario in supervised learning. However, as seen in Fig. 1, performing centralized training by combining data generated using different behavior policies in offline RL can be detrimental. Sharing data can exacerbate the distributional shift between the learned policy and the individual datasets, leading to divergence of the learned policy and causing poor performance. We note that such performance deterioration due to combining data from behavior policies with different expertise levels is observed in the offline RL literature (Yu et al., 2021; Fujimoto and Gu, 2021; Kumar et al., 2020).

**Individual Offline RL:** Here, agents apply offline RL to their own datasets without collaborating with others. In Fig. 1, we observe that clients with either expert or medium data do not learn well and exhibit a large standard deviation. This observation may be attributed to no client having sufficient data to learn a good policy.

**Naive Federated Offline RL:** A simple federation approach is to use the offline RL objective as the local objective and apply FedAvg (Eq. (2)). However, offline RL algorithms typically comprise two components - an actor and a critic. It is unclear a priori which components should be federated, so we conduct experiments where we federate only the actor (Fed-A) or both the actor and the critic (Fed-AC). Surprisingly, these naive strategies result in federated policies that perform worse than individual offline RL, as witnessed in Fig. 1.

### Issues with Federated Offline RL

Our example illustrates several fundamental issues that must be addressed while designing viable federated offline RL algorithms, including:

**1. Ensemble Heterogeneity:** Performing offline RL over heterogeneous data yields a set of policies of different qualities. It is crucial to leverage the information contained in these varied policies rather than simply averaging them. However, federation after a single-step local gradient at each client using weights in the manner of FedAvg, \(w_{i}=|\mathcal{D}_{i}|/\sum_{i=1}^{|\mathcal{N}|}|\mathcal{D}_{i}|\), is equivalent to solving the offline RL problem using the combined dataset of all clients (Wang et al., 2021). This approach leads to poor performance due to the resulting distribution shift, as shown in Fig. 1. How should we optimally federate the ensemble of policies learned by the clients?

**2. Pessimistic Value Computation:** Most offline RL algorithms involve a pessimistic term with respect to the offline data for minimizing the distribution shift. Training a client critic using only the local data with this pessimistic term could make it pessimistic towards actions poorly represented in its dataset but well represented in other clients' data.
How do we effectively utilize the federated critic along with the locally computed critic to set ambitious targets for offline RL at each client?

**3. Data Heterogeneity:** Federated learning calls for performing multiple local gradient steps at each client before federation to enhance communication efficiency. However, numerous epochs would bias a client's local model to its dataset. This client drift effect is well known in federated (supervised) learning and could lead to policies that are not globally optimal. In turn, this could cause the federated policy's performance to be worse than training locally using only the client's data, as seen in Fig. 1. How should we regularize local policies to prevent this?

Figure 1: Performance comparison of federated and centralized offline RL algorithms.

## 5 FEDORA Design Approach

We desire to develop a Federated Ensemble-Directed Offline RL Algorithm (FEDORA) that addresses the issues outlined in Section 4 in a systematic manner. Three fundamental requirements drive our approach. First, the clients jointly possess an ensemble of local policies of different levels of quality, and the server must leverage the collective knowledge embedded in this ensemble during federation. Second, the quality of these policies must be assessed using an ensemble of critics that depend on local data for policy evaluation. Finally, after each round of federation, clients must update their local policies via offline RL utilizing both their local data and the received federated policy.

We begin by considering the objective of ensemble learning at federation. We define \(p_{i}\) as a measure of the relative priority of the local policy obtained from client \(i\). Then, the weights \(w_{i}\) in the federated offline RL objective (Eq. (5)) can be defined as \[w_{i}=\frac{p_{i}|\mathcal{D}_{i}|}{|\mathcal{D}|},\quad|\mathcal{D}|=\sum_{i=1 }^{|\mathcal{N}|}p_{i}|\mathcal{D}_{i}|. \tag{6}\] Based on this objective, we now design FEDORA accounting for each of the three requirements indicated above. In what follows, we will denote the round of federation using a superscript \(t\) and the local update step within a round of federation by the superscript \(k\). More precisely, \(\pi^{(t,k)}_{i}\) denotes the policy of client \(i\) in round \(t\) of federation after \(k\) local policy updates. Since all clients initialize their local policies to the federated policy at the beginning of each round of federation, \(\pi^{(t,0)}_{i}=\pi^{t}_{\text{fed}}\) for each client \(i\). We also denote \(\pi^{t}_{i}=\pi^{(t,K)}_{i}\), where \(K\) is the maximum number of local updates. Since all clients initialize their local critics to the federated critic, we can similarly define \(Q^{(t,k)}_{i}\), \(Q^{(t,0)}_{i}=Q^{t}_{\text{fed}}\), and \(Q^{t}_{i}=Q^{(t,K)}_{i}\) for the local critic.

### Ensemble-Directed Learning over Client Policies

As seen in Fig. 1, naively federating offline RL results in policies that perform worse than learning on aggregated data, which is itself sub-optimal (and not feasible in a federated setting). Furthermore, simply averaging over the clients' policies without considering their qualities implies that low-quality policies will have the same influence on the federated policy as high-quality ones. Such an averaging technique could lead to a scenario where the federated policy's performance is worse than training locally using only a client's data.
We desire an ensemble learning approach that continually refines the federated policy based on the relative merits of the client policies over multiple rounds of federation. The figure of merit for a given client \(i\)'s local policy at round \(t\), \(\pi^{t}_{i}\), is its infinite horizon discounted reward \(J\left(\pi^{t}_{i}\right)\). However, while obtaining an unbiased estimate of the performance of a policy in the online RL setting is straightforward, it is hard to do so in offline RL since we do not have access to the environment for executing the policy and evaluating its performance. Therefore, we use the offline data to get a proxy estimate \(J^{t}_{i}\) for the quality of the final local policy \(\pi^{t}_{i}\) as \(J^{t}_{i}=\mathbb{E}_{s\sim\mathcal{D}_{i}}\left[Q^{t}_{i}(s,\pi^{t}_{i}(s)) \right],\) where \(Q^{t}_{i}\) is the local critic function at round \(t\) after \(K\) local updates (the computation of \(Q^{t}_{i}\) and \(\pi^{t}_{i}\) is described later). We then compute the relative priority \(p^{t}_{i}\) of client \(i\) at round \(t\) using a soft-max function as \[p^{t}_{i}=\frac{e^{\beta J^{t}_{i}}}{\sum_{i=1}^{|\mathcal{N}|}e^{\beta J^{t} _{i}}}, \tag{7}\] where \(\beta\) is a tunable temperature parameter. The weights \(w_{i}\)s in the federated offline RL objective (Eq. (5)) can now be defined as in Eq. (6) using the relative priorities \(p^{t}_{i}\)s. Finally, the server updates the federated policy as \[\pi^{t+1}_{\text{fed}}=\sum_{i=1}^{|\mathcal{N}|}w^{t}_{i}\pi^{t}_{i}. \tag{8}\]

### Federated Optimism for Critic Training

The critic in our algorithm plays two major roles. First, offline RL for policy updates at each client requires policy evaluation using local data. Second, policy evaluation by the critic determines the priority \(p^{t}_{i}\) of the local policy at client \(i\) for ensemble learning during each round \(t\) of federation. We desire a local critic at each client that can utilize the knowledge from the ensemble of critics across all clients while also being tuned to the local data used for policy evaluation. A critic based on offline data suffers from extrapolation errors, as state-action pairs not seen in the local dataset will be erroneously estimated, greatly impacting actor-critic style policy updates in federated offline RL. Since the federated policy is derived from the set of local policies, it may take actions not seen in any client's local dataset. This problem is exacerbated when the local policy at the beginning of each communication round is initialized to the federated policy. We introduce the notion of _federated optimism_ to train local critics, wherein critics leverage the wisdom of the crowd and are encouraged to be optimistic. We achieve this federated optimism via two steps. First, we use an ensemble-directed federation of the critics, where the local critic of client \(i\) at round \(t\) is weighed according to its merit to compute the federated critic as \[Q_{\text{fed}}^{t+1}=\sum_{i=1}^{|\mathcal{N}|}w_{i}^{t}Q_{i}^{t}. \tag{9}\] Such a priority-weighted averaging ensures that the critics from clients with good policies significantly influence the federated critic.
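Before we turn to the second step, a compact sketch of this ensemble-directed federation (Eq. (6)-(9)) is given below; treating policies and critics as flat parameter vectors, and the particular values of `beta` and the proxy returns, are our own simplifications for illustration.

```
import numpy as np

def priority_weights(J, dataset_sizes, beta=0.1):
    # Eq. (7): soft-max priorities over proxy returns J_i (shifted by
    # max(J) for numerical stability), combined with |D_i| as in Eq. (6).
    J = np.asarray(J, dtype=float)
    p = np.exp(beta * (J - J.max()))
    p /= p.sum()
    w = p * np.asarray(dataset_sizes, dtype=float)
    return w / w.sum()

def federate(params, weights):
    # Eq. (8)/(9): priority-weighted average of flattened parameters.
    return np.einsum("i,ij->j", weights, np.stack(params))

w = priority_weights(J=[120.0, 40.0, 95.0], dataset_sizes=[5000] * 3)
pi_fed = federate([np.ones(4), np.zeros(4), 2 * np.ones(4)], w)
q_fed = federate([np.ones(4), np.ones(4), 0.5 * np.ones(4)], w)
```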
Second, for the local critic update, we choose the target value as the maximum value between the local critic and the federated critic, given by \[\tilde{Q}_{i}^{(t,k)}(s,a)=\max\left(Q_{i}^{(t,k)}(s,a),Q_{\text{fed}}^{t}(s,a )\right), \tag{10}\] where \(\tilde{Q}_{i}^{(t,k)}(s,a)\) is the target value of state \(s\) and action \(a\) at the \(t^{\text{th}}\) round of federation after \(k\) local critic updates. This ensures that the local critic has an optimistic (but likely feasible) target seen by the system. Using this optimistic target in the Bellman error, we update the local critic as \[Q_{i}^{(t,k+1)}=\underset{Q}{\arg\min}\ \mathbb{E}_{(s,a,r,s^{\prime}) \sim\mathcal{D}_{i}}[(r+\gamma\tilde{Q}_{i}^{(t,k)}(s^{\prime},a^{\prime})-Q(s,a))^{2}], \tag{11}\] where \(a^{\prime}=\pi_{i}^{(t,k)}(s^{\prime})\). In practice, instead of solving the entire optimization problem described above, we obtain \(Q_{i}^{(t,k+1)}\) after a single gradient update w.r.t. the objective function.

### Proximal Policy Update for Heterogeneous Data

While essential for setting ambitious targets, an optimistic critic might erroneously estimate the value \(\tilde{Q}_{i}^{(t,k)}\). Therefore, regularizing the local policy update w.r.t. both the local data and the federated policy is crucial. For regularization w.r.t. the local offline data, we use the same method as in the TD3-BC algorithm and define the local loss function \(\mathcal{L}_{\text{local}}\) as \[\mathcal{L}_{\text{local}}(\pi)=\mathbb{E}_{(s,a)\sim\mathcal{D}_{i}}[-Q_{i}^ {(t,k)}(s,\pi(s))+(\pi(s)-a)^{2}]. \tag{12}\] We then define the actor loss \(\mathcal{L}_{\text{actor}}\) as \[\mathcal{L}_{\text{actor}}(\pi)=\mathcal{L}_{\text{local}}(\pi)+\mathbb{E}_{( s,a)\sim\mathcal{D}_{i}}[(\pi(s)-\pi_{\text{fed}}^{t}(s))^{2}], \tag{13}\] where the second term is a regularization w.r.t. the federated policy. The local policy is updated using \(\mathcal{L}_{\text{actor}}\) as \[\pi_{i}^{(t,k+1)}=\underset{\pi}{\arg\min}\ \mathcal{L}_{\text{actor}}(\pi). \tag{14}\] Similar to the local critic update, we perform only one gradient update w.r.t. \(\mathcal{L}_{\text{actor}}\) to obtain \(\pi_{i}^{(t,k+1)}\), instead of solving the complete optimization problem above.

### Decaying the Influence of Local Data

FEDORA uses a combination of a local data loss and a proximal term for its policy update, Eq. (12)-(14). However, the local data loss might hamper the updated policy's performance, since the local dataset may be generated according to a non-expert behavior policy. Hence, if the local data is reducing the performance of the updated policy, the client must decay its influence by lowering the weight of \(\mathcal{L}_{\text{local}}\) in \(\mathcal{L}_{\text{actor}}\). To do so, we first evaluate the performance of the federated policy using the federated critic and local data at round \(t\). For this evaluation, we use the proxy estimate \(J_{i}^{\text{fed},t}=\mathbb{E}_{s\sim\mathcal{D}_{i}}\left[Q_{\text{fed}}^{ t}(s,\pi_{\text{fed}}^{t}(s))\right]\). We compare this value with the performance of the updated policy, \(J_{i}^{t}\), which is obtained using the updated critic. This difference provides us with an estimate of the improvement the local data provides. We decay the influence of \(\mathcal{L}_{\text{local}}\) by a factor \(\delta\) if \(J_{i}^{\text{fed},t}\geq J_{i}^{t}\). We summarize FEDORA in Algorithms 1 and 2; a code-level sketch of the client update is given below.
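Complementing the pseudocode in Algorithm 1 that follows, one local update step (Eq. (10)-(14)) might be sketched as below; the networks, dimensions, and the decaying weight `w_local` are placeholders we introduce for illustration.

```
import torch
import torch.nn as nn

s_dim, a_dim, gamma = 3, 2, 0.99
actor = nn.Sequential(nn.Linear(s_dim, a_dim), nn.Tanh())
critic = nn.Linear(s_dim + a_dim, 1)
fed_actor = nn.Sequential(nn.Linear(s_dim, a_dim), nn.Tanh())  # pi_fed^t, frozen
fed_critic = nn.Linear(s_dim + a_dim, 1)                       # Q_fed^t, frozen

def q(net, s, a):
    return net(torch.cat([s, a], dim=-1)).squeeze(-1)

def local_losses(s, a, r, s2, w_local=1.0):
    # Critic: optimistic Bellman target, Eq. (10)-(11).
    with torch.no_grad():
        a2 = actor(s2)
        target = r + gamma * torch.max(q(critic, s2, a2), q(fed_critic, s2, a2))
    critic_loss = ((q(critic, s, a) - target) ** 2).mean()
    # Actor: local TD3-BC term plus proximal term to pi_fed, Eq. (12)-(13);
    # w_local is decayed by delta whenever J_i^{fed,t} >= J_i^t.
    pi_s = actor(s)
    l_local = (-q(critic, s, pi_s) + ((pi_s - a) ** 2).sum(-1)).mean()
    prox = ((pi_s - fed_actor(s)) ** 2).sum(-1).mean()
    return critic_loss, w_local * l_local + prox
```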
```
function train_client(\(\pi_{\text{fed}}^{t}\), \(Q_{\text{fed}}^{t}\))
    \(\pi_{i}^{(t,0)}=\pi_{\text{fed}}^{t}\),  \(Q_{i}^{(t,0)}=Q_{\text{fed}}^{t}\)
    for \(1\leq k<K\) do
        Update the critic by one gradient step w.r.t. Eq. (11)
        Update the actor by one gradient step w.r.t. Eq. (14)
    end for
    Decay \(\mathcal{L}_{\text{local}}\) by \(\delta\) if \(J_{i}^{\text{fed},t}\geq J_{i}^{t}\)
end function
```
**Algorithm 1** FEDORA: Outline of Client \(i\)'s Algorithm

## 6 Experimental Evaluation

We conduct experiments to answer three broad questions: **(i) Comparative Performance:** How does FEDORA perform compared to other approaches with client data generated by heterogeneous behavior policies?, **(ii) Sensitivity to client updates and data quality:** How does the performance depend on the number of local gradient steps at clients, the randomness in the available number of agents for federation, and the quality of the data at the clients?, and **(iii) Ablation:** How does the performance depend on the different components of FEDORA?

### Implementing FEDORA

We implement FEDORA over the Flower federated learning platform (Beutel et al., 2020), which supports learning across devices with heterogeneous software stacks, compute capabilities, and network bandwidths. Flower manages all communication across clients and the server and permits us to implement the custom server-side and client-side algorithms of FEDORA easily. However, since Flower is aimed at supervised learning, it only transmits and receives a single model at each federation round, whereas we desire to federate both policy and critic models. We work around this limitation by simply appending both models together, packing and unpacking them at the server and client sides appropriately; a sketch of this idea is given below. While 'FEDORA-over-Flower' is an effective solution for working across distributed compute resources, we also desire a simulation setup that can be executed on a single machine. This approach sequentially executes FEDORA at each selected client, followed by a federation step, thereby allowing us to evaluate the different elements of FEDORA in an idealized federation setup.
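One possible way to realize the packing of actor and critic parameters into a single transmitted model, as described above, is sketched here; this is our own illustration of the idea, not Flower's actual API.

```
import numpy as np

def pack(actor_params, critic_params):
    # Append actor and critic parameter arrays into one flat vector,
    # remembering the shapes and how many arrays belong to the actor.
    actor_params, critic_params = list(actor_params), list(critic_params)
    arrays = actor_params + critic_params
    flat = np.concatenate([p.ravel() for p in arrays])
    shapes = [p.shape for p in arrays]
    return flat, shapes, len(actor_params)

def unpack(flat, shapes, n_actor):
    # Invert pack(): rebuild the arrays and split actor from critic.
    out, i = [], 0
    for shp in shapes:
        n = int(np.prod(shp))
        out.append(flat[i:i + n].reshape(shp))
        i += n
    return out[:n_actor], out[n_actor:]
```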
### Methodology

**Baselines:** We consider the following baselines. **(i) Fed-A: Naive Actor Federation:** The local objective of all clients follows TD3-BC (Eq. (4)). The server performs FedAvg over the actor's parameters, whereas each client learns the critic locally. **(ii) Fed-AC: Naive Actor-Critic Federation:** The local objective of all clients follows TD3-BC, and the server performs FedAvg over the parameters of both the actor and the critic. **(iii) Fed-AC-Prox:** We add a proximal term to Fed-AC, which has been shown to help in federated supervised learning when clients have heterogeneous data (Li et al., 2020). **(iv) Heterogeneous Data-Aware Federated Learning (HDAFL):** We extend HDAFL (Yang et al., 2020) to the offline RL setting by dividing the actor network into generic and client-specific parts and then federating only the generic part during each round. **(v) Centralized:** We perform offline RL over the pooled data by combining the data present in all clients and performing TD3-BC.

**Experimental Setup:** We focus on a scenario where clients are collaboratively learning to solve the same task, but the behavior policies used to collect data for each client could differ. We run experiments with the number of clients \(|\mathcal{N}|=50\), with each client having a local dataset of size \(|\mathcal{D}_{i}|=5000\). Of these 50 clients, 25 are provided with data from the D4RL (Fu et al., 2020) expert dataset, while the other 25 are provided with data from the D4RL medium dataset. The clients (and the server) are unaware of the nature of their datasets. We choose \(|\mathcal{N}_{t}|=20\) clients at random to participate in each round \(t\) of federation. For each plot, we evaluate the performance with four different seeds. The solid lines or bars in the plots correspond to the mean estimate, and the shaded region in the plots shows the estimate's standard deviation. We evaluate the performance of FEDORA and the baselines over three MuJoCo tasks: Hopper, HalfCheetah, and Walker2D. During a round of federation, each client performs 20 epochs of local training in all algorithms, which is roughly 380 local gradient steps in our experimental setup. We also show results on the CityLearn (Vazquez-Canteli et al., 2020) environment, which aims for urban-scale energy management based on data collected from a collection of residential buildings. Finally, we demonstrate the performance of FEDORA in the real world using a TurtleBot, a two-wheeled differential drive robot, on an obstacle avoidance problem.

### Comparative Performance of FEDORA

In Fig. 3, we plot the cumulative episodic reward of the server/federated policy during each round of communication/federation. We observe that FEDORA outperforms all federated baselines and achieves performance equivalent to or better than centralized training. Furthermore, the federated baselines fail to learn a good server policy even after training for many communication rounds and plateau at lower levels compared to FEDORA, emphasizing that the presence of heterogeneous data hurts their performance. In the Hopper environment, FEDORA's performance exceeds the centralized training baseline.

Figure 2: Comparison of FEDORA and centralized training under heterogeneous data.

To further understand the effect of data coming from multiple behavior policies on centralized training, we consider a scenario where 50 clients with datasets of size \(|\mathcal{D}_{i}|=5000\) participate in federation, with 25 clients having expert data and the other 25 having random data, i.e., data generated from a random policy. We compare the performance of FEDORA with the centralized setting. From Fig. 2, we notice that combining the data of all clients deteriorates the performance of the centralized policy. This observation highlights the fact that performing centralized training with data collected using multiple behavior policies can be detrimental.

### Sensitivity to Client Updates and Data Quality

Communication efficiency is a crucial aspect of a good federated learning algorithm. Ideally, the federated policy should achieve peak performance within a small number of communication rounds. An intriguing question is whether increasing the number of local epochs to further train each client's policy before federation could improve communication efficiency. However, multiple local epochs can be detrimental in the presence of heterogeneous data, leading to client drift and resulting in a highly heterogeneous set of client policies. To explore this question, we conduct experiments studying the effect of varying the number of local epochs performed in a federation round on the algorithm's performance in the Hopper environment, as shown in Fig. 4(a).
We observe that increasing the number of epochs leads to faster learning, and for epochs greater than 5, our algorithm learns within 1000 communication rounds. This result suggests that FEDORA can effectively extract wisdom from a heterogeneous ensemble of policies. Not all clients may participate in every round of federation, since they may be located on distributed compute resources that are only intermittently accessible. Participation of only a subset of clients would vary both the quality of the ensemble of policies and the number of model updates available to the federating agent. We evaluate how FEDORA learns under various fractions of clients participating in each round of federation and illustrate the results in Fig. 4(b). The plots show that the fraction of clients has only minor effects on the performance of the global policy. This finding leads us to conclude that FEDORA is fairly robust towards variations in the fraction of clients during federation.

Figure 4: Effect of varying the number of (a) local gradient steps, (b) participating clients in each round, and (c) expert clients in FEDORA.

Figure 3: Evaluation of algorithms on different MuJoCo environments.

Finally, we consider heterogeneity in the quality of data available to the clients, affecting their ability to learn policies from their local datasets effectively. We conduct experiments where we vary the percentage of clients with expert datasets participating in federation. From the results presented in Fig. 4(c), we observe that FEDORA performs well even when only 20% of the total clients have expert-quality data.

### FEDORA Ablation Study

We present ablation studies in the appendix due to space constraints. In summary, ensemble learning across the federated policies is the most crucial component of FEDORA, as expected, followed by federated optimism of the critic. Other elements contribute incrementally to FEDORA's strong final performance, as seen in Appendix B, Fig. 7.

### Federated Offline RL with CityLearn Data

Real-world environments often have a large state space and are stochastic in nature. We run federated experiments on CityLearn (Vazquez-Canteli et al., 2020) to assess the effectiveness of FEDORA on such large-scale systems. The motivation to evaluate on CityLearn and the details of our experiments involving 10 clients are shared in Appendix C. In Appendix C, Fig. 9, it can be observed that FEDORA outperforms other federated offline RL algorithms as well as centralized training, which learns using TD3-BC on the data aggregated from every client. These findings indicate that FEDORA can perform well in large-scale stochastic environments.

### Real-World Experiments

We evaluated the performance of FEDORA on TurtleBot (Amsters and Slaets, 2020), a two-wheeled differential drive robot shown in Fig. 5. The goal is to collaboratively learn a control policy to navigate waypoints while avoiding obstacles using offline data distributed across multiple robots (clients). This scenario is relevant to several real-world applications, such as cleaning robots in various houses, which aim to collaboratively learn a control policy to navigate and avoid obstacles using data distributed across different robots. Collaborative learning is essential in this setting because a single robot might not have enough data to learn from or might not have encountered sufficiently diverse scenarios. Additionally, federated learning overcomes the privacy concerns associated with sharing data among the robots.
In our setup, 20 clients participate in the learning process, each with a dataset consisting of 4 trajectories. We collect data using four behavior policies with varying levels of expertise, as seen in Fig. 6(a), and each robot's data is obtained from a single behavior policy. The training occurs over 100 communication rounds, each consisting of 20 local epochs. Fig. 6(b) shows the trajectories obtained by the learned policies of the different algorithms, and only FEDORA is able to successfully complete the trajectory without colliding with the obstacle. Fig. 6(c) shows the cumulative reward comparison, and FEDORA clearly outperforms the other baselines. We provide more details on this robot experimental setup in Appendix D.

Figure 5: TurtleBot3 Burger.

## 7 Conclusion

We presented an approach for federated offline RL, accounting for the heterogeneity in the quality of the ensemble of policies that generated the data at the clients. We solved multiple challenging issues by systematically developing an empirically well-performing ensemble-directed approach entitled FEDORA, which extracts the collective wisdom of the policies and critics and discourages excessive reliance on irrelevant local data. We believe that our approach likely has value even in the federated supervised learning context when the data is drawn from sources of variable quality, which we intend to explore formally in the future.
2301.11164
A Graph Neural Network with Negative Message Passing for Graph Coloring
Graph neural networks have received increased attention over the past years due to their promising ability to handle graph-structured data, which can be found in many real-world problems such as recommender systems and drug synthesis. Most existing research focuses on using graph neural networks to solve homophilous problems, but little attention has been paid to heterophily-type problems. In this paper, we propose a graph network model for graph coloring, which is a class of representative heterophilous problems. Different from conventional graph networks, we introduce negative message passing into the proposed graph neural network for more effective information exchange in handling graph coloring problems. Moreover, a new loss function taking into account the self-information of the nodes is suggested to accelerate the learning process. Experimental studies are carried out to compare the proposed graph model with five state-of-the-art algorithms on ten publicly available graph coloring problems and one real-world application. Numerical results demonstrate the effectiveness of the proposed graph neural network.
Xiangyu Wang, Xueming Yan, Yaochu Jin
2023-01-26T15:08:42Z
http://arxiv.org/abs/2301.11164v1
# A Graph Neural Network with Negative Message Passing for Graph Coloring

###### Abstract

Graph neural networks have received increased attention over the past years due to their promising ability to handle graph-structured data, which can be found in many real-world problems such as recommender systems and drug synthesis. Most existing research focuses on using graph neural networks to solve homophilous problems, but little attention has been paid to heterophily-type problems. In this paper, we propose a graph network model for graph coloring, which is a class of representative heterophilous problems. Different from conventional graph networks, we introduce _negative message passing_ into the proposed graph neural network for more effective information exchange in handling graph coloring problems. Moreover, a new loss function taking into account the self-information of the nodes is suggested to accelerate the learning process. Experimental studies are carried out to compare the proposed graph model with five state-of-the-art algorithms on ten publicly available graph coloring problems and one real-world application. Numerical results demonstrate the effectiveness of the proposed graph neural network.

graph neural networks, graph coloring, negative message passing, self-information, aggregator

## I Introduction

Different types of data, such as traditional unordered data, time-series structured data, and graph-structured data, may be encountered in solving real-world optimization problems. Graph-structured data contains rich relationship information between the attributes, making it more challenging to effectively learn the knowledge in the data using traditional machine learning models. Recently, graph neural networks (GNNs) have become extremely popular in the machine learning community, thanks to their strong ability to capture relational information in the data. Many different types of GNNs have been proposed, and they can usually be categorized according to their different aggregation methods, such as the graph convolutional network (GCN) [1], GraphSAGE [2], and the graph attention network (GAT) [3], among others. Since GNNs enable the nodes to aggregate their neighbors' information, they have become a powerful tool for solving problems in social networking [4], bioinformatics [5], and community detection [6], to name a few. Node classification is often formulated as a binary or multi-class classification problem that takes the features of nodes and edges as input and outputs each node's probability of belonging to different classes. During the optimization process, each node combines its own information and that of its neighbors to generate embedding vectors. By representing node and edge features in a higher-dimensional space, embedding vectors are used to solve problems by downstream machine learning tasks in a non-autoregressive or autoregressive way [7]. However, most of the optimization problems mentioned above are continuous and homophilous. That is, two nodes connected by an edge tend to be very similar, i.e., have similar embedding vectors. These homophilous problems described as graphs have many real-world applications and have been studied extensively [8, 9, 10]. On the contrary, some problems exhibit the heterophily property [11], where connected nodes should have embeddings that are as different as possible. For example, graph coloring problems (GCPs) are a class of classical heterophilous problems, because connected nodes are required to have different colors.
GCPs have attracted increased research interest since many real-world applications can be formulated as graph coloring problems, such as arranging timetables and schedules, managing air traffic flow, and many others [12]. In fact, GCPs, like most combinatorial optimization problems (COPs), are NP-hard, and obtaining exact optimal solutions requires highly intensive computation. Luckily, some approaches have been proposed to find approximate optimal solutions of COPs with the help of GNNs within an acceptable period of computation time [13, 14, 15]. For example, Liu _et al._ [16] proposed two supervised residual gated GCNs to directly predict the entire Pareto set for multi-objective facility location problems. GNNs have inherent advantages in representing GCPs and solving such graph-structured problems, as samples (nodes) can aggregate information from their neighbors. Recently, some efforts have been made to apply GNNs to solve GCPs, and differently structured graph neural networks have been proposed. GNN-GCP [17] uses a network containing a GNN to predict the chromatic number of a given graph and uses the generated embedding vectors for clustering to obtain a color assignment. Li _et al._ [18] explored several rules for using a GNN to solve GCPs, and proposed a graph neural network called GDN. PI-GNN [19] is inspired by the Potts model, and it is shown to perform well when combined with a novel loss function. The above algorithms basically follow the classical network structures, which were proposed to solve homophilous problems such as node classification and link prediction [20]. However, these structures may not necessarily be suited for solving GCPs, and improvements should be made in the structural design to better adapt to the heterophilous characteristics of GCPs. For this reason, a negative message passing strategy is proposed considering the requirements of GCPs, i.e., that two connected nodes should have embedding vectors that are as different as possible. Meanwhile, a color assignment with no conflicts and stable convergence during the training process are both desired when solving GCPs. Therefore, a loss function consisting of a utility objective and a convergence objective is proposed to guarantee the good performance of the proposed neural network and a stable training process.

Section II presents the preliminaries of this work, including the definition of graph coloring problems, the aggregation methods of classical GNNs, and the related work that uses GNNs to solve graph coloring problems. In Section III, the proposed graph network model with negative message passing and a new loss function is detailed. Section IV describes the experimental results on the benchmark graph coloring problems and a real-world problem. Ablation studies and an analysis of the computational complexity are also presented. Finally, conclusions and future work are given in Section V.

## II Preliminaries

### _Definition of Graph Coloring Problems_

We consider an undirected graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), where \(\mathcal{V}=\{1,2,\ldots,n\}\) is the node set, and \(\mathcal{E}\) is the edge set connecting two nodes, represented by \((u,v)\in\mathcal{E}\). The adjacency matrix \(A\) of a graph \(\mathcal{G}\) also represents the connections between the nodes. When nodes \(i\) and \(j\) are connected, \(A(i,j)\) and \(A(j,i)\) are equal to 1; otherwise, the elements in these two positions of \(A\) are equal to 0.
Graph coloring problems [21, 22, 23] aim to minimize the number of conflicts while using the minimum number of colors. Mathematically, GCPs can be formulated in two different forms. In the first formulation, GCPs are defined as a constrained minimization problem as follows: \[\min k, \tag{1}\] \[\mathrm{s.t.}\sum_{(u,v)\in\mathcal{E}}f(u,v)=0,\] where \(f(u,v)\) represents the clash between neighboring nodes \(u\) and \(v\), and \((u,v)\in\mathcal{E}\). If \(u\) and \(v\) share the same color, \(f(u,v)=1\); otherwise, \(f(u,v)=0\). \(k\) is the number of colors used in the graph. If \(k\) colors can fill a graph with no clash, this graph is called \(k\)-colorable. The chromatic number \(\mathcal{X}(\mathcal{G})\) is the minimum of \(k\), denoting the optimal number of colors that does not result in any conflicts in \(\mathcal{G}\). Alternatively, GCPs can also be expressed as a constraint satisfaction problem with a given number of colors \(k\): \[\min\sum_{(u,v)\in\mathcal{E}}f(u,v),\quad\mathrm{s.t.}\ \text{the number of available colors is }k. \tag{2}\] This formulation aims to minimize the clashes in a graph, given that the number of colors that can be used is \(k\). The optimization problems defined in Eq. (1) and Eq. (2) are slightly different, and they may be suited to different requirements or priorities in solving the same problem. Take the assignment of taxis to customer requests as an example. In Eq. (1), the customer requests are the top priority, and a minimal number of taxis should be found. In this case, the assumption is that there is a sufficient number of taxis. However, if the number of taxis is limited during the peak period, the assignment that can satisfy the majority of customer requests would be preferred. This work will focus on minimizing the number of total conflicts under a certain given number of colors, i.e., the GCP will be solved according to the formulation in Eq. (2).
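As a concrete illustration of the objective in Eq. (2), the following sketch counts the color conflicts of a candidate assignment from the adjacency matrix; the small four-node graph is our own toy example, unrelated to the graph in Fig. 1 below.

```
import numpy as np

def num_conflicts(A, colors):
    # Objective of Eq. (2): number of edges whose endpoints share a color.
    same = colors[:, None] == colors[None, :]
    return int(np.triu(A * same, k=1).sum())   # count each edge once

# Toy 4-cycle graph: 0-1, 1-2, 2-3, 3-0.
A = np.zeros((4, 4), dtype=int)
for u, v in [(0, 1), (1, 2), (2, 3), (3, 0)]:
    A[u, v] = A[v, u] = 1

print(num_conflicts(A, np.array([0, 1, 0, 1])))  # 0: a proper 2-coloring
print(num_conflicts(A, np.array([0, 0, 1, 1])))  # 2 conflicting edges
```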
An illustrative example consisting of six nodes and seven edges is given in Fig. 1. In Fig. 1(a), three colors are used, and no connected neighboring nodes have the same color, meaning that this is an optimal solution to the given example.

Fig. 1: Example solutions to graph coloring with six nodes and seven edges. (a) An optimal solution for the given problem, where \(\mathcal{X}(\mathcal{G})=3\). (b) The number of total conflicts is zero, but the number of used colors can be reduced. (c) A solution using only two colors while \(\mathcal{X}(\mathcal{G})=3\), in which case the number of total conflicts should be minimized.

By contrast, as shown in Fig. 1(b), there is no conflict between the neighboring nodes, but the number of colors used can still be further minimized. In Fig. 1(c), on the other hand, only two colors have been used, and there is one conflict in color. Overall, Fig. 1(a) provides an ideal solution, while the solutions in Fig. 1(b) and (c) may also be applicable in different scenarios.

### _Classical GNNs_

The main difference between graph neural networks and other neural networks is that the nodes (samples) in GNNs can gather information from their neighbors before being projected to the next layer, due to the existence of edges [24, 25]. Aggregating embeddings from neighbors and combining them with the node's own embedding are two critical operators in GNNs, called aggregation and combination. The authors of [18] refer to GNNs with aggregation and combination operators as AC-GNNs. These two operators make GNNs powerful in exploiting and revealing relationships between nodes. The input to GNNs is usually the feature vector or a randomly generated vector for each node. With an adjacency matrix, one node generates a new embedding in the next hidden layer by combining its own embedding with the aggregated embedding of its neighbors. Therefore, embeddings in the first layer contain first-order neighborhood information, and the \(k\)-th hidden layer captures \(k\)-th order neighborhood information. The aggregation and combination operators may differ in various GNNs based on different purposes and optimization tasks. In the following, we briefly review some classical and popular aggregation and combination methods. GCN [1] considers the information of a node and of its neighbors to be of equal importance, and it integrates the aggregation and combination methods. The embedding of the \(k\)-th layer is calculated by \(H^{k}=\sigma(\hat{A}H^{k-1}W^{k-1})\), where \(\hat{A}=\tilde{D}^{-\frac{1}{2}}\tilde{A}\tilde{D}^{-\frac{1}{2}}\) is the normalized adjacency matrix with self-connections, \(\tilde{D}\) is the corresponding degree matrix, and \(W^{k-1}\) is a trainable weight matrix of the \((k-1)\)-th layer. On the other hand, some GNNs separate the aggregation and combination methods, assigning different importance to the node embedding and the aggregated neighborhood embedding. GraphSAGE [2] is one example of this type; the embedding of node \(v\) in the \(k\)-th hidden layer is obtained by \(h_{v}^{k}\leftarrow\sigma(W^{k-1}\cdot CONCAT(h_{v}^{k-1},h_{\mathcal{N}(v)} ^{k}))\), where \(h_{\mathcal{N}(v)}^{k}\gets AGGREGATE(h_{u\in\mathcal{N}(v)}^{k-1})\). There are also many other types of aggregation and combination methods proposed very recently, such as HetGNN [26], DNA [27], non-local graph neural networks [28], and SAR [29].
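To make these two aggregation styles concrete, a minimal sketch of a GCN layer and of a GraphSAGE-style mean aggregation step (of the form that also appears later in Eq. (3)) is given below; the ReLU activation and the random inputs are our own illustrative choices.

```
import numpy as np

def gcn_layer(A, H, W):
    # H^k = sigma(A_hat H^{k-1} W^{k-1}), with A_hat the normalized
    # adjacency matrix including self-connections.
    A_tilde = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_tilde.sum(axis=1)))
    A_hat = d_inv_sqrt @ A_tilde @ d_inv_sqrt
    return np.maximum(A_hat @ H @ W, 0.0)      # ReLU as sigma

def sage_mean_layer(A, H, W_self, W_neigh):
    # GraphSAGE-style update with a mean aggregator: the node's own
    # embedding and the mean neighbor embedding get separate weights.
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1)
    H_neigh = (A @ H) / deg
    return np.maximum(H @ W_self + H_neigh @ W_neigh, 0.0)

rng = np.random.default_rng(0)
A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
H = rng.standard_normal((3, 4))
W1, W2 = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))
print(gcn_layer(A, H, W1).shape)               # (3, 4)
print(sage_mean_layer(A, H, W1, W2).shape)     # (3, 4)
```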
### _Related Work_

Only sporadic work on solving graph coloring problems with the help of graph neural networks has been reported, based on either supervised or unsupervised learning. GNN-GCP [17] generates color embeddings and node embeddings while learning whether a graph is \(k\)-colorable or not. At first, \(2^{15}\) positive and \(2^{15}\) negative GCP instances are generated, with the ground truth given by a GCP solver. The proposed neural network learns the difference between the ground truth and the prediction by using the binary cross entropy as a loss function. The prediction is obtained through a GNN to aggregate neighbor information, an RNN to update the embeddings, and an MLP to get the final logit probability. If the network predicts that a graph is \(k\)-colorable, a second stage is applied by clustering vertex embeddings using k-means, and nodes in the same cluster share the same color. Different from the above work, which relies on supervised learning and a set of pre-solved GCP instances, unsupervised learning has also been adopted to tackle GCPs by constructing a loss function without requiring the ground truth. In general, the outputs of unsupervised learning for GCPs are probability vectors of the nodes, indicating which color should be assigned to each node. GDN [18] uses a margin loss function, which minimizes the distance between a pre-defined margin and the Euclidean distance between node pairs. Schuetz _et al._ [19] take advantage of the close relationship between GCPs and the Potts model, so that the partition function of the Potts model can be converted to the chromatic function of \(\mathcal{G}\) mathematically. Besides, the Potts model only distinguishes whether neighboring spins are in the same state or not, which is very similar to the definition of GCPs. Consequently, a loss function is proposed in PI-GNN that minimizes the inner product of node pairs. Generally, loss functions in unsupervised learning usually aim to minimize the similarity between connected nodes when solving heterophilous problems such as graph coloring problems. Besides, solving GCPs with heuristic algorithms has also been studied. Tabucol [30] is a Tabu-based algorithm that defines Tabu moves and a Tabu list tailored for GCPs. It changes the color (for example, red) of one randomly selected node into another color (green) to reduce the number of conflicts. The color set (green, red) is then put into the Tabu list for a certain number of iterations to restrict this node from changing back to red. An evolutionary algorithm, HybridEA [31], is proposed to modify the traditional ways of generating offspring and to use a Tabu-search method as the mutation operator.

## III The Proposed Graph Neural Network Model

The main difference between heterophilous and homophilous optimization problems is that nodes sharing an edge should be represented differently rather than similarly. For example, in node classification tasks, neighboring nodes should be classified into different classes instead of the same class. Therefore, the embeddings of connected nodes should be as different as possible, and the first-order neighbors need to pass negative messages to the node. On the other hand, the second-order neighborhood may contain information that positively impacts the node. Therefore, inspired by the heterophilous property of graph coloring problems, we propose a new AC-GNN, called GNN-1N, which mainly relies on a first-order (1st) negative message passing strategy. To better solve GCPs, the proposed algorithm should take into account two objectives. One is to find a solution without conflicts, and the other is to achieve fast and stable convergence. Based on these two objectives, a loss function for solving graph coloring problems in an unsupervised way is proposed.

### _Forward Propagation_

The forward propagation of the proposed framework is described in this subsection. We apply GraphSAGE [2] as a baseline GNN model, and some modifications specialized for solving GCPs are made. In the original paper of GraphSAGE, the authors give three aggregation methods: the mean aggregator, the LSTM aggregator, and the pool aggregator. The embedding \(h_{v}^{k}\) of node \(v\) in the \(k\)-th layer using the mean aggregator is calculated as follows: \[\begin{split}& h_{\mathcal{N}(v)}^{k}=mean\{h_{u}^{k-1}\},\forall u \in\mathcal{N}(v),\\ & h_{v}^{k}=\sigma\left(W_{self}^{k}\cdot h_{v}^{k-1}+W_{neigh}^ {k}\cdot h_{\mathcal{N}(v)}^{k}\right),\end{split} \tag{3}\] where \(\mathcal{N}(v)\) is the set of nodes connected to node \(v\), \(h_{\mathcal{N}(v)}^{k}\) is the aggregated embedding of the neighborhood of \(v\), \(\sigma\) is the activation operator, and \(W_{self}^{k}\) and \(W_{neigh}^{k}\) are the learnable weight matrices of the \(k\)-th layer. In this work, the mean aggregator is considered for improvement. At the beginning of forward propagation, the embedding of each node is usually the feature vector. However, some graphs in GCPs may not have features, so the embeddings are generated randomly. In the first hidden layer, nodes gather the first-order neighborhood information, which should make negative contributions.
Thus, the embedding of the first layer is obtained by \[\begin{split}& h_{\mathcal{N}(v)}^{1}=mean\{h_{u}^{0}\},\forall u \in\mathcal{N}(v),\\ & h_{v}^{1}=\sigma\left(W_{self}^{1}\cdot h_{v}^{0}-\alpha\cdot W _{neigh}^{1}\cdot h_{\mathcal{N}(v)}^{1}\right),\end{split} \tag{4}\] where \(h_{v}^{0}\) is the randomly generated feature of node \(v\) in the input layer, and \(\alpha\) is a trainable parameter controlling the negative influence of the neighborhood. Elements in \(W_{self}^{1}\) and \(W_{neigh}^{1}\) are all positive values distributed uniformly between 0 and 1, and \(\alpha\) is initialized to \(0.5\). This strategy is reasonable and natural. For example, assume there is a node \(v\) with two neighbors \(u_{1}\) and \(u_{2}\), whose embeddings are \([0.8,0.6,0.1]\), \([0.7,0.1,0.1]\), and \([0.5,0.1,0.7]\), respectively. If we let \(W_{self}^{1}\) and \(W_{neigh}^{1}\) be identity matrices and \(\alpha=0.5\), the embedding of node \(v\) calculated by Eq. (4) is \([0.5,0.55,-0.1]\). If embeddings also represent the probabilities of the assigned colors, \(v\) and \(u_{1}\) conflict with each other before applying Eq. (4), as both nodes prefer the first of the three given colors. The color assigned to node \(v\) changes to the second one after applying Eq. (4). On the other hand, if Eq. (3) is used as the updating strategy in the first hidden layer, \(v\) and \(u_{1}\) remain in conflict. According to the properties of GNNs, the second hidden layer can aggregate the second-order neighborhood information, which may contribute some helpful positive influence. Therefore, in the second layer, the original mean aggregator of GraphSAGE (Eq. (3)) is used to generate the embeddings \(h_{v}^{2}\). For graph coloring problems, there are two ways to assign colors to nodes. In the first, there are \(k\) sets of nodes, and all nodes in one set have the same color. The second way uses a \(k\)-length probability vector for each node, and the node is assigned the \(i\)-th color if the probability in the \(i\)-th position of the vector is the largest. This work treats GCPs as an unsupervised classification task, and the second way is used to assign the \(k\) colors to the nodes. Therefore, the dimension of \(h_{v}^{2}\) equals the color number \(k\). The probability vector of node \(v\) is obtained as follows: \[p_{v}=softmax\{h_{v}^{2}\}, \tag{5}\] where \(softmax\) is the softmax operator, i.e., \(p_{v}(j)=\frac{e^{h_{v}^{2}(j)}}{\sum_{i=1}^{k}e^{h_{v}^{2}(i)}}\). The final color assigned to \(v\) is the one with the highest probability.
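The worked example above can be reproduced numerically. Below is a minimal NumPy sketch of the first-layer update in Eq. (4); the activation \(\sigma\) is taken as the identity so that the numbers match the example, and embeddings are stored as rows, so the weight matrices act on the right.

```python
import numpy as np

def negative_first_layer(A, H0, W_self, W_neigh, alpha=0.5):
    """Eq. (4): self term minus alpha times the mean neighbor embedding."""
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1.0)
    H_neigh = (A @ H0) / deg                   # mean aggregation over neighbors
    return H0 @ W_self - alpha * (H_neigh @ W_neigh)

# Node v with neighbors u1 and u2, W_self = W_neigh = I, alpha = 0.5:
A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)         # edges v-u1 and v-u2
H0 = np.array([[0.8, 0.6, 0.1],                # v
               [0.7, 0.1, 0.1],                # u1
               [0.5, 0.1, 0.7]])               # u2
H1 = negative_first_layer(A, H0, np.eye(3), np.eye(3), alpha=0.5)
print(H1[0])  # -> [0.5, 0.55, -0.1]; v now prefers the second color
```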
### _Loss Function_ A loss function is proposed to achieve the utility-based and the convergence-based goals in an unsupervised way. The former is to minimize the conflicts between connected nodes with a given number of colors, while the latter is to minimize the uncertainty of nodes, which stabilizes convergence. As no ground truth is available, the utility-based objective function uses the probabilities of nodes to reflect the relationship of connected nodes. The main idea is to maximize the difference, or minimize the similarity, between node pairs. Several loss functions have been proposed based on this idea. Among them, the loss function inspired by the Potts model performs very well and is therefore adopted as the utility-based objective function in this work: \[f_{utility}=\sum_{(u,v)\in\mathcal{E}}p_{v}^{T}\cdot p_{u}. \tag{6}\] Eq. (6) only aims to minimize the inner product between the two probability vectors, which can be seen as minimizing the similarity between two nodes. This is reasonable for solving heterophilous problems such as GCPs. Note that no other _a priori_ knowledge is required, making it suitable for other unsupervised learning problems as well.

Fig. 2: The framework of the proposed method, where the first-order aggregator applies Eq. (4) and the second-order aggregator uses the original aggregator Eq. (3). Between the first and the second hidden layers, dropout is employed to avoid getting stuck in local optima.

In addition to maximizing the difference between connected nodes, we also intend to increase the probability of a node being assigned one particular color, to make the learning process more stable. Therefore, we introduce self-information to construct \(f_{conv}\), which is formulated as follows: \[f_{conv}=-\sum_{i=1}^{n}p_{i}^{T}\cdot\log p_{i}. \tag{7}\] Self-information represents the amount of information in an event. If the value of self-information is large, the event contains more information, indicating high uncertainty; otherwise, the uncertainty of the event is low. In terms of GCPs, high self-information of a node means that the probabilities of being assigned the different colors are approximately equal. This results in unstable convergence, because the color assignment varies greatly under small weight changes. Conversely, nodes with small self-information hold more confidence in their current color assignments, which helps stabilize the convergence. By combining the terms in Eq. (6) and Eq. (7), we get the following loss function: \[\min F=f_{utility}+\lambda f_{conv}, \tag{8}\] where \(\lambda>0\) is a hyperparameter. ### _The Overall Framework_ As shown in Fig. 2, the framework mainly consists of forward propagation and the optimization process. In the forward propagation, a randomly generated embedding \(h_{i}^{0}\) is assigned to the \(i\)-th node. Taking \(n_{1}\) as an example, it aggregates its first-order negative neighborhood embedding in the first hidden layer and its second-order neighborhood embedding in the second layer. Between these two hidden layers, dropout [32] is applied to prevent the solver from getting stuck in a local optimum. After \(n_{1}\) aggregates the two-hop neighborhood information, \(h_{1}^{2}\) is obtained, followed by the softmax function to get a probability vector \(p_{1}\). \(p_{1}\) contains \(k\) elements representing the probabilities of choosing the \(k\) colors. After getting the probability vectors of all nodes, the loss function \(F\) is calculated from its two terms, \(f_{utility}\) and \(f_{conv}\). The AdamW optimizer is used to compute the gradients and update the weights of the graph neural network; a compact sketch of the loss and one optimization step is given below.
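Putting the pieces together, here is a minimal PyTorch sketch of the loss in Eq. (8) and the optimization loop described above. The `edge_index` tensor of edge endpoints is an illustrative convention, and a small MLP stands in for the two GNN layers to keep the sketch self-contained.

```python
import torch

def coloring_loss(p, edge_index, lam=0.25, eps=1e-12):
    """F = f_utility + lam * f_conv for soft assignments p of shape (n, k)."""
    u, v = edge_index
    f_utility = (p[u] * p[v]).sum()           # Eq. (6): inner products over edges
    f_conv = -(p * torch.log(p + eps)).sum()  # Eq. (7): total self-information
    return f_utility + lam * f_conv

n, k = 6, 3
edge_index = torch.tensor([[0, 0, 1, 2, 3], [1, 2, 3, 4, 5]])  # toy graph
h0 = torch.randn(n, 8)                        # randomly generated embeddings
model = torch.nn.Sequential(torch.nn.Linear(8, 16), torch.nn.ReLU(),
                            torch.nn.Dropout(0.1), torch.nn.Linear(16, k))
opt = torch.optim.AdamW(model.parameters(), lr=1e-3)

for _ in range(100):                          # optimization with AdamW
    opt.zero_grad()
    p = torch.softmax(model(h0), dim=1)       # Eq. (5)
    loss = coloring_loss(p, edge_index)
    loss.backward()
    opt.step()
```

Minimizing \(f_{conv}\) drives each \(p_{i}\) toward a one-hot vector, which is exactly the stabilizing effect attributed to the convergence term above.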
## IV Numerical Experiments In this section, experiments on the COLOR dataset [33] are presented first, followed by an application to taxi scheduling. An ablation study is given to demonstrate the stabilizing effect of the proposed loss function, and finally the computational complexity is analyzed. ### _Experiments on COLOR Dataset_ In this experiment, we use the publicly available COLOR dataset to evaluate the performance of the proposed algorithm and its peer methods. The COLOR dataset is a classical, widely used graph dataset in the field of graph coloring problems, where Myciel graphs are based on the Mycielski transformation and Queens graphs are constructed on an \(n\times n\) chessboard with \(n^{2}\) nodes. More detailed information on the graphs can be found in Table I, including the number of nodes and edges in each graph. Eq. (2) is optimized in the following experiments given the color number \(k\), which is also shown in Table I. Five algorithms are chosen as peer algorithms: Tabucol [30], HybridEA [31], GDN [18], PI-GCN [19], and PI-SAGE [19]. Tabucol and HybridEA are Tabu-based heuristic algorithms. The remaining three, GDN, PI-GCN, and PI-SAGE, are GNN-based unsupervised algorithms. All five methods focus on minimizing conflicts with given numbers of colors. GNN-GCP mainly predicts the color number of a given graph and is therefore not included in this comparison. The results in Table I show the number of conflicts in each graph (i.e., the number of conflicting edges; cf. Eq. (6)) found by the proposed GNN-1N and the algorithms under comparison. The results of Tabucol and HybridEA are taken from [18], with a maximum run time of 24 hours per graph, and the results of GDN, PI-GCN, and PI-SAGE are taken from [19]. For a fair comparison, we use the same maximum number of iterations (\(10^{5}\)) to run our method (GNN-1N) on a GPU. Besides, an early stopping mechanism is applied within \(10^{3}\) iterations if the value of the loss function changes by less than \(0.001\). The hyperparameters, including the dimensions of \(h_{i}^{k}\), the learning rate \(\eta\) of the AdamW optimizer, the dropout probability, and \(\lambda\) in the loss function \(F\), are optimized in a similar way to that in [19]. All results listed in Table I are the best coloring results of all methods, from which we can see that the proposed method performs best on all graphs, especially on large and dense graphs such as queen11-11 and queen13-13. By contrast, neither the traditional Tabu-based methods nor the other machine-learning methods can find as few conflicts as GNN-1N does. To gain more insight into the solutions found by our method, we plot the color assignment of queen13-13 in Fig. 3.

Fig. 3: The final color assignment of queen13-13 with 169 nodes given by our method. Only 15 conflicts, highlighted by red lines, exist out of the 3328 edges (grey lines) in this figure.

There are 169 nodes in this graph, and 13 colors should be used to color the nodes. There are 3328 edges plotted with grey lines and 15 conflicting edges highlighted with red lines. The normalized error rate is \(0.45\%=\frac{15}{3328}\times 100\%\), which is relatively small, indicating that our method has a good ability to find low-conflict color assignments. ### _Application_ In this section, we take the taxi scheduling problem as an example to show the ability of GNN-1N to solve graph-structured problems in real life. We consider a simple scenario of assigning taxis to customer requests. Seven customers call a taxi company one day to book taxis. They all plan to book a taxi during the evening rush hour from 17:00 to 18:00, and each confirms a time period. The task is to satisfy all customers' requests with the available number of taxis, assuming that only four taxis are available during this peak hour. To solve this problem, three steps are taken: 1) encoding, 2) optimization, and 3) decoding. The encoding step transfers the given timetable into a graph. As shown in Fig. 4 (a), the timetable lists the departure and arrival times of the seven customers, and each customer is represented by a node in the graph. As customers with overlapping schedules cannot use the same taxi, two nodes share an edge if the corresponding customers' time periods overlap (a minimal sketch of this encoding step follows).
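The sketch below builds such a conflict graph from booking intervals; the three bookings and their times are illustrative assumptions, not the timetable of Fig. 4 (a).

```python
import networkx as nx

bookings = {
    "u1": ("17:05", "17:15"),
    "u2": ("17:20", "17:35"),
    "u3": ("17:10", "17:25"),
}

def to_minutes(t):
    h, m = t.split(":")
    return 60 * int(h) + int(m)

def overlaps(a, b):
    (s1, e1), (s2, e2) = a, b
    return to_minutes(s1) < to_minutes(e2) and to_minutes(s2) < to_minutes(e1)

G = nx.Graph()
G.add_nodes_from(bookings)
names = list(bookings)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        if overlaps(bookings[a], bookings[b]):
            G.add_edge(a, b)   # conflicting customers cannot share a taxi
print(G.edges)  # -> [('u1', 'u3'), ('u2', 'u3')]
```

Coloring `G` with \(k\) colors, one color per taxi, then completes the optimization and decoding steps.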
For example, the arrival time of customer \(u_{1}\) is earlier than the departure time of \(u_{2}\), so \(u_{1}\) and \(u_{2}\) are not connected. On the other hand, the first and the third customers both plan to use a taxi from 17:13 to 17:15. Therefore, \(u_{1}\) and \(u_{3}\) are connected, as shown in Fig. 4 (b). After obtaining the graph description of the relationships between the customers' schedules, the generated graph is optimized by GNN-1N with a specific color number. The color number is the number of available taxis, which is four here. Figure 4 (c) shows the final color assignment with four colors after optimization, and no conflict is found in the solution. According to the color assignment, the seven customers are assigned to four groups, namely \(\{u_{1},u_{2},u_{7}\}\), \(\{u_{3}\}\), \(\{u_{4},u_{6}\}\), and \(\{u_{5}\}\). ### _Ablation Study and Time Complexity Analysis_ An ablation study is conducted to show the effectiveness of the self-information term \(f_{conv}\) (Eq. (7)) in the loss function \(F\). Figure 5 shows the conflicts obtained by training with the loss function with and without \(f_{conv}\) on queen6-6 and queen8-12 over \(10^{5}\) iterations. To be specific, Eq. (6) is the \(f_{utility}\) proposed in [19], and Eq. (8) is the loss function proposed in this paper, which adds the self-information term. The hyperparameters are taken directly from [19], and the \(\lambda\) in Eq. (8) is set to \(0.25\). As we can see in Fig. 5, the conflicts curve obtained with Eq. (6) is unstable and at times fluctuates dramatically. By contrast, the conflicts curve trained with Eq. (8) decreases smoothly. The curves in Fig. 5 demonstrate the stabilizing effect of self-information, which can be attributed to the convergence term in Eq. (8). The runtime (in seconds) is plotted in Fig. 6, which shows the time required by GNN-1N to run \(10^{5}\) iterations on one graph coloring problem. It is reasonable that the runtime increases as the numbers of nodes and edges increase, because the computational cost is concentrated mainly in the node aggregation and backpropagation.

Fig. 4: Taxi scheduling problem. (a) The timetable containing the departure and arrival times of seven customers. (b) Encoding the timetable into a graph coloring problem. (c) Optimizing the graph coloring problem with our unsupervised neural network method and decoding it. Under this scenario, customers and taxis correspond to nodes and colors in the graph coloring problem, respectively.

In general, the time complexity of GNN-1N is similar to that of PI-GNN [19], as no additional computation is added to the proposed algorithm. ## V Conclusion and Future Work The graph coloring problem is a classical graph-based problem aiming to find a conflict-free color assignment using a given number of colors. As the GC problem is NP-hard, it is almost impossible to guarantee an optimal solution in an acceptable time. Moreover, due to its graph structure, such problems cannot be easily and effectively solved by conventional neural networks. In this work, we propose an unsupervised graph neural network (GNN-1N) tailored for solving GCPs, which combines negative message passing with normal message passing to handle heterophily. Besides, a loss function with a utility-based objective and a convergence-based objective is proposed for unsupervised learning. Experimental results on public datasets show that GNN-1N outperforms five state-of-the-art peer algorithms.
In addition, a toy real-world application of graph coloring is given to further demonstrate the effectiveness of GNN-1N. Solving graph-based heterophilous problems with GNNs is still in its infancy. The following three improvements could be made to the proposed model. First, pre-/post-processing could be considered to further decrease the number of conflicts. Second, dynamic graph coloring problems are worth investigating, as conditions may change in the real world. Finally, the fairness of color assignment is an interesting topic to examine. For example, each color should be used roughly the same number of times while ensuring that no connected nodes are in conflict; for the taxi scheduling problem, each taxi should then serve a similar number of customers. Therefore, fair coloring is of great practical importance.
2302.07256
JADES NIRSpec Spectroscopy of GN-z11: Lyman-$α$ emission and possible enhanced nitrogen abundance in a $z=10.60$ luminous galaxy
We present JADES JWST/NIRSpec spectroscopy of GN-z11, the most luminous candidate $z>10$ Lyman break galaxy in the GOODS-North field with $M_{UV}=-21.5$. We derive a redshift of $z=10.603$ (lower than previous determinations) based on multiple emission lines in our low and medium resolution spectra over $0.8-5.3 \mu$m. We significantly detect the continuum and measure a blue rest-UV spectral slope of $\beta=-2.4$. Remarkably, we see spatially-extended Lyman-$\alpha$ in emission (despite the highly-neutral IGM expected at this early epoch), offset 555 km s$^{-1}$ redward of the systemic redshift. From our measurements of collisionally-excited lines of both low- and high-ionization (including [O II]$\lambda3727$, [Ne III]$\lambda 3869$ and C III]$\lambda1909$) we infer a high ionization parameter ($\log U\sim -2$). We detect the rarely-seen N IV]$\lambda1486$ and N III]$\lambda1748$ lines in both our low and medium resolution spectra, with other high ionization lines seen in the low resolution spectrum such as He II (blended with O III]) and C IV (with a possible P-Cygni profile). Based on the observed rest-UV line ratios, we cannot conclusively rule out photoionization from AGN, although the high C III]/He II and N III]/He II ratios are compatible with a star-formation explanation. If the observed emission lines are powered by star formation, then the strong N III]$\lambda1748$ observed may imply an unusually high $N/O$ abundance. Balmer emission lines (H$\gamma$, H$\delta$) are also detected, and if powered by star formation rather than an AGN we infer a star formation rate of $\sim 20-30 M_{\odot} yr^{-1}$ (depending on the IMF) and low dust attenuation. Our NIRSpec spectroscopy confirms that GN-z11 is a remarkable galaxy with extreme properties seen 430 Myr after the Big Bang.
Andrew J. Bunker, Aayush Saxena, Alex J. Cameron, Chris J. Willott, Emma Curtis-Lake, Peter Jakobsen, Stefano Carniani, Renske Smit, Roberto Maiolino, Joris Witstok, Mirko Curti, Francesco D'Eugenio, Gareth C. Jones, Pierre Ferruit, Santiago Arribas, Stephane Charlot, Jacopo Chevallard, Giovanna Giardino, Anna de Graaff, Tobias J. Looser, Nora Luetzgendorf, Michael V. Maseda, Tim Rawle, Hans-Walter Rix, Bruno Rodriguez Del Pino, Stacey Alberts, Eiichi Egami, Daniel J. Eisenstein, Ryan Endsley, Kevin Hainline, Ryan Hausen, Benjamin D. Johnson, George Rieke, Marcia Rieke, Brant E. Robertson, Irene Shivaei, Daniel P. Stark, Fengwu Sun, Sandro Tacchella, Mengtao Tang, Christina C. Williams, Christopher N. A. Willmer, William M. Baker, Stefi Baum, Rachana Bhatawdekar, Rebecca Bowler, Kristan Boyett, Zuyi Chen, Chiara Circosta, Jakob M. Helton, Zhiyuan Ji, Jianwei Lyu, Erica Nelson, Eleonora Parlanti, Michele Perna, Lester Sandles, Jan Scholtz, Katherine A. Suess, Michael W. Topping, Hannah Uebler, Imaan E. B. Wallace, Lily Whitler
2023-02-14T18:53:16Z
http://arxiv.org/abs/2302.07256v2
JADES NIRSpec Spectroscopy of GN-z11: Lyman-\(\alpha\) emission and possible enhanced nitrogen abundance in a \(z=10.60\) luminous galaxy ###### Abstract We present JADES JWST/NIRSpec spectroscopy of GN-z11, the most luminous candidate \(z>10\) Lyman break galaxy in the GOODS-North field with \(M_{UV}=-21.5\). We derive a redshift of \(z=10.603\) (lower than previous determinations) based on multiple emission lines in our low and medium resolution spectra over \(0.8-5.3\,\mu\)m. We significantly detect the continuum and measure a blue rest-UV spectral slope of \(\beta=-2.4\). Remarkably, we see spatially-extended Lyman-\(\alpha\) in emission (despite the highly-neutral IGM expected at this early epoch), offset \(555\,\mathrm{km\,s}^{-1}\) redward of the systemic redshift. From our measurements of collisionally-excited lines of both low- and high-ionization (including [O ii] \(\lambda 3727\), [Ne iii] \(\lambda 3869\) and C iii] \(\lambda 1909\)) we infer a high ionization parameter (\(\log U\sim-2\)). We detect the rarely-seen N iv] \(\lambda 1486\) and N iii] \(\lambda 1748\) lines in both our low and medium resolution spectra, with other high ionization lines seen in the low resolution spectrum such as He ii (blended with O iii]) and C iv (with a possible P-Cygni profile). Based on the observed rest-UV line ratios, we cannot conclusively rule out photoionization from AGN. The high C iii]/He ii ratios, however, suggest a likely star-formation explanation. If the observed emission lines are powered by star formation, then the strong N iii] \(\lambda 1748\) observed may imply an unusually high \(N/O\) abundance. Balmer emission lines (H\(\gamma\), H\(\delta\)) are also detected, and if powered by star formation rather than an AGN we infer a star formation rate of \(\sim 20-30\,M_{\odot}\,\mathrm{yr}^{-1}\) (depending on the IMF) and low dust attenuation. Our NIRSpec spectroscopy confirms that GN-z11 is a remarkable galaxy with extreme properties seen 430 Myr after the Big Bang. ## 1 Introduction Spectroscopically confirming galaxies formed within the first few hundred million years after the Big Bang, and understanding their nature and evolution, represents one of the biggest challenges of modern astrophysics, and one of the main drivers behind the _James Webb Space Telescope (JWST)_. Probing the formation of some of the very first galaxies helps establish the epoch of first light in the Universe, i.e. the timescales of the formation of the first stars, bringing an end to the so-called cosmic Dark Ages. Deep NIRSpec follow-up of some of the highest redshift galaxy candidates has already yielded spectroscopic confirmations via clear detection of the Lyman break in four galaxies at \(z>10\) (Curtis-Lake et al., 2022; Robertson et al., 2022). The onset of the first star formation began the process of reionizing the intergalactic medium (IGM), although the exact details of this reionization are still uncertain. Observations of Lyman-\(\alpha\) emission and absorption provide strong constraints on how and when the diffuse gas in the IGM transitions from neutral to ionized (see Robertson, 2022 for a review). An important observation is the decrease in Lyman-\(\alpha\) emission line equivalent width with increasing redshift above \(z=6\) (Stark et al., 2010; Schenker et al., 2014; Caruana et al., 2014; Jung et al., 2020), consistent with stronger absorption from an increasingly neutral IGM. However, this picture was largely based on moderate-luminosity galaxies.
Luminous galaxies at \(7.5<z<9\) often show Lyman-\(\alpha\) emission (Zitrin et al., 2015; Oesch et al., 2015; Stark et al., 2017; Larson et al., 2022), at a redshift where quasar damping wing studies suggest the IGM is significantly neutral (\(x_{\mathrm{HI}}\sim 0.5\); Greig et al., 2017; Davies et al., 2018; Wang et al., 2020). This indicates that around luminous (and potentially massive) galaxies, Lyman-\(\alpha\) escapes more easily, perhaps as a result of ionized bubbles that grew early in overdense regions (Endsley & Stark, 2022; Jung et al., 2022; cf. Saxena et al. in prep). Another possible Lyman-\(\alpha\) escape mechanism is through resonant scattering, with high velocity neutral gas (perhaps associated with outflows) redshifting the photons to lower frequencies at which they are no longer absorbed by the intervening neutral IGM (Dijkstra, 2014; Mason et al., 2018). Deep spectroscopy with _JWST_ is vastly increasing our knowledge in this area, both by detecting rest-frame optical lines of known Lyman-\(\alpha\) emitters to derive systemic redshifts and details of nebular physical conditions, and by detecting Lyman-\(\alpha\) further into the reionization epoch than has been possible from the ground (e.g. Tang et al., 2023). Before the launch of _JWST_, the most distant galaxy with a tentative but plausible spectroscopic redshift was GN-z11 (Oesch et al., 2016). This was first selected as a likely high redshift Lyman-break candidate through multi-colour imaging with _HST_ (Oesch et al., 2015), and subsequent _HST_/WFC3 slitless grism spectroscopy revealed a possible Lyman break in the continuum (Oesch et al., 2016), yielding a redshift of \(z_{\rm grism}=11.09\). With an apparent \(H_{160}\) magnitude of \(26.0\pm 0.1\), GN-z11 is remarkably bright, up to 3 times more luminous than the characteristic rest-UV luminosity (\(L_{\star}\)) measured from luminosity functions at \(z\sim 6-8\) (e.g. Finkelstein et al., 2015; Bouwens et al., 2015). Using _Spitzer_/IRAC fluxes, Oesch et al. (2016) estimated its stellar mass to be \(M_{\star}=10^{9}\,M_{\odot}\), indicating a rapid build up of stellar mass in the very early Universe. Through deep ground-based near-infrared spectroscopy using MOSFIRE on the Keck Telescope, the redshift of GN-z11 was further refined by Jiang et al. (2021) via the possible detection of the C iii] \(\lambda\lambda 1907,1909\) doublet, yielding a redshift of \(z=10.957\). If real, the intense C iii] emission line might originate partly from an active galactic nucleus (AGN) hosted by the galaxy, or from rapid carbon enrichment (Jiang et al., 2021). Given the unique nature of this source and the low signal-to-noise ratio (\(S/N\)) of the existing continuum break and emission line detections of this galaxy, NIRSpec on _JWST_ now offers a chance to confirm its true distance and nature through high \(S/N\) detection of multiple rest-UV and optical emission lines as well as its bright continuum. Several diagnostics relying on the ratios and strengths of rest-UV and optical emission lines can help differentiate between photoionization due to AGN or star-formation alone, and further help characterize the ionization conditions in the interstellar medium (ISM) of this remarkably luminous distant galaxy.
In this paper, we report an unambiguous spectroscopic redshift of \(z=10.6034\) for GN-z11 using deep NIRSpec observations in the GOODS-North field via the robust detection of several emission lines including N iv] \(\lambda 1486\), N iii] \(\lambda\lambda 1747,1749\), C iii] \(\lambda\lambda 1907,1909\), [O ii] \(\lambda\lambda 3726,3729\), [Ne iii] \(\lambda\lambda 3869,3967\), H \(\delta\) and H \(\gamma\). Although the measured redshift is lower than previously reported in other work, this still places GN-z11 as comfortably the most luminous source currently confirmed at \(z>10\). This galaxy is given the designation JADES-GN-z10-0 in the JADES spectroscopic database, but for the remainder of this paper we use the more familiar name GN-z11. The layout of this paper is as follows. In Section 2 we describe our JWST/NIRSpec observations of GN-z11 and the data reduction strategies adopted. In Section 3 we present the 1D and 2D spectra of GN-z11 and the emission line measurements, and discuss the inferred physical properties. In Section 4 we conclude our findings. Throughout this work, we assume the Planck 2018 cosmology (Planck Collaboration et al., 2020) and the AB magnitude system (Oke and Gunn, 1983). ## 2 Observations The NIRSpec observations of GN-z11 were taken as part of the _JWST_ Advanced Deep Extragalactic Survey (JADES), a collaboration between the Instrument Science Teams of NIRSpec and NIRCam to study galaxy evolution out to high redshift through imaging and spectroscopy in the two GOODS fields. The Guaranteed Time Observations presented here are part of program ID 1181 (P.I.: D. Eisenstein), with spectroscopic targets based on pre-_JWST_ data, largely _HST_ imaging. Our observations of GN-z11 were taken on UT 5 & 7 February 2023, using NIRSpec (Jakobsen et al., 2022) in its micro-shutter array (MSA) mode (Ferruit et al., 2022). The MSA comprises four arrays of \(365\times 171\) independently-operable shutters, each array covering \(98\arcsec\times 91\arcsec\) on the sky. GN-z11 was targeted in four independent MSA configurations. Each configuration acquired 3100 s of integration in each of the medium-resolution G140M/F070LP, G235M/F170LP, and G395M/F290LP grating/filter combinations (with resolving power \(R\approx 1000\) and combined spectral coverage over \(1.1-5.3\,\mu\)m) and 6200 s in the low-resolution PRISM/CLEAR mode (with \(R\sim 100\) and continuous coverage over \(0.8-5.3\,\mu\)m). Targets were assigned three shutter slitlets, with the targets nodded into each of the three shutters during the observing sequence to facilitate background subtraction. As GN-z11 was one of our highest priority targets, we ensured that its spectra did not overlap with other targets, even for the gratings (where the spectra are more extended on the detector than for the low-dispersion prism). Individual integrations used the NRSIRS2 readout mode with 14 groups (1035 s each) to limit correlated readout noise. In total our integration time was 3.45 hours in each of the three gratings, and 6.9 hours in the prism. These observations were processed with algorithms developed by the ESA NIRSpec Science Operations Team and the NIRSpec GTO Team. We refer the reader to Cameron et al. (2023) for more details of the processing. We note that for the G140M/F070LP grating/filter combination we extended the calibration of the spectrum up to 1.84 \(\mu\)m, taking into account the transmission filter throughput beyond the nominal wavelength range of this configuration (\(0.70\,\mu\)m - \(1.27\,\mu\)m).
Since GN-z11 is at \(z>10\) with no flux at wavelengths below Lyman-\(\alpha\), there is no second order light to overlap with the extended wavelength range of \(1.27\,\mu\)m - \(1.84\,\mu\)m. Wavelength-dependent path-loss corrections were made based on the object position within the shutter, modelling the galaxy as a point-like source. GN-z11 is very compact, so this is a good approximation. In all four of the configurations, GN-z11 was located not more than 30% of the illuminated slit width or slit height from the centre along either axis of the \(0\farcs 20\times 0\farcs 46\) slitlet. Individual calibrated one-dimensional (1D) and 2D spectra were combined, excluding bad pixels by using an iterative sigma clipping algorithm at the \(3\sigma\) level. Before the combination process a spatial median of each exposure is subtracted to remove any residual background. 1D spectral extractions were made from the rectified 2D spectra using box extractions of height 3 and 5 pixels (\(0\farcs 3\) and \(0\farcs 5\), respectively). ## 3 Results Our NIRSpec spectra of GN-z11 show well-detected continuum in the prism (Figure 1), where we have \(S/N>20\) per spectral resolution element at wavelengths above Lyman-\(\alpha\) and out to \(\sim 3\,\mu\)m. We also see the continuum at lower \(S/N\) in the medium-resolution gratings (Figure 2, Figure B.1). A strong spectral break at Lyman-\(\alpha\) is observed, with no significant flux at shorter wavelengths. We have robust detections of several emission lines, most of which are seen in both the prism (Figure 1) and grating (Figure 2) spectra. We also see evidence of interstellar absorption lines, but in this paper we focus on the emission line properties. In the subsections below, we use the spectrum of GN-z11 to infer physical properties. As well as using empirical diagnostics from the emission line fluxes and ratios, we also use the BEAGLE Bayesian SED fitting code (Chevallard and Charlot, 2016) on our full prism spectrum; the exact details and results are presented in Table 2 and in Appendix A. ### Emission lines and Redshift Determination The full list of detected lines is given in Table 1. Line wavelengths are measured from the grating spectra because they have higher resolution, resulting in less blending and more accurate line centroids. Line fluxes are measured from both the prism and gratings. We fit each emission line with a single Gaussian model, where the local continuum level and its error are inferred using the sigma-clipped median and standard deviation measured around each emission line. The uncertainties on the Gaussian fit and the continuum level are then added in quadrature to estimate the errors on the measured line fluxes. In determining the redshift from the vacuum rest-frame wavelengths, we exclude Lyman-\(\alpha\) (which has a velocity offset, see Section 3.3) and also Mg ii (which is only significantly detected in the low-resolution prism), and do a weighted fit of 9 well-detected emission lines to give a redshift \(z=10.6034\pm 0.0013\).
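As a cross-check of this determination, each line gives an independent estimate \(z_i=\lambda_{\rm obs}/\lambda_{\rm rest}-1\). A minimal sketch using three of the nine lines from Table 1 follows (the vacuum rest wavelengths, and the unweighted doublet mean for [O ii], are our assumptions; the value quoted above comes from a weighted fit over all nine lines):

```python
lines = {                      # observed (Table 1) vs vacuum rest wavelengths, A
    "[O II] 3727": (43261.02, 3728.48),   # unweighted doublet mean
    "H delta":     (47602.09, 4102.89),
    "H gamma":     (50384.67, 4341.69),
}
zs = [obs / rest - 1.0 for obs, rest in lines.values()]
print(sum(zs) / len(zs))       # -> ~10.603
```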
The redshift we measure is considerably lower than the previously reported values of \(z=11.09^{+0.08}_{-0.12}\) from the _HST_ grism (Oesch et al., 2016) and \(z=10.957\pm 0.001\) from Keck MOSFIRE (Jiang et al., 2021). The 2D _HST_ grism observation shows flux down to the wavelength we measure for the Lyman break (\(1.41\,\mu\)m), but due to noise fluctuations their fitted model break was at a longer wavelength of \(1.47\,\mu\)m. The Keck MOSFIRE redshift was based on possible detections of the [C iii] \(\lambda 1907\) and C iii] \(\lambda 1909\) lines at \(2.2797\,\mu\)m and \(2.282\,\mu\)m respectively, at \(2.6\,\sigma\) and \(5.3\,\sigma\). We do not find any significant emission lines at these observed wavelengths in our data, where they would have been detected at \(20\,\sigma\) and \(40\,\sigma\) for the line fluxes quoted in Jiang et al. (2021). Instead, we do detect C iii], but at a shorter wavelength consistent with our measured \(z=10.603\). ### Is GN-z11 an AGN? GN-z11 has a compact morphology and the continuum spatial extent in our NIRSpec 2D spectroscopy is barely resolved. In a companion paper, Tacchella et al. (2023) analyze JADES NIRCam imaging data and derive the best size constraint so far, finding an intrinsic half-light radius of only \(0.016\pm 0.005\arcsec\) (\(64\pm 20\) pc). The possibility of a significant point source contribution to the total flux leaves open the question of whether some of the light originates from an AGN. Our data do contain several high ionization lines, and we wish to explore the excitation mechanism. We have detected a large number of emission lines of varying ionization potential in GN-z11. In particular, the N iv] \(\lambda 1486\) line (ionization potential \(E>47.5\) eV) is often a signature of an AGN (e.g. Vanden Berk et al., 2001; Vanzella et al., 2010), although it has been seen in some star forming galaxies (e.g., Fosbury et al., 2003 and McGreer et al., 2018). However, the higher ionization nitrogen line N v (\(E>77.5\) eV), which is a clear signature of AGN activity, is not detected in either the grating or the prism spectra. We note that in the prism (\(R\sim 100\)) spectrum we see emission features arising from the blended He ii and O iii] \(\lambda\lambda 1660,1666\) lines, as well as a P-Cygni type feature from C iv (see Figure 2). The resolution of the prism at these short wavelengths is very low, and accurate line flux measurements of blended lines are not possible. The He ii + O iii] and C iv lines, which should otherwise appear deblended in the medium resolution (\(R\sim 1000\)) grating spectra, are unfortunately below the detection limit of our grating spectra. The reliable detection of the C iii] and N iii] lines in the grating spectra, however, enables us to investigate rest-UV line ratios that can be compared with predictions from photoionization models to differentiate between an AGN or a star-formation origin (e.g. Feltre et al., 2016). In Figure 3 we plot our \(2\,\sigma\) limits from the grating spectra on C iii] \(\lambda 1909\)/He ii \(\lambda 1640\) (\(>2.6\)) versus C iii] \(\lambda 1909\)/C iv \(\lambda\lambda 1548,1550\) (\(>2.7\)), along with predictions from the photoionization models of Feltre et al. (2016). Based on these limits, we find that photoionization by an AGN is not favoured, and star formation alone may be able to explain the observed line ratios.

Figure 1: 2D (top) and 1D (bottom) spectra of GN-z11 using the PRISM/CLEAR configuration of NIRSpec. Prominent emission lines present in the spectra are marked. The signal-to-noise ratio (SNR) of the continuum is high, and the emission lines are clearly seen in both the 1D and 2D spectra.

Additionally, using the C iii] and He ii based diagnostics from Nakajima et al. (2018), we find that the
observed strength of C iii] emission and the C iii]/He ii ratio can also be explained by star-formation alone, although the measurements lie very close to the separation between photoionization due to AGN and star-formation. Nakajima et al. (2018) found a parameter space in their diagnostic plots where both AGN and star-forming models could overlap due to low metallicities and high C/O ratios, which is where the limits from GN-z11 suggest it could lie. We note that when considering the photoionization models of Nakajima et al. (2022), the limit on C iii]/C iv we derive is compatible with the envelope of expectations from chemically evolved AGN. A \(2\sigma\) limiting ratio of N iii]/He ii \(>2.4\) is also consistent with photoionization due to star-formation (Hirschmann et al., 2019), and interestingly, no AGN scenario in the models of Hirschmann et al. (2019) predicts an N iii]/He ii ratio greater than 1. Composite models containing contributions from both AGN and star-formation can achieve N iii]/He ii ratios \(\sim 1\), but only star-formation is favoured at ratios \(>1\). Overall, we find that the C iii] and N iii] emission and their ratios with respect to He ii and C iv do not obviously favour photoionization due to AGN. However, the presence of other rare lines (e.g. N iv]) that have previously been observed in the spectra of AGN makes ruling out the presence of an AGN less obvious. Given the expected extreme nature of GN-z11, together with a lack of any observational insights into the expected spectroscopic properties of AGN at \(z>10\), we are unable to draw definitive conclusions about the dominant source of photoionization in GN-z11. Finally, we note that the grating spectra do not show obvious evidence for the presence of a broad component of the permitted lines (see Figure 2), which would be ascribed to the Broad Line Region (BLR) of an AGN. This is not necessarily conclusive proof against the AGN scenario, as the BLR is often obscured along our line of sight in most AGN; however, it is another element consistent with the lack of a significant contribution from an AGN. ### Lyman-\(\alpha\) Emission The prism spectrum shows a near-total Gunn-Peterson trough at wavelengths below Lyman-\(\alpha\), consistent with a highly neutral intervening IGM (Gunn & Peterson, 1965). Although the spectral break is fairly sharp in wavelength at the low dispersion of the prism, we do see some evidence of damping wing absorption. In spectra from the bluest G140M grating, an emission line is seen at 14132 Å, close to the sharp Lyman break observed with the prism. Taking the systemic redshift of GN-z11 to be \(z=10.6034\) (see Section 3.1), the rest-frame wavelength is 1217.92 Å, consistent with being Lyman-\(\alpha\) in emission, but with the line centroid redshifted by 555 km s\({}^{-1}\) (see Figure 4).

Figure 2: Emission lines seen in GN-z11 from the medium resolution gratings, apart from the last row, which shows line emission from the low resolution prism spectrum. The C iv, He ii+O iii], Mg ii and [O iii] \(\lambda 4363\) lines are not detected with high significance in the grating spectra.

This emission line is seen in all 12 of the individual G140M grating exposures (in each of the 3 nod positions in the 4 MSA configurations), and is \(>15\,\sigma\) in the combined grating spectrum. Figure 5 shows a close-up of the G140M spectral region around Lyman-\(\alpha\). There is zero transmitted flux shortward of the systemic Lyman-\(\alpha\) wavelength.
Any such flux would require an ionized bubble around the galaxy, but the lack of such flux rules out an optically-thin H II region around GN-z11 (e.g. Mason & Gronke 2020). The redshifted line is well approximated by a Gaussian, without significant asymmetry. We measure a FWHM of \(566\,\mathrm{km\,s^{-1}}\), which is extended beyond the instrumental line spread function of \(\approx 200\,\mathrm{km\,s^{-1}}\) (de Graaff et al. in prep) for a compact source in the G140M grating at this wavelength. Removing the line spread function in quadrature suggests an intrinsic velocity spread of \(\delta v_{\mathrm{FWHM}}=530\,\mathrm{km\,s^{-1}}\). In the 2D spectrum of Figure 5 it is apparent that the Lyman-\(\alpha\) emission is more spatially extended than the continuum. Whilst the continuum flux is largely contained within 2 pixels (\(0.2\arcsec\)), as expected based on the small size measured in our NIRCam imaging presented in Tacchella et al. (2023), the Lyman-\(\alpha\) emission extends further to the south-west.

\begin{table} \begin{tabular}{l c c c c} \hline Emission line & Observed wavelength (Å) & \(F_{\mathrm{prism}}\)* & \(F_{\mathrm{grating}}\)* & EW\({}_{0}\) (Å) \\ \hline Ly\(\alpha\) & 14132.2 & – & \(22.0\pm 1.0\) & \(18.0\pm 2.0\) \\ N iv] \(\lambda 1486\) & 17251.78 & \(8.6\pm 1.8\) & \(14.7\pm 2.1\) & \(9.0\pm 1.1\) \\ N iii] \(\lambda\lambda 1747,1749\) & 20302.03 & \(7.3\pm 1.0\) & \(12.2\pm 1.3\) & \(9.4\pm 1.0\) \\ C iii] \(\lambda\lambda 1907,1909\) & 22142.4 & \(10.4\pm 1.6\) & \(13.0\pm 1.9\) & \(14.3\pm 2.3\) \\ Mg ii \(\lambda\lambda 2795,2802\) & 32558.7 & \(3.7\pm 0.6\) & – & – \\ [O ii] \(\lambda\lambda 3726,3729\) & 43261.02 & \(9.1\pm 0.4\) & \(11.1\pm 1.1\) & \(52.3\pm 7.7\) \\ [Ne iii] \(\lambda 3869\) & 44894.97 & \(7.9\pm 0.7^{a}\) & \(11.3\pm 1.0\) & \(55.9\pm 15.3\) \\ He i \(\lambda 3889\) & 45127.41 & \(3.1\pm 0.4^{a}\) & \(6.2\pm 0.8\) & \(30.1\pm 7.9\) \\ [Ne iii] \(\lambda 3967+\mathrm{H}\epsilon\) & 46055.43 & \(5.1\pm 0.5\) & \(8.2\pm 1.6\) & \(54.7\pm 11.2\) \\ H \(\delta\) & 47602.09 & \(5.7\pm 0.5\) & \(8.8\pm 1.8\) & \(46.8\pm 14.8\) \\ H \(\gamma\) & 50384.67 & \(10.7\pm 1.1^{a}\) & \(13.9\pm 1.2\) & \(109.7\pm 15.4\) \\ [O iii] \(\lambda 4363\) & & \(2.3\pm 0.7^{a}\) & – & – \\ \multicolumn{5}{l}{\(2\sigma\)_upper limits on non-detections in Grating\({}^{\dagger}\)_} \\ C iv \(\lambda\lambda 1548,1550\) & – & \(<4.8\) & \(<3.2\) \\ He ii \(\lambda 1640\) & – & \(<5.1\) & \(<3.7\) \\ O iii] \(\lambda\lambda 1660,1666\) & – & \(<4.7\) & \(<3.5\) \\ \multicolumn{5}{l}{*Line fluxes are in units of \(\times 10^{-19}\,\mathrm{erg\,s^{-1}\,cm^{-2}}\).} \\ \multicolumn{5}{l}{\({}^{a}\) Partially blended with another line, but it is possible to independently measure the line fluxes.} \\ \multicolumn{5}{l}{\({}^{\dagger}\) Although these lines are seen in the PRISM spectrum, a more careful modelling of the emission lines would be required to de-blend or account for absorption, which is beyond the scope of this paper.} \\ \end{tabular} \end{table} Table 1: Emission line fluxes detected in the prism (\(R\sim 100\)) and grating (\(R\sim 1000\)) spectra. We note that the medium resolution grating is more sensitive to spectrally unresolved emission lines. The rest-frame equivalent widths given are derived using line fluxes measured from the medium resolution gratings, with the continuum measured from the lower resolution spectrum with a higher SNR. The upper limits reported are from medium resolution grating measurements.
Figure 4: Velocity offset of the Ly\(\alpha\) emission line (blue solid line) compared with the H \(\gamma\) line (green dashed line). The Ly\(\alpha\) line is redshifted by \(555\,\mathrm{km\,s^{-1}}\) compared to the redshift derived from other emission lines in the spectrum.

Figure 3: Limits measured on C iii]/He ii vs C iii]/C iv ratios for GN-z11, shown along with predictions for photoionization due to AGN (stars) and star-formation (circles) from Feltre et al. (2016) at a fiducial metallicity of \(Z=0.05\,Z_{\odot}\). Based on the line ratios shown, photoionization due to an AGN is unlikely, with the limits favouring star-formation. We note that the parameter space probed by the predictions for AGN and star-forming galaxies at different metallicities is similarly well separated.

The Lyman-\(\alpha\) extension beyond the continuum is at least 2 pixels, corresponding to an extra 0.8 kpc. We note the Lyman-\(\alpha\) could extend further, since the MSA shutters are only 5 pixels high, so beyond this region there is self-subtraction; however, a visual check of the 2D spectrum without background subtraction did not show Lyman-\(\alpha\) in neighbouring shutters. A similar check on the extent of other well-detected lines in the grating spectra ([O ii] and H\(\gamma\)) shows that these lines have the same spatial profile as their nearby continuum. The fact that Lyman-\(\alpha\) is spatially extended is a remarkable result, which may be suggestive of a Lyman-\(\alpha\) halo. The presence of such haloes around individual star-forming galaxies has been reported at lower redshifts (e.g. Rauch et al., 2008; Wisotzki et al., 2016; Leclercq et al., 2017; Kusakabe et al., 2022), and we may be seeing gas in the circum-galactic medium (CGM), from Lyman-\(\alpha\) fluorescence or shock heating. Using a 3 pixel (0.3 '') extraction aperture, the measured Lyman-\(\alpha\) flux is \((1.4\pm 0.1)\times 10^{-18}\) erg s\({}^{-1}\) cm\({}^{-2}\), with a rest-frame equivalent width (with respect to the continuum longward of the Lyman-\(\alpha\) break) of EW\({}_{0}=12\) Å. Using a larger "full-shutter" extraction aperture of height 5 pixels (0.5 '') gives a significantly higher flux of \((2.2\pm 0.1)\times 10^{-18}\) erg s\({}^{-1}\) cm\({}^{-2}\), and the rest-frame equivalent width rises to EW\({}_{0}=18\) Å. The emission line flux of Lyman-\(\alpha\) is about twice that of H \(\gamma\) (the strongest Balmer line we detect). From Case B recombination, and assuming no dust as found from the Balmer line ratio, Lyman-\(\alpha\) would have about 50\(\times\) the line flux of H \(\gamma\), so it appears to be suppressed by about a factor of 25 (i.e., \(f_{\rm esc,Ly\alpha}=0.04\)), presumably through resonant scattering effects. The discovery of Lyman-\(\alpha\) emission at such high redshift is remarkable, given the expected highly neutral IGM at this epoch, so much earlier than the end of reionization at \(z\approx 6\) (Fan et al., 2001). However, perhaps this result is not so surprising when one considers the high rate of Lyman-\(\alpha\) detection in luminous \(7.5<z<9\) galaxies (Zitrin et al., 2015; Oesch et al., 2015; Stark et al., 2017; Larson et al., 2022) at redshifts prior to complete reionization. GN-z11 is a similarly luminous galaxy with \(M_{\rm UV}=-21.5\), so the effects that make Lyman-\(\alpha\) detectable in such galaxies (see Mason et al., 2018) may be at play for GN-z11.
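The velocity offset and the Lyman-\(\alpha\) escape fraction quoted above follow from simple arithmetic on the measured quantities. A short sketch (the vacuum rest wavelength of Lyman-\(\alpha\) and the Case B ratio of \(\approx 50\) are standard values restated from the text):

```python
C_KMS = 2.998e5              # speed of light, km/s
z_sys = 10.6034
lam_rest = 1215.67           # vacuum Lyman-alpha wavelength, Angstrom
lam_obs = 14132.2            # observed centroid from the G140M grating

lam_sys = lam_rest * (1.0 + z_sys)
dv = C_KMS * (lam_obs - lam_sys) / lam_sys
print(f"velocity offset: {dv:.0f} km/s")   # ~559, i.e. ~555 within rounding

F_lya, F_hgamma = 22.0, 13.9               # Table 1 grating fluxes, 1e-19 cgs
f_esc = (F_lya / F_hgamma) / 50.0          # observed vs Case B Lya/Hgamma
print(f"f_esc(Lya) ~ {f_esc:.2f}")         # ~0.03-0.04
```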
There are two aspects of our Lyman-\(\alpha\) observations that may explain the significant transmission of \(f_{\rm esc,Ly\alpha}=0.04\). Firstly, the large velocity offset of 555 km s\({}^{-1}\) and the rest-frame equivalent width are similar to those measured in luminous \(7.5<z<9\) galaxies with Lyman-\(\alpha\) (Figure 6; see also Tang et al., 2023). Large velocity offsets are key to the escape of Lyman-\(\alpha\) photons from galaxies in a highly-neutral IGM. Since the damping wing of the IGM and proximate H I will absorb photons close to the resonant frequency, photons that escape must resonantly scatter in the wings. Those that scatter far enough to the red may then be able to escape the system without being absorbed (Dijkstra, 2014). Additionally, the intense star formation in luminous galaxies will be driving powerful and fast-moving outflows. Outflows on the far side of the galaxy may provide a redshifted medium from which the photons can backscatter to our line-of-sight with the required velocity offset. In this context, our observation of spatially-extended Lyman-\(\alpha\) emission to the south-west suggests that, if an outflow is present, it would extend in the north-east direction.

Figure 5: Zoom in on the Ly\(\alpha\) emission line in the G140M 1D (lower) and 2D (upper) spectra. The grey dashed line shows the systemic wavelength of the Ly\(\alpha\) transition. The histogram (top-left) of the Ly\(\alpha\) spatial profile (yellow) and that of the continuum (blue) shows that the Ly\(\alpha\) emission from GN-z11 is more extended towards the south-west (up in the MSA shutter in this view).

Figure 6: Lyman-\(\alpha\) velocity offset (\(\Delta v_{\rm Ly\alpha}\)) versus M\({}_{\rm UV}\) for GN-z11 (star) and other high-redshift galaxies, colour coded by Lyman-\(\alpha\) EW. We overplot data of \(z>6\) galaxies with ground-based observations from the literature (Cuby et al., 2003; Pentericci et al., 2011, 2016, 2018; Vanzella et al., 2011; Willott et al., 2013, 2015; Maiolino et al., 2015; Oesch et al., 2015; Stark et al., 2015, 2017; Furusawa et al., 2016; Knudsen et al., 2016; Carniani et al., 2017; Laporte et al., 2017; Mainali et al., 2017; Hashimoto et al., 2019; see Endsley et al. 2022 and Table 4 therein) in circles, and Ly\(\alpha\) emitting galaxies at \(z\sim 7-9\) from CEERS NIRSpec observations (Tang et al., 2023) in squares. The prediction of the correlation between Ly\(\alpha\) velocity offset and M\({}_{\rm UV}\) at \(z=7\) from Mason et al. (2018) is shown by the grey dashed line. GN-z11 has properties similar to the Ly\(\alpha\) emitting galaxies at \(z\sim 7-9\).

### Rest-frame UV properties of GN-z11 From our low-dispersion prism spectra, where we have a high S/N detection of the continuum, we measure a UV spectral slope of \(\beta=-2.36\pm 0.10\) (over the range \(\lambda_{\rm rest}=1500-2600\) Å), consistent with the \(\beta=-2.4\) reported from our NIRCam imaging in Tacchella et al. (2023). We measure a luminosity of \(M_{AB}^{UV}=-21.50\pm 0.02\) over the range \(\lambda_{\rm rest}=1400-1600\) Å (adopting a luminosity distance of 113,148.8 Mpc from our chosen cosmology). This corresponds to a luminosity density of \(L_{\nu}^{\rm UV}=1.7\times 10^{29}\,{\rm erg\,s^{-1}\,Hz^{-1}}\) around \(\lambda_{\rm rest}=1500\) Å.
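For reference, the two continuum measurements above are straightforward to reproduce: the UV slope is a straight-line fit in log-log space, and \(M_{UV}\) follows from placing the quoted luminosity density at 10 pc. The synthetic power-law spectrum below is an illustrative assumption; the \(L_{\nu}\) value is the one quoted in the text.

```python
import numpy as np

# UV slope: fit f_lam ~ lam**beta over the rest-frame fitting window
lam = np.linspace(1500.0, 2600.0, 50)              # Angstrom, rest frame
f_lam = 1e-21 * (lam / 1500.0) ** (-2.36)          # fake continuum
beta = np.polyfit(np.log10(lam), np.log10(f_lam), 1)[0]
print(f"beta = {beta:.2f}")                        # -> -2.36

# Absolute AB magnitude from L_nu at 1500 A (AB zero point 48.60):
L_nu = 1.7e29                                      # erg/s/Hz, value in text
d10_cm = 10.0 * 3.086e18                           # 10 pc in cm
M_uv = -2.5 * np.log10(L_nu / (4 * np.pi * d10_cm**2)) - 48.60
print(f"M_UV = {M_uv:.2f}")                        # -> ~ -21.5
```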
The rest-UV continuum is potentially affected by dust reddening. However, from the prism spectrum, we measure a Balmer line ratio H \(\delta\)/H \(\gamma\) of \(0.53\pm 0.07\), which is very close to, and within the errors of, the intrinsic ratio of 0.55 expected in an H ii region with electron density \(n_{e}=300\) cm\({}^{-3}\) and temperature \(T_{e}=15,000\) K, conditions expected in very high redshift galaxies (e.g. Curti et al., 2023; Katz et al., 2023; Isobe et al., 2023). We note that the wavelength baseline between H \(\delta\) and H \(\gamma\) is short, so it does not provide a large lever arm to quantify the dust attenuation accurately, and also that the H \(\delta\)/H \(\gamma\) ratio of the fluxes from our medium grating spectra is \(0.63\pm 0.14\), above the Case B value of 0.55 but consistent within the errors. Our full-spectrum BEAGLE fitting likewise favours low dust attenuation (\(A_{V}=0.17^{+0.07}_{-0.07}\); Table 2). Our wavelength coverage does not extend to the widely-used [O iii] \(\lambda\)5007; however, we do have a robust detection of [Ne iii] \(\lambda\)3869, which has a similar ionization potential. Hence, we consider the line flux ratio [Ne iii] \(\lambda\)3869 / [O ii] \(\lambda\lambda\)3726, 3729 as a probe of the ionization parameter (\(U\)) - [Ne iii]/[O ii] has been shown to track [O iii]/[O ii] well (e.g. Levesque & Richardson, 2014; Witstok et al., 2021), which is the most widely-used indicator of \(U\). We measure [Ne iii]/[O ii] \(=1.02\pm 0.14\), which is comparable to the redshift \(z\sim 5.5-9.5\) NIRSpec sample presented in Cameron et al. (2023) from our JADES survey, and also to \(z\gtrsim 7\) galaxies observed in the CEERS survey (Tang et al., 2023). Following the calibration set out in Witstok et al. (2021), this corresponds to an ionization parameter of log \(U=-2.06\pm 0.05\). We find a similar value of log \(U=-2.25\pm 0.97\) from our BEAGLE SED fitting. We report a marginal detection of the [O iii] \(\lambda\)4363 line in our prism spectrum (partially blended with H \(\gamma\); Figure 2), which has already been observed in a number of \(z>7\) galaxies (e.g. Curti et al., 2023; Katz et al., 2023). Although this line can in theory be used to derive a \(T_{\rm e}\)-based ('direct method') metallicity, the absence of [O iii] \(\lambda\)5007 from our data means we cannot measure the temperature with the standard approach. The O iii] \(\lambda\lambda\)1660,1666 / [O iii] \(\lambda\)4363 ratio can also be used as a temperature diagnostic, but the low significance of the [O iii] \(\lambda\)4363 detection, coupled with the non-detection of O iii] \(\lambda\lambda\)1660, 1666 in our grating spectrum, means that any derived temperature would be highly uncertain. Thus, we instead consider using strong-line ratios to constrain the metallicity of GN-z11. A widely-used metallicity indicator is R23 (the ratio of [O ii]+[O iii] to H\(\beta\)), but since [O iii] \(\lambda\)5007 and H\(\beta\) fall beyond our spectral coverage, we cannot measure this ratio. We instead consider an analogous ratio of ([Ne iii] \(\lambda\)3869 + [O ii] \(\lambda\lambda\)3727)/H\(\delta\). All three of these emission lines are well detected in our grating spectra, and conveniently lie at very similar wavelengths, which minimizes any uncertainties arising from wavelength-dependent attenuation. We measure a ratio of log(([Ne iii] + [O ii])/H\(\delta\)) \(=0.40\pm 0.08\). Following the calibrations from Witstok et al. (2021) (which provide [O iii]/[Ne iii] \(\approx 15\) at the derived ionization parameter) and assuming H\(\delta\)/H\(\beta=0.268\), this would be equivalent to \(\log R23\approx 0.87\).
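As a cross-check, the strong-line ratios above follow from the grating fluxes in Table 1; a minimal sketch (the 1.33 doublet factor for [O iii] \(\lambda\lambda\)4959,5007 and the use of \(\lambda\)5007 alone in the quoted [O iii]/[Ne iii] ratio are our assumptions):

```python
import numpy as np

F_oii, F_neiii, F_hdelta = 11.1, 11.3, 8.8      # Table 1 grating fluxes

print(F_neiii / F_oii)                          # [Ne III]/[O II] -> 1.02
print(np.log10((F_neiii + F_oii) / F_hdelta))   # -> 0.41, cf. 0.40 +/- 0.08

# Rough R23 proxy: [O III]5007 ~ 15 x [Ne III] at this log U, doublet
# factor 1.33 for 4959+5007, and Hdelta/Hbeta = 0.268 (values in text).
r23 = (F_oii + 1.33 * 15.0 * F_neiii) / (F_hdelta / 0.268)
print(np.log10(r23))                            # -> ~0.86, cf. ~0.87 quoted
```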
These values place GN-z11 in fairly close alignment with the median values presented in the Cameron et al. (2023) sample; their stacked spectra at \(z\sim 6\) (\(z\sim 8\)) show \(\log R23=0.88\) (0.86) and log([Ne iii]/[O ii])\(=0.05\) (0.04). According to the binned average relationships presented in Nakajima et al. (2022), this suggests a metallicity in the range \(7.59<12+\log({\rm O/H})<7.76\), which corresponds to \(0.08-0.12\,Z_{\odot}\) assuming a solar abundance of \(12+\log(O/H)_{\odot}=8.69\). Our BEAGLE SED fitting yields a consistent value of \(Z_{\rm neb}=0.12\pm 0.02\,Z_{\odot}\). In Figure 7 we compare our [Ne iii]/[O ii] and ([Ne iii]+ [O ii])/H\(\delta\) measurements from GN-z11 with measurements from \(z>5.5\) galaxies from Cameron et al. (2023), \(z\sim 0\) galaxies from the SDSS MPA-JHU catalogs\({}^{1}\) (Aihara et al., 2011), and photoionization model grids from Gutkin et al. (2016). This line-ratio diagram is analogous to the widely used R23-O32 'ionization vs. excitation' diagram since, as described above, [Ne iii]/[O ii] traces ionization and ([Ne iii]+ [O ii])/H\(\delta\) traces excitation of both the high- and low-ionization metal ions. Footnote 1: https://www.sdss3.org/dr10/spectro/galaxy_mpajhu.php The Gutkin et al. (2016) models in Figure 7 demonstrate the two-valued nature of ([Ne iii] \(\lambda\)3869 + [O ii] \(\lambda\lambda\)3727)/H\(\delta\) with metallicity. Although the signal-to-noise ratio requirements significantly cut down the available SDSS sample, one can still see clear evidence of this two-valued relation. The \(z>5.5\) sample from Cameron et al. (2023) appears to follow an extrapolation of the low-metallicity (high-ionization) branch of this two-valued sequence. We see that GN-z11 (yellow diamond) lies in good agreement with the sequence formed by these \(z>5.5\) galaxies. It falls on top of the Gutkin et al. (2016) \(Z/Z_{\odot}=0.07\) model line, suggesting \(12+\log({\rm O/H})\approx 7.5\), and lies proximal to the model values with log \(U=-2.0\), consistent with the empirical values derived above. We now consider where GN-z11 might fall on the mass-metallicity relation (see Maiolino & Mannucci, 2019 for a review). The stellar mass estimated from BEAGLE of \(\log(M_{*}/M_{\odot})=8.73^{+0.06}_{-0.06}\) is consistent with that derived from our NIRCam photometry of \(\log(M_{*}/M_{\odot})=9.1^{+0.3}_{-0.4}\) presented in Tacchella et al. (2023), again assuming that the light is dominated by the stellar population rather than an AGN. Our observed spectrum shows no evidence of a Balmer break and, if the continuum is purely stellar, is dominated by a young stellar population. It is possible that a more stochastic star formation history would fit a higher stellar mass. Comparing our metallicity and mass estimates for GN-z11 with the average reported for \(8<z<10\) galaxies in Nakajima et al. (2023), we find GN-z11 is offset to somewhat lower metallicity, albeit within the uncertainty quoted there. We note that the sample presented in that paper is still small, and our understanding of the metallicities of galaxies at \(z>8\) will no doubt continue to evolve significantly over the coming years. We also note that the uncertainties on our derived metallicity are large. In particular, we caution that the set of emission lines used to determine the metallicity presented here has not been robustly calibrated.
The systematic uncertainties associated with this quoted metallicity are likely very high, so robust conclusions cannot be drawn from this about the evolution of the mass-metallicity relation. Further work is needed to robustly calibrate shorter wavelength metallicity diagnostics suitable for the study of \(z>10\) galaxies with NIRSpec. What is more puzzling is the strong N iii] and N iv] emission observed in the rest-frame UV, especially given the absence of O iii] \(\lambda\lambda\)1660,1666 in our grating spectrum (although the blend with He ii is detected in the low-dispersion prism spectrum). The N iii] \(\lambda\)1748 emission line complex is not often seen in the spectra of star-forming galaxies, although it has been observed in stacks of rest-UV galaxy spectra at \(z\sim 3\) (e.g. Saxena et al., 2022). However, it is typically observed to be weaker than the nearby O iii] \(\lambda\lambda\)1660, 1666 lines.

\begin{table} \begin{tabular}{l c} \hline Parameter & GN-z11 \\ \hline log(\(M/{\rm M}_{\odot}\)) & \(8.73^{+0.06}_{-0.06}\) \\ \(\psi/{\rm M}_{\odot}\) yr\({}^{-1}\) & \(18.78^{+0.01}_{-0.09}\) \\ log(\(t/{\rm yr}\)) & \(7.27^{+0.15}_{-0.1}\) \\ log(\(t_{\rm u}/{\rm yr}\)) & \(7.01^{+0.15}_{-0.07}\) \\ log(\(Z_{\rm neb}/Z_{\odot}\)) & \(-0.92^{+0.06}_{-0.08}\) \\ log \(U_{\rm S}\) & \(-2.25^{+0.97}_{-0.07}\) \\ A\({}_{V}\) & \(0.17^{+0.07}_{-0.07}\) \\ log(\(\xi_{\rm ion}/{\rm Hz\,erg^{-1}}\)) & \(25.67^{+0.02}_{-0.02}\) \\ \(f_{\rm esc}\) & \(0.03^{+0.05}_{-0.02}\) \\ \hline \end{tabular} \end{table} Table 2: Estimates of GN-z11 physical parameters derived from BEAGLE SED fitting of the prism spectrum of Figure 1, with the uncertainties giving the extent of the 1\(\sigma\) credible regions: stellar mass (\(M\), accounting for mass returned to the ISM through stellar winds and supernova explosions), star formation rate (\(\psi\)), maximum age of the stars (\(t\)), the mass-weighted age of stars (\(t_{\rm u}\)), nebular metallicity (\(Z_{\rm neb}\)), ionization parameter (log \(U_{\rm S}\)), \(V\)-band dust attenuation (A\({}_{V}\)), ionizing photon production efficiency (\(\xi_{\rm ion}\)) and escape fraction of H-ionizing photons (\(f_{\rm esc}\); see Appendix A for details).

Even making generous assumptions about the ISM conditions, this ratio is difficult to explain without invoking super-solar nitrogen abundance ratios of \(\log(N/O)\gtrsim-0.4\) (i.e. more than two times higher than the solar abundance ratio), which would appear quite unusual with respect to \(z\sim 0-2\) galaxies (e.g. Perez-Montero and Contini 2009; Hayden-Pawson et al. 2022), and strongly inconsistent with canonical chemical evolution models (see Maiolino and Mannucci 2019 or Kobayashi 2022 for reviews). We also detect the C iii] \(\lambda\lambda\)1907,1909 line in our G235M spectrum. This C iii] line has been much more widely observed in star-forming galaxies at high redshift (e.g. Saxena et al. 2022; Arellano-Cordova et al. 2022; Jones et al. 2023), and its presence does not necessarily point to unusual C/O abundance ratios (Arellano-Cordova et al. 2022; Jones et al. 2023). In summary, the emission line ratios measured for GN-z11 suggest a very high ionization parameter and a low oxygen abundance in the vicinity of 10% solar, broadly in line with findings from galaxies at \(z\sim 6-10\) (Cameron et al. 2023; Sanders et al. 2023; Mascia et al. 2023; Nakajima et al. 2023; Tang et al. 2023).
However, the detection of strong N iii] emission suggests unexpected abundance patterns, which may have deeper implications for chemical enrichment histories.

## 4 Conclusions

We present JWST/NIRSpec spectroscopy of one of the most luminous galaxies at \(z>10\). GN-z11 is in the GOODS-North field and had previously been identified as a Lyman break galaxy candidate by Oesch et al. (2015), with a tentative redshift of \(z=11.1\) from a continuum break in slitless HST/WFC3 spectroscopy (Oesch et al. 2016). We see numerous emission lines and a strong Lyman-\(\alpha\) break in our NIRSpec spectroscopy, and we unambiguously measure the redshift to be \(z=10.603\). Our grating spectrum reveals Lyman-\(\alpha\) in emission, making it the first object at \(z>9\) with confirmed Lyman-\(\alpha\) emission. The rest-frame equivalent width is \(W_{0}=18\) Å. The emission is offset 555 km s\({}^{-1}\) redward of the systemic redshift and spatially extended. These properties are consistent with models of Lyman-\(\alpha\) backscattering off the far side of galactic-scale outflows. The NIRSpec spectrum of GN-z11 is remarkably rich in emission lines, enabling us to study the ISM properties at \(z>10\). Based on the high [Ne iii]/[O ii] ratio, we infer a high ionization parameter (\(\log(U)>-2.0\)). We report a significant detection of the very rarely seen N iii] \(\lambda\)1748 line, which could suggest unusually high N/O ratios. While some high-ionization lines are detected, the He ii \(\lambda\)1640 and C iv \(\lambda\lambda\)1548,1550 lines, which are typically associated with photoionization by an AGN, are weak. If this galaxy is indeed powered by star formation, then the Balmer emission lines and UV continuum suggest a current star formation rate of \(\sim 30~{}M_{\odot}\) yr\({}^{-1}\) and low dust attenuation. We have presented a very high signal-to-noise spectrum of a galaxy at \(z>10\), showing continuum and line emission, highlighting the power of our JADES observations not only to measure redshifts but to carry out detailed studies of the physical and chemical properties of galaxies formed within the first few hundred million years after the Big Bang.

###### Acknowledgements.
AB, AS, AC, GCJ, JC, and IEBW acknowledge funding from the "FirstGalaxies" Advanced Grant from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant agreement No. 789056). ECL acknowledges support of an STFC Webb Fellowship (ST/W001438/1). The Cosmic Dawn Center (DAWN) is funded by the Danish National Research Foundation under grant no. 140. RS acknowledges support from a STFC Ernest Rutherford Fellowship (ST/S004851/1). RM, JW, MC, FDE, JS, TL, LS, and WMB acknowledge support from the Science and Technology Facilities Council (STFC) and from the ERC through Advanced Grant 695671 "QUENCH". RM also acknowledges funding from a research professorship from the Royal Society. JW also acknowledges funding from the Fondation MERAC. This research is supported in part by the Australian Research Council Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), through project number CE170100013. FS, EE, DJE, BDJ, MR, BER, IS, and CNAW acknowledge a JWST/NIRCam contract to the University of Arizona, NAS5-02015. DJE is also supported as a Simons Investigator. SC acknowledges support from the European Union's HE ERC Starting Grant No. 101040227 - WINGS.
SA, BRDP, and MP acknowledge support from the research project PID2021-127718NB-I00 of the Spanish Ministry of Science and Innovation/State Agency of Research (MICIN/AEI). HD gratefully acknowledges support from the Isaac Newton Trust and from the Kavli Foundation through a Newton-Kavli Junior Fellowship. Funding for this research was provided by the Johns Hopkins University, Institute for Data Intensive Engineering and Science (IDIES). RB acknowledges support from an STFC Ernest Rutherford Fellowship [grant number ST/T003595/1]. MP also acknowledges support from the Programa Atracción de Talento de la Comunidad de Madrid via grant 2018-T2/TIC-11715. LW acknowledges support from the National Science Foundation Graduate Research Fellowship under Grant No. 2016-213719. DP acknowledges support from the Huo Family Foundation through a P.C. Ho PhD Studentship. This work is based [in part] on observations made with the NASA/ESA/CSA James Webb Space Telescope. The data were obtained from the Mikulski Archive for Space Telescopes at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-03127 for JWST. These observations are associated with program #1181.
2301.06418
Mind the Gap: Modelling Difference Between Censored and Uncensored Electric Vehicle Charging Demand
Electric vehicle charging demand models, with charging records as input, will inherently be biased toward the supply of available chargers. These models often fail to account for demand lost from occupied charging stations and competitors. The lost demand suggests that the actual demand is likely higher than the charging records reflect, i.e., the true demand is latent (unobserved), and the observations are censored. As a result, machine learning models that rely on these observed records for forecasting charging demand may be limited in their application in future infrastructure expansion and supply management, as they do not estimate the true demand for charging. We propose using censorship-aware models to model charging demand to address this limitation. These models incorporate censorship in their loss functions and learn the true latent demand distribution from observed charging records. We study how occupied charging stations and competing services censor demand using GPS trajectories from cars in Copenhagen, Denmark. We find that censorship occurs up to $61\%$ of the time in some areas of the city. We use the observed charging demand from our study to estimate the true demand and find that censorship-aware models provide better prediction and uncertainty estimation of actual demand than censorship-unaware models. We suggest that future charging models based on charging records should account for censoring to expand the application areas of machine learning models in supply management and infrastructure expansion.
Frederik Boe Hüttel, Filipe Rodrigues, Francisco Câmara Pereira
2023-01-16T13:19:18Z
http://arxiv.org/abs/2301.06418v4
Mind The Gap - Modelling Difference Between Censored and Uncensored Electric Vehicle Charging Demand ###### Abstract Electric vehicle charging demand models, with charging records as input, will inherently be biased toward the supply of available chargers, as the data do not include demand lost to occupied stations and competitors. This lost demand implies that the records only observe a fraction of the total demand, i.e. the observations are censored, and the actual demand is likely higher than what the data reflect. Machine learning models often neglect to account for this censored demand when forecasting the charging demand, which limits the models' applications in future expansion and supply management. We address this gap by modelling the charging demand with probabilistic censorship-aware graph neural networks, which learn the latent demand distribution in both the spatial and temporal dimensions. We use GPS trajectories from cars in Copenhagen, Denmark, to study how censoring occurs and how much demand is lost due to occupied chargers and competing services. We find that censorship varies throughout the city and over time, motivating spatial and temporal modelling. We find that in some regions of Copenhagen, censorship occurs up to \(61\%\) of the time. Our results show that censorship-aware models provide better prediction and uncertainty estimation of actual future demand than censorship-unaware models. Our results suggest that future models based on charging records should account for censoring to expand the application areas of machine learning models in supply management and infrastructure expansion. Electric mobility, Electric Vehicle charging demand, Latent mobility demand, Bayesian modelling. ## 1 Introduction Electric vehicle (EV) charging stations are increasingly prominent in energy and transportation infrastructure planning (Bauer et al., 2021; Jakobsen et al., 2020), and adequate infrastructure is one of the main barriers to large-scale EV adoption (Yi and Shirk, 2018). However, planning and expanding charging infrastructure is a complex procedure that needs to account for factors such as charging station placement, demand predictions, and integration with existing power systems and road infrastructure (Deb, 2021). Recently, machine learning has been used to tackle some of the challenges mentioned above, particularly charging demand prediction, a problem well suited for machine learning (Buzna et al., 2019). Forecasts of the charging demand from machine learning models can inform energy providers and operators of current stations about the energy requirements. A distinct property of charging stations is that they are a _shared_ service. The shared nature of charging stations has implications for how models forecast the charging demand, as shared mobility services often experience _censorship_, where demand is lost because of limited supply (Hüttel et al., 2022). Forecasters tend to neglect this when modelling the charging demand with historical data: the observations depend highly on the available supply and might not reflect the _actual_ behaviour of the service users, which introduces biases into the models (Gammelli et al., 2020) and limits their application in strategic expansion. Specifically, a charging station with two plugs can never record a demand above two cars charging, even though more cars potentially need to recharge. 
The capacity of a charging station acts as an upper limit for the demand each station can observe, so the data only contain a fraction of the true unobserved (latent) demand distribution. Censorship-unaware models do not forecast the unobserved demand; therefore, they can only assess the demand at the current stations, not at future ones. Another example of censorship in EV charging demand is that charging station operators lose demand to competing services. Users might decide to use a competing service, which censors the data the service providers observe. Therefore, operators might have demand observations that do not reflect the actual demand around their stations, because part of it was lost to competitors. A censorship-unaware model will limit the operator's ability to make strategic moves in this competitive environment. For the case of EV charging, we argue that censoring mainly occurs in the two ways mentioned above: first, when an EV wants to charge but other cars occupy nearby charging stations1 [Larsen, 2016], and second, when a service provider loses demand to a competing service [Li et al., 2021]. Figure 1 illustrates the two scenarios. Decisions based on censorship-unaware models will rest on biased underestimations of the actual demand, leading to inefficient charging station operation [Gammelli et al., 2022]. Therefore, we argue that planners of future charging infrastructure must be aware of the gap between the observed and unobserved demand distributions and account for it in future modelling approaches. Footnote 1: Often referred to as frustrated demand. To address this gap, we propose to model the charging demand with censorship-aware models, giving an unbiased estimate of the charging demand. Specifically, we focus our modelling on probabilistic censored neural networks due to their flexibility and scalability. We adopt spatial and temporal model architectures, as the demand will likely fluctuate in these two dimensions [Hüttel et al., 2021], whereas traditional time series models neglect spatial fluctuations [Kim and Kim, 2021]. We investigate the two censorship scenarios mentioned above, with demand lost to occupied chargers and to competing services. We organise the rest of the work as follows. Section 2 reviews related work on modelling EV charging demand and on censored modelling in the transport and machine learning domains. Section 3 introduces two censorship-aware models with different distributional assumptions on the charging demand: first, the classical Tobit model, which assumes a Gaussian demand distribution, and second, a quantile regression approach, which offers a semi-non-parametric fit of the demand distribution. It also covers how to model the spatial and temporal correlations between stations with graph neural networks. Section 4 introduces the data we base our models on, and section 5 describes our experimental setup, where we compare censorship-aware models with unaware models. In section 6, we discuss some of the advantages and limitations of our modelling approach. Lastly, section 7 concludes with our main findings and outlines future research directions. ## 2 Related Work Modelling EV charging demand is a complex problem and has been studied from many angles, such as simulation-based modelling [Jin et al., 2023], queuing theory [Rich et al., 2022] and applications of statistical models [Amara-Ouali et al., 2022]. In this review, we focus on applications of machine learning models to model the charging demand. 
Lastly, we cover censored modelling approaches in the machine learning context and the mobility demand setting. Figure 1: Two examples of how censoring of EV charging demand occurs. Left: charging demand lost to occupied charging stations (lost opportunities). Right: charging demand lost to competing services. ### Electric Vehicle Charging Demand Previous studies of EV charging demand vary spatially from the scale of a single station (Zhu et al., 2019) to city-wide (Li et al., 2021) and country-level demand predictions (Kim and Kim, 2021). Temporally, the forecasting horizon varies from short-term forecasts of 15 minutes to day-ahead forecasting, and even longer horizons for expansion planning, often dictated by the application of the model. A substantial point of variability is the machine learning and statistical models themselves. Traditional statistical models, such as the Autoregressive Integrated Moving Average (ARIMA, and variations thereof), have been used to forecast the EV charging demand (Amini et al., 2016; Louie, 2017), with extensions to non-parametric estimation based on quantile regression (Buzna et al., 2021; Huber et al., 2020). Statistical models often provide interpretable parameters for the input features, which is helpful for policy and decision-making. However, machine learning models tend to show better predictive performance at the cost of interpretability (Buzna et al., 2019). Many machine learning models have been used to model the demand, with random forests (Almaghrebi et al., 2020; Lu et al., 2018; Ullah et al., 2021), support vector machines (Almaghrebi et al., 2020; Majidpour et al., 2016; Sun et al., 2016; Xydas et al., 2013), and gradient boosting (Almaghrebi et al., 2020; Buzna et al., 2019) as the primary models. The modelling often includes temporal and external features to improve predictive performance. Over the last few years, recurrent neural networks (RNNs), which learn the temporal features of a signal, have been used in demand forecasting due to their ability to handle large datasets (Yi et al., 2021). Specifically, the long short-term memory (LSTM) variant is extensively used to forecast demand (Boulakhbar et al., 2022; Kim and Kim, 2021; Ma and Faye, 2022; Van Kriekinge et al., 2021; Zhu et al., 2019). However, such models are restricted to the temporal dimension, as they do not capture the complex spatial correlations between individual charging stations. Therefore, researchers are applying temporal graph neural networks, which combine LSTMs and graph convolutions, to use both spatial and temporal features in the forecast (Hüttel et al., 2021; Li et al., 2021). A last point of variability is the datasets used for modelling the demand. In general, there is a lack of open-world charging datasets due to privacy or data ownership issues regarding charging records, leading researchers to other data sources to estimate charging demand. Tu et al. (2016) used GPS trajectories to estimate the EV charging demand of electric taxis in Shenzhen, China, by modelling the energy consumption of the taxis and the dynamics of charging stations. Another example is Li et al. (2021), who use the daily traffic flow to estimate the proportional charging demand of cars. 
Recently, researchers have started to use _observed_ charging records from charging stations as the basis for the demand (Amara-Ouali et al., 2022; Boulakhbar et al., 2022; Buzna et al., 2021; Flammini et al., 2019; Hüttel et al., 2021; Kim and Kim, 2021; Ma and Faye, 2022; Van Kriekinge et al., 2021). Charging records are an efficient way to estimate the energy and power demand of the charging stations and to inform energy providers (Boulakhbar et al., 2022). However, such models will not extrapolate beyond the supply, limiting their applicability to infrastructure expansion. When charging records are the basis for demand modelling, the data will inherently be biased by the availability and supply of chargers (Gammelli et al., 2020; Hüttel et al., 2022). Moreover, the data do not account for demand lost to competing services. Consequently, the _actual_ charging demand is typically latent (i.e., unobserved), and its observations are likely to lie below it; namely, they are _right-censored_. This censoring matters when charging station operators need to plan expansion, provide waiting-time predictions, or schedule maintenance operations (Gammelli et al., 2022b). ### Censored Modelling of Mobility Demand Gaussian processes (GPs) have previously been the go-to model for modelling censored distributions in the mobility research field (Gammelli et al., 2020, 2022). The GP offers a probabilistic fit of the latent demand distribution, assuming it follows a Gaussian distribution with a Tobit likelihood function. However, the GP faces scalability limitations when the dataset becomes too large (Liu et al., 2020). As alternatives to the GP, censored variations of SVMs (Shim and Hwang, 2009) and random forests (Li and Bradic, 2020) exist, which are yet to be applied in mobility research. The trend in transportation research and EV charging demand modelling is that researchers increasingly use larger datasets, which forces models to scale (Mao et al., 2016; van Cranenburgh et al., 2022). Therefore, in earlier work (Hüttel et al., 2022), we proposed to model the latent demand distribution using neural networks with a censored quantile regression approach, leveraging their flexible nonlinear modelling capabilities. The censored quantile neural network assumes an asymmetric Laplace likelihood function, which is common in Bayesian inference for censored quantile regression (Yang et al., 2016; Yu and Stander, 2007). It does not assume any parametric form for the latent demand distribution, and the demand can follow a non-symmetrical distribution. A further advantage of neural networks is that they scale to large datasets and can fit multiple data modalities (Goodfellow et al., 2016; LeCun et al., 2015). Outside the mobility demand context, censored quantile neural networks have been applied to biomedical data (Jia and Jeong, 2022; Pearce et al., 2022). As highlighted by Hüttel et al. (2022) and Pearce et al. (2022), applications of censored neural networks remain scarce. In conclusion, we have not found any studies modelling censored EV charging demand in a machine-learning context, despite the prevalence of censorship in transportation research, demand settings and shared mobility services. We therefore propose to estimate the true latent distribution of EV charging demand, using the flexibility of neural networks to capture spatial and temporal correlations with temporal graph neural networks. 
## 3 Methodology

This section first introduces two approaches to modelling the true latent demand distribution in the context of neural networks: the Tobit model and the censored quantile regression model. We present the likelihood functions for both under right censorship. Afterwards, we cover how to capture the spatial-temporal correlation structures with temporal graph neural networks and how to combine these models with the likelihood functions.

### Censored modelling

A dataset consisting of \(n\) observations with features \(\{\mathbf{x}_{1},\ldots,\mathbf{x}_{n}\}\) and a corresponding set of target variables \(\{y_{1},\ldots,y_{n}\}\) is censored if the target values are clipped observations of corresponding latent (true) values \(\{y^{*}_{1},\ldots,y^{*}_{n}\}\). For each observation, there exists a threshold value \(\tau_{i}\) from the set \(\{\tau_{1},\ldots,\tau_{n}\}\) that makes the observation either left-censored or right-censored, such that:

\[y_{i}=\begin{cases}y^{*}_{i}\,,&y^{*}_{i}>\tau_{i}\\ \tau_{i}\,,&y^{*}_{i}\leq\tau_{i}\end{cases}\text{ in left-censorship,} \tag{1}\]

\[y_{i}=\begin{cases}y^{*}_{i}\,,&y^{*}_{i}<\tau_{i}\\ \tau_{i}\,,&y^{*}_{i}\geq\tau_{i}\end{cases}\text{ in right-censorship.} \tag{2}\]

The threshold value \(\tau_{i}\) can be either observed, unobserved, fixed or stochastic (Hüttel et al., 2022). In right censoring, \(\tau_{i}\) acts as an upper limit for the observations; in left censoring, a lower limit. In the case of EV charging demand, the observations will typically be right-censored by the current supply.

### Parametric Distribution Estimation

A commonly used model for censored modelling is the _Tobit_ model (Tobin, 1958). The model assumes that the latent value \(y^{*}_{i}\) follows a Gaussian distribution. In the context of neural networks, a Tobit model estimates both a mean function \(\mu_{\beta}(\mathbf{x}_{i})\) and a standard-deviation function \(\sigma_{\beta}(\mathbf{x}_{i})\):

\[f_{\beta}(\mathbf{x}_{i})=\mathcal{N}\left(y^{*}_{i}\,|\,\mu_{\beta}(\mathbf{x}_{i}),\sigma_{\beta}(\mathbf{x}_{i})\right)\,, \tag{3}\]

where \(\sigma_{\beta}(\mathbf{x}_{i})\) is constrained to positive values. The parameters \(\beta\) can be estimated with back-propagation on the negative log-likelihood of the censored distribution. The likelihood for a right-censored Tobit model is given by:

\[\mathcal{L}\left(y\,|\,\beta,\mathbf{x}\right)=\prod_{i=1}^{n}\left\{\frac{1}{\sigma_{\beta}(\mathbf{x}_{i})}\varphi\left(\frac{y_{i}-\mu_{\beta}(\mathbf{x}_{i})}{\sigma_{\beta}(\mathbf{x}_{i})}\right)\right\}^{(1-l_{i})}\left\{1-\Phi\left(\frac{y_{i}-\mu_{\beta}(\mathbf{x}_{i})}{\sigma_{\beta}(\mathbf{x}_{i})}\right)\right\}^{l_{i}}, \tag{4}\]

where \(\varphi\) is the probability density function (PDF) of \(\mathcal{N}(0,1)\), \(\Phi\) is its cumulative distribution function (CDF), and, for a given fixed threshold \(\tau_{i}\),

\[l_{i}=\begin{cases}0\,,&y_{i}<\tau_{i}\\ 1\,,&y_{i}=\tau_{i}\end{cases}\,. \tag{5}\]

This leads to the corresponding negative log-likelihood function

\[-\log\mathcal{L}\left(y\,|\,\beta,\mathbf{x}\right)=-\sum_{i=1}^{n}\left[(1-l_{i})\log\left(\frac{1}{\sigma_{\beta}(\mathbf{x}_{i})}\varphi\left(\frac{y_{i}-\mu_{\beta}(\mathbf{x}_{i})}{\sigma_{\beta}(\mathbf{x}_{i})}\right)\right)+l_{i}\log\left(1-\Phi\left(\frac{y_{i}-\mu_{\beta}(\mathbf{x}_{i})}{\sigma_{\beta}(\mathbf{x}_{i})}\right)\right)\right]\,. \tag{6}\]
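To make Equation 6 concrete, the sketch below implements it as a training loss in TensorFlow (the framework our models are later implemented in). It is a minimal illustration, not our exact implementation: the two output heads for \(\mu_{\beta}\) and \(\log\sigma_{\beta}\), the tensor shapes and the small stabilising constant are assumptions.

```python
import numpy as np
import tensorflow as tf

LOG_2PI = np.log(2.0 * np.pi)

def tobit_right_censored_nll(y, tau, mu, log_sigma):
    """Negative log-likelihood of Equation (6) for right-censored targets.

    y         : observed demand, clipped at the threshold tau
    tau       : censoring threshold (here: the station's capacity)
    mu        : network head for the latent Gaussian mean
    log_sigma : network head; sigma = exp(log_sigma) enforces sigma > 0
    All arguments are float tensors of the same shape.
    """
    sigma = tf.exp(log_sigma)
    z = (y - mu) / sigma
    l = tf.cast(y >= tau, y.dtype)  # l_i = 1 when the observation sits at the threshold

    # log of the Gaussian density term (the uncensored case)
    log_density = -0.5 * tf.square(z) - log_sigma - 0.5 * LOG_2PI
    # log(1 - Phi(z)) via the complementary error function, with a small
    # constant added for numerical stability
    log_survival = tf.math.log(0.5 * tf.math.erfc(z / np.sqrt(2.0)) + 1e-12)

    return -tf.reduce_sum((1.0 - l) * log_density + l * log_survival)
```

In practice, `mu` and `log_sigma` would be two output heads of the spatial-temporal network described below.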
### Non-Parametric Distributions

A non-parametric distribution can be modelled through censored quantile regression (Yu and Stander, 2007) as an alternative to the parametric Gaussian assumption. Quantile regression estimates the \(\theta\)-quantile of the true latent distribution, and multiple quantiles can be combined to model the distribution entirely. Naturally, \(\theta\) must be constrained to \(0<\theta<1\). Then \(q_{\theta,i}\) is the \(\theta\)-quantile estimate of the label \(y_{i}^{*}\):

\[f_{\beta}(\mathbf{x}_{i})=q_{\theta,i}\,. \tag{7}\]

For a quantile regression model with left censorship and a specified quantile \(\theta\), the following likelihood function can be used for Bayesian inference of the parameters \(\beta\) (Yu and Stander, 2007):

\[\mathcal{C}\left(y\,|\,\beta,\mathbf{x},\theta\right)=\theta^{N}\left(1-\theta\right)^{N}\exp\Bigg{\{}-\sum_{i=1}^{N}\rho_{\theta}\left(y_{i}-\max\left\{\tau_{i},\hat{q}_{i,\theta}\right\}\right)\Bigg{\}}\,, \tag{8}\]

where \(\rho_{\theta}\) is the tilted loss:

\[\rho_{\theta}(r)=\max\{\theta r\,,(\theta-1)r\}\,. \tag{9}\]

A limitation of quantile regression (Equation 7) is that one model only fits a single quantile, so multiple models must be estimated to learn the entire target distribution. However, a whole set of quantiles \(\{q_{\theta_{k}}\}_{k=1}^{K}\) can be estimated jointly using multi-output neural networks without adding substantial computational cost. For this extension, we obtain the following multi-output likelihood for censored quantile regression (Hüttel et al., 2022):

\[\mathcal{C}\left(y\,|\,\beta,\mathbf{x},\{\theta_{k}\}_{k=1}^{K}\right)=\sum_{k=1}^{K}\left(\theta_{k}^{N}\left(1-\theta_{k}\right)^{N}\exp\Bigg{\{}-\sum_{i=1}^{N}\rho_{\theta_{k}}\left(y_{i}-\max\left\{\tau_{i},\hat{q}_{\theta_{k},i}\right\}\right)\Bigg{\}}\right)\,. \tag{10}\]

Since the likelihood functions above are formulated for left censoring, we negate the target values to turn the right-censoring scheme into a left-censoring one.
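As a minimal sketch, the training loss behind Equation 10 can be written as below, assuming the targets have already been negated so that the left-censored form applies as written, and dropping the constant \(\theta^{N}(1-\theta)^{N}\) factor, which does not depend on \(\beta\). Shapes and names are illustrative.

```python
import tensorflow as tf

QUANTILES = tf.constant([0.05, 0.5, 0.95])  # the jointly estimated set {theta_k}

def censored_tilted_loss(y, tau, q_hat, quantiles=QUANTILES):
    """Censored tilted loss behind Equation (10) for a batch of observations.

    y, tau : tensors of shape (batch,) with (left-)censored targets and thresholds
    q_hat  : tensor of shape (batch, K), one predicted quantile per column
    """
    y = tf.expand_dims(y, -1)       # (batch, 1), broadcast against the K quantiles
    tau = tf.expand_dims(tau, -1)   # (batch, 1)
    q_eff = tf.maximum(tau, q_hat)  # the max{tau_i, q_hat} term of Equation (8)
    r = y - q_eff
    # rho_theta(r) = max{theta * r, (theta - 1) * r}, applied per quantile
    return tf.reduce_sum(tf.maximum(quantiles * r, (quantiles - 1.0) * r))
```

Minimising the sum of these per-quantile terms corresponds to maximising the likelihood terms inside Equation 10 up to constants.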
### Spatial-Temporal Modeling

_Spatial modelling._ We propose to model the demand with temporal graph neural networks to capture the spatial and temporal correlation structures between charging stations (Zhao et al., 2020). As the name implies, the spatial structure is modelled using a _graph_, \(G=(V,E)\), where \(V=\{v_{1},v_{2},\ldots,v_{m}\}\) is the set of nodes and \(E\) is the set of edges between the nodes. For a graph with \(m\) nodes and a temporal signal \(\mathbf{x}_{t}\in\mathbb{R}^{m\times c}\) at timestep \(t\), where \(c\) is the number of input features per node, the spatial-temporal modelling can be formulated as using the last \(L\) timesteps to forecast the next \(Q\) timesteps (Tygesen et al., 2022):

\[f_{\beta}(G;\mathbf{x}_{t-L+1:t})=p(\mathbf{x}_{t+1:t+Q}\,|\,G;\mathbf{x}_{t-L+1:t},\beta)\,. \tag{11}\]

We model \(G\) as a weighted graph, using the geographical locations of the nodes and the distances between them. We model the weight of an edge between two nodes \(i\) and \(j\) as:

\[e_{ij}=\exp(-h(z_{i},z_{j}))\,, \tag{12}\]

where \(z_{i}\) and \(z_{j}\) are the latitude and longitude of the nodes and \(h\) is the _Haversine distance_ between the nodes in km (Hüttel et al., 2021). The set \(E\) defines the adjacency matrix \(A\in\mathbb{R}^{m\times m}\), representing the network's topology. To capture features from the topological structure, a graph convolutional network (GCN) uses graph convolutions to model the relationships between nodes in a graph (Kipf and Welling, 2016). A 2-layered GCN model is formulated as follows:

\[f(A,\mathbf{x}_{t})=\sigma\left(\widehat{A}\,\mathrm{Relu}\left(\widehat{A}\mathbf{x}_{t}W_{0}\right)W_{1}\right)\,, \tag{13}\]

where \(\mathbf{x}_{t}\) is the feature matrix, \(A\) is the adjacency matrix, \(\widehat{A}=\widetilde{D}^{-\frac{1}{2}}\widetilde{A}\widetilde{D}^{-\frac{1}{2}}\) denotes a preprocessing step, \(\widetilde{A}=A+I_{m}\) is the adjacency matrix with self-connections, and \(\widetilde{D}\) is the degree matrix with \(\widetilde{D}_{ii}=\sum_{j}\widetilde{A}_{ij}\). \(W_{0}\) and \(W_{1}\) are the weight matrices of the first and second layers, respectively, and \(\sigma(\cdot)\) and \(\mathrm{Relu}(\cdot)\) are activation functions (Zhao et al., 2020).

_Temporal modelling._ The GCN extends to a temporal signal by combining the GCN with long short-term memory layers (T-GCN) [Hochreiter and Schmidhuber, 1997, Zhao et al., 2020]. The key equations of the T-GCN with an LSTM cell can be summarised as follows, where \(f(A,\mathbf{x}_{t})\) is the graph convolution from Equation 13:

\[i_{t}=\sigma_{g}\left(W_{i}f(A,\mathbf{x}_{t})+U_{i}h_{t-1}+b_{i}\right)\,, \tag{14}\]
\[f_{t}=\sigma_{g}\left(W_{f}f(A,\mathbf{x}_{t})+U_{f}h_{t-1}+b_{f}\right)\,, \tag{15}\]
\[o_{t}=\sigma_{g}\left(W_{o}f(A,\mathbf{x}_{t})+U_{o}h_{t-1}+b_{o}\right)\,, \tag{16}\]
\[\tilde{c}_{t}=\sigma_{c}\left(W_{c}f(A,\mathbf{x}_{t})+U_{c}h_{t-1}+b_{c}\right)\,, \tag{17}\]
\[c_{t}=f_{t}\odot c_{t-1}+i_{t}\odot\tilde{c}_{t}\,, \tag{18}\]
\[h_{t}=o_{t}\odot\sigma_{h}\left(c_{t}\right)\,. \tag{19}\]

The matrices \(W_{\{i,f,o,c\}}\) and \(U_{\{i,f,o,c\}}\) and the vectors \(b_{\{i,f,o,c\}}\) contain the trainable weights, \(\odot\) denotes the Hadamard product, and \(c_{0}=0\) and \(h_{0}=0\). \(f_{t}\) is the forget-gate activation, \(i_{t}\) is the input/update-gate activation and \(o_{t}\) is the output-gate activation. \(\sigma_{g}\) is the sigmoid function, and \(\sigma_{c}\) and \(\sigma_{h}\) are hyperbolic tangent activation functions [Hochreiter and Schmidhuber, 1997]. In summary, the T-GCN captures the complex topological structure using the GCN and the temporal structure of the data using the LSTM layers (Zhao et al., 2020). The output of the T-GCN is a set of values for each node in the graph. We combine the T-GCN with the censored likelihood functions from Equation 6 and Equation 10 to compute the loss for each node and sum over the nodes.
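As a small self-contained sketch, the weighted adjacency of Equation 12 and the preprocessing step \(\widehat{A}\) used in Equation 13 can be computed directly from node coordinates; the coordinates below are illustrative placeholders, not our actual cluster centroids.

```python
import numpy as np

EARTH_RADIUS_KM = 6371.0

def haversine_km(z1, z2):
    """Haversine distance in km between two (lat, lon) points in degrees."""
    lat1, lon1, lat2, lon2 = map(np.radians, (z1[0], z1[1], z2[0], z2[1]))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * np.arcsin(np.sqrt(a))

def weighted_adjacency(coords):
    """Adjacency matrix with e_ij = exp(-h(z_i, z_j)), as in Equation (12)."""
    m = len(coords)
    A = np.zeros((m, m))
    for i in range(m):
        for j in range(m):
            if i != j:
                A[i, j] = np.exp(-haversine_km(coords[i], coords[j]))
    return A

# Illustrative node coordinates (lat, lon) around Copenhagen.
coords = [(55.68, 12.57), (55.66, 12.51), (55.70, 12.55)]
A = weighted_adjacency(coords)

# Preprocessing of Equation (13): A_hat = D^{-1/2} (A + I) D^{-1/2}
A_tilde = A + np.eye(len(coords))
d_inv_sqrt = np.diag(1.0 / np.sqrt(A_tilde.sum(axis=1)))
A_hat = d_inv_sqrt @ A_tilde @ d_inv_sqrt
```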
## 4 Counterfactual study

Let us now introduce a _counterfactual_ study based on GPS trajectories of internal combustion engine (ICE) cars in the metropolitan area of Copenhagen, Denmark. These trajectories are the basis for estimating both the observed and unobserved charging demand in the city. We assume that the ICE cars are electric and generate a fleet of EVs, where we model how the state-of-charge (SoC) of the vehicles varies along the GPS trajectories. We include charging dynamics from the public charging stations, where cars can recharge, similar to the approach proposed by [17]. This study provides us with the _true_ unobserved charging demand, and we can assess the censorship levels that occur across space and time. We call it a counterfactual study because we essentially ask the question "what would have happened with charging demand if the ICE cars from the dataset had been EVs instead". We summarise the counterfactual study in Algorithm 1 and cover each step in depth in Appendix A.

### Data

As mentioned, the data consist of GPS trajectories from ICE cars in Denmark, where locations are saved every 20 seconds. Each GPS location contains an ID of the car, and the locations can be stitched together into a complete GPS trajectory, which forms a trip for the car. Each trip contains a start coordinate, an end coordinate, and the distance driven. Cars' parking time can be inferred as the time between trips. To ensure the users' privacy, the trajectories are randomised by adding a noise distance between 50 and 500 metres to each trip's endpoints. The GPS trajectories were collected over three months, from September to November 2019, and are uniformly distributed across the country (Denmark) and vehicle segments. The GPS trajectories do not suffer from the biases identified in other EV studies, namely that early adopters bias the behaviour in the data sets. In total, 32664 cars are observed in the capital region (Copenhagen), which accounts for \(5.71\%\) of the cars in the city (a total of 571627 cars for the period [14]). In the observation period, the penetration rate of EVs in Copenhagen was \(1\%\), and by the fall of 2022 the penetration rate had increased to \(2.5\%\) [14]. We visualise the number of daily trips for the entire period in Figure 2. In total, we have 1.5 million trips with a median of 10 trips per car.

_Charging stations in Copenhagen._ To model the charging infrastructure in Copenhagen, we use the charging infrastructure from 2021. We scraped the infrastructure from uppladdning [2021], an open-source map containing locations of EV charging stations. The scraped chargers have charging power ranging from slow chargers at 3.7 kW to fast chargers at 150 kW. This provides us with the location and power of each charging station in the city, which we use to determine where the ICE cars would have charged, assuming they were EVs.

### Queuing Models

As already explained, we theorise that the observed demand is censored by lost opportunities when charging stations are occupied. We therefore implement three different queues to handle multiple cars wanting to charge, as follows:

* _Gas-station queue_: People queue up as at an ordinary gas station and wait for their turn to charge. The EVs charge to 80% SoC before driving away. This behaviour is primarily associated with fast chargers and is often referred to as a smart queue [12].
* _3-hour queue_: People are only allowed to charge for 3 hours before driving away, regardless of the SoC of the car. Cars that arrive at an occupied station are not charged. This behaviour reflects policies (e.g. Denmark [2]) where there is a (typically 3 hrs) time limit for parking and charging.
* _First come - first serve queue_: The last queue is a first come, first serve queue, where the EV that arrives first charges for the entire duration until its next trip. Cars that arrive at an occupied station are not charged. This corresponds to opportunistic behaviour.

These variations of queues naturally give rise to different levels of censoring. For example, the gas-station queue is able to serve all users, given that they have time to wait. We keep the queue type the same across all charging stations.

### Outline of the counterfactual study

We sample different numbers of cars from the GPS trajectories to match different EV penetration rates. The GPS trajectories correspond to a penetration rate of about 5% of all cars. In 2019, the penetration rate of EVs was roughly 1%, and in 2022 it is roughly 3%. Therefore, we vary the penetration rate between \(1\%\) and \(5\%\) and test the three different queuing models. Using the sampled cars and the queuing model, we model how the cars would have charged had they been EVs. We sample the battery sizes according to the market distribution of different EV types (section A.1) and compute the cars' willingness to charge based on the SoC consumption of a trip. At the end of a trip, the charging station is selected probabilistically from the five closest charging stations. Each car follows the procedure outlined in Algorithm 1; a simplified sketch of this loop is given below.
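Algorithm 1 itself is not reproduced in this version of the text; as a rough illustration only, the per-car loop can be sketched as follows. The interfaces `stations.nearest`, `queue.choose` and `queue.serve`, as well as the 30% willingness-to-charge threshold, the 80% target SoC and the consumption constant, are hypothetical placeholders rather than the study's calibrated choices.

```python
def simulate_car(trips, stations, queue, soc=0.9,
                 battery_kwh=60.0, kwh_per_km=0.18):
    """Illustrative per-car loop of the counterfactual study.

    trips    : iterable of (distance_km, parking_hours, end_location) tuples
    stations : hypothetical lookup object supporting nearest-k queries
    queue    : hypothetical queue model deciding how much energy is delivered
    """
    for distance_km, parking_hours, end_location in trips:
        soc -= distance_km * kwh_per_km / battery_kwh     # SoC spent on the trip
        if soc < 0.3:                                     # assumed willingness to charge
            candidates = stations.nearest(end_location, k=5)
            station = queue.choose(candidates)            # probabilistic choice among 5 closest
            demand_kwh = (0.8 - soc) * battery_kwh        # energy wanted (up to 80% SoC)
            served_kwh = queue.serve(station, demand_kwh, parking_hours)
            station.record(observed=served_kwh,           # what charging records would show
                           censored=demand_kwh - served_kwh)  # demand lost at this stop
            soc += served_kwh / battery_kwh
```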
### Demand Profiles from the counterfactual study

We now turn to the output of the counterfactual study, focusing on the censored part of the charging demand. For each GPS trajectory, we stored the observed and unobserved energy demand, which we use to model the censored demand. We report the fraction of hours where censorship occurs in Table 1, across the queues and penetration rates. Across all the different queues and penetration rates, we observe that demand is censored at different scales. The gas-station queue yields a very low level of censorship due to the nature of its charging, where observations are only censored if the EVs start a new trip while in the queue. Moving to the other queues, we see that the amount of censoring increases. For current penetration levels of EVs, we find that censoring occurs in between 33% and 53% of hours across the entire area. As the penetration rate increases, censorship follows, which is a natural consequence, as the infrastructure would need to expand to handle more demand.

Figure 2: Overview of the total number of daily trips in the dataset. From the 14th to the 20th of October, there is a drop in daily trips because of a holiday week in Denmark, with fewer trips due to reduced commuting traffic.

Based on the study, we find that in the temporal dimension, most censoring occurs during peak demand hours, both in the morning and the afternoon (Figure 3). The peak demand hours reflect the traffic patterns of the original GPS trajectories and are similar to what other studies observe for charging demand in cities (Hüttel et al., 2021). The demand also varies throughout the week, with lower demand on weekends than on ordinary weekdays. Even though the demand varies throughout the week, the amount of censored demand is generally stable: there is no large discrepancy in the proportional censored demand over the different weekdays (Figure 3).

As a final step, we aggregate the demand spatially. We conduct _k_-means clustering of all the charging stations to divide the city into ten regions. Each cluster centre serves as a node of the temporal graph neural network. We do this for modelling purposes, as demand at an individual charging station can be very random and sporadic. Figure 4 shows the spatial overview of the counterfactual study. Based on the counterfactual study, censorship also varies in the spatial dimension. In Table 2, we show the censorship levels for the \(5\%\) penetration of EVs and the _First come - first serve_ queue. In cluster 0 (the western part of Copenhagen), 61% of the hourly charging demand is censored. These large censorship levels stem from only two regional charging stations covering a large spatial area.
Still, these stations do not have sufficient capacity to meet the demand in this area. The area is residential, so most of the demand could be served by home charging, which is not included in our study. In contrast, in cluster 4 (the southern part of Copenhagen), we find lower censorship levels and rarely any demand at night; cluster 4 is an industrial area (Figure 5). We report the censorship across all the different queues, penetration rates and clusters in Appendix B.

\begin{table}
\begin{tabular}{l|r r r r r}
\hline \hline
 & \multicolumn{5}{c}{Penetration rate} \\
Queue & 1\% (Historical) & 2\% & 3\% (Current) & 4\% & 5\% (Data) \\
\hline
Gas-station Queue & \(<\)0.1\% & 2.2\% & 4.9\% & 9.8\% & 16.4\% \\
3 Hour queue & 4.1\% & 17.6\% & 33.2\% & 46.8\% & 56.9\% \\
First come - First serve Queue & 8.3\% & 30.1\% & 53.2\% & 66.2\% & 70.5\% \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Fraction of hours where the observed demand is censored, for the different queuing models.

Figure 3: Average charging demand, grouped by time of day (left) or day of the week (right). Rate: \(5\%\); queue: _First come - First serve_.

In summary, based on GPS trajectories from ICE cars in the fall of 2019, we have created a _counterfactual_ study of the charging behaviour in this period. The study allows us to assess the true _latent_ charging demand, which is unobserved if only the charging records are used. The study did employ some simplifications in the battery and choice modelling assumptions used to generate the demand profiles.

## 5 Experiments and Results

This section introduces our experimental setup. First, we experiment with the censoring schemes from Table 1 and model the demand for the entire city, comparing the censorship-aware models to the unaware ones, before testing the models in a competitive environment where only a fraction of the demand is observed. We denote the models by the likelihood functions used to fit them. We include two censorship-unaware models: a _Gaussian_ model fitted with maximum likelihood and a _quantile regression_ (QR) model with the uncensored tilted loss (Equation 9). We compare them to two censorship-aware models: the _Tobit_ model (Equation 6) and a _censored QR_ model (Equation 10). We keep the same architectural design choices across all the models and use the same random seed for parameter initialisation. Each experiment is conducted ten times, and the reported results are averages across these runs.

Figure 4: Overview of the spatial structure in the counterfactual study. Small circles indicate the ends of the GPS trajectories, circles with a dark outline indicate charging stations, and squares indicate the centroids of the clusters. Colours indicate cluster membership.

### Architectures

For the input size of the temporal signal, we use the last 168 hourly demand observations, which equals one week of data. In addition, we add the type of day and the hour as external input features, encoded into cyclical features using sine and cosine. We combine the cyclical features with the historical demand to create a time series for each node in the graph \(G\). We scale each time series between 0 and 1 to avoid individual nodes contributing a large weight to the loss function during training (Zhao et al., 2020). We use 16 and 8 channels for the graph convolutions and an LSTM with 32 hidden units. We use the Adam optimiser [Kingma and Ba, 2014] with a learning rate of 0.0003 and norm clipping of 1.0. 
We train each model for 1000 epochs with a batch size of 256 and apply an early-stopping criterion of 0.001 on the validation loss. We divide the dataset into train, validation and test sets with a split of 80%, 10% and 10%, respectively. For the quantile regression models, we estimate the 0.05-, 0.5- and 0.95-quantiles. All the models are implemented in Keras [2], and the graph convolutions are implemented using StellarGraph [Data61, 2018].

\begin{table}
\begin{tabular}{l l r}
\hline \hline
Cluster & Location & \(\%\) of censored hours \\
\hline
0 & Western part of the city & \(61.03\%\) \\
1 & South Eastern part & \(20.29\%\) \\
2 & West of the city centre & \(3.12\%\) \\
3 & North Eastern part of the city & \(16.31\%\) \\
4 & Southern part of the city & \(11.28\%\) \\
5 & Western part of the city & \(24.33\%\) \\
6 & North part of the city centre & \(6.57\%\) \\
7 & South Western part of the city & \(20.16\%\) \\
8 & North Western part of the city & \(31.17\%\) \\
9 & City centre & \(16.67\%\) \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Fraction of hours with censored observations at 5% EV penetration and the first-come queue.

Figure 5: Charging demand from the counterfactual study for cluster 0 and cluster 4 in the city.

### Evaluation Metrics

We evaluate our models on the tilted loss they achieve on the test set. For the Gaussian and Tobit models, we compute quantiles from the estimated distributions. In addition, we evaluate our models with the following metrics, commonly used for quantile regression evaluation: Interval Coverage Percentage (ICP) and Mean Interval Length (MIL). We limit the notation to a single node in the graph, with lower and upper quantiles \(\theta<\theta^{\prime}\):

\[\text{ICP}=\frac{1}{N}\sum_{i=1}^{N}\begin{cases}1&\text{if }\hat{q}_{i,\theta}\leq y_{i}\leq\hat{q}_{i,\theta^{\prime}}\\ 0&\text{otherwise}\end{cases} \tag{20}\]

\[\text{MIL}=\frac{1}{N}\sum_{i=1}^{N}\left|\hat{q}_{i,\theta^{\prime}}-\hat{q}_{i,\theta}\right| \tag{21}\]

The desired quality of a probabilistic model is a high ICP (Equation 20), close to 0.9, meaning that 90% of observations fall within the prediction interval, while keeping a low MIL (Equation 21), making the model predictions precise.
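Both metrics are straightforward to compute from the predicted quantile bounds; the NumPy sketch below (with illustrative arrays) mirrors Equations 20 and 21.

```python
import numpy as np

def icp(y, q_lo, q_hi):
    """Interval Coverage Percentage (Equation 20): fraction of observations
    inside the predicted interval [q_lo, q_hi]."""
    return np.mean((q_lo <= y) & (y <= q_hi))

def mil(q_lo, q_hi):
    """Mean Interval Length (Equation 21): average interval width."""
    return np.mean(np.abs(q_hi - q_lo))

# Example with the 0.05- and 0.95-quantile heads of a fitted model:
y = np.array([0.2, 0.9, 0.9, 0.4])
q_lo = np.array([0.1, 0.3, 0.5, 0.2])
q_hi = np.array([0.6, 0.8, 1.0, 0.5])
print(icp(y, q_lo, q_hi), mil(q_lo, q_hi))  # -> 0.75 0.45
```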
### Total demand predictions

First, we evaluate on all charging stations and nodes with the censoring schemes reported in Table 1. For these experiments, we vary the penetration rate of EVs and, as mentioned above, the censorship levels increase as the penetration rate grows and the queuing model changes. We report the tilted loss on the true demand for the test set in Table 3. The quantile-regression-based models (QR and censored QR) generally yield a lower tilted loss on the test set. There is no clear distinction between the censorship-aware and unaware models for the gas-station queue, due to the generally low levels of censorship we observe for this type of queue. However, as we vary the queue and the censorship levels, we find that the discrepancy between the censorship-aware and unaware models increases. In general, the censored QR is the best-performing model for the 3-hour and first-come queues, where the censorship levels increase.

In Figure 6, we depict the MIL and ICP for the models across the different queues and penetration rates. Since the amount of censoring varies between nodes, the ICP is shown for the cluster with the most censoring. The Gaussian models (Gaussian and Tobit) tend to have higher ICP and MIL, meaning they tend to have larger prediction intervals and, as a consequence, a higher ICP than the non-parametric models. The QR models tend to have tighter prediction intervals (lower MIL), with an ICP comparable to the Gaussian models. Notice that the best model is one with an ICP close to 90% and the shortest MIL possible.

\begin{table}
\begin{tabular}{l l|l l l l l}
\hline \hline
Queue & Penetration rate & 1.0\% & 2.0\% & 3.0\% & 4.0\% & 5.0\% \\
\hline
\multirow{4}{*}{Gas-station} & Gaussian & 0.798 & 0.800 & 0.710 & 0.782 & 0.714 \\
 & Tobit & 0.801 & 0.802 & 0.712 & 0.774 & 0.713 \\
 & QR & 0.657 & **0.719** & **0.668** & 0.731 & **0.680** \\
 & Censored QR & **0.655** & 0.720 & 0.670 & **0.731** & 0.681 \\
\hline
\multirow{4}{*}{3-hour} & Gaussian & 0.766 & 0.752 & 0.812 & 0.781 & 0.798 \\
 & Tobit & 0.764 & 0.750 & 0.800 & 0.763 & 0.765 \\
 & QR & 0.644 & 0.678 & 0.756 & 0.740 & 0.763 \\
 & Censored QR & **0.641** & **0.673** & **0.748** & **0.726** & **0.736** \\
\hline
\multirow{4}{*}{First come - First serve} & Gaussian & 0.696 & 0.776 & 0.797 & 0.887 & 0.945 \\
 & Tobit & 0.694 & 0.773 & 0.751 & 0.784 & 0.790 \\
 & QR & 0.577 & 0.697 & 0.734 & 0.841 & 0.894 \\
 & Censored QR & **0.574** & **0.690** & **0.709** & **0.774** & **0.765** \\
\hline \hline
\end{tabular}
\end{table}
Table 3: Tilted loss summed over all the nodes. Lower is better, and the best-performing model is highlighted in bold.

Figure 6: The ICP (left) and MIL (right) across the different penetration rates and queues. In general, the ICP should be close to 0.9 with a low MIL value, meaning the prediction intervals capture the target distribution while remaining precise.

### Application of censored demand modelling

Here we describe some of the benefits of using censorship-aware models compared to censorship-unaware ones and give an example of their influence on future expansion plans. Infrastructure expansion planning is more complex than we present here, but the example highlights the benefits of censorship-aware models. Again, we turn to cluster 0, where a large fraction of observations is censored. We find that, in general, the censored QR fits the actual true demand better than the censorship-unaware QR (Figure 7). The QR model provides a relatively accurate fit of the observed demand and tends to forecast the general temporal patterns adequately. However, the censored QR predicts higher charging demand than the system observes. These elevated predictions indicate that expanding the infrastructure in cluster 0 is potentially viable. A particular benefit of the censorship-aware models is their estimate of the capacity increase needed to serve the true demand. The gap between the predicted demand and the observed censored demand could already serve as an indicator for expansion; moreover, the regressed values and their corresponding uncertainty provide an estimate of the size of the additions required to meet all demand. In conclusion, the censorship-aware models estimate the capacity required to meet demand, which provides a reasonable basis for data-driven decisions. Flexible supply could also serve peak demand hours, given the variation of demand throughout the day.

### Competing Services

We now focus on a scenario with multiple EV charging providers in a competitive environment. 
We frame our experiment and models from the perspective of a single charging service provider; i.e., we only observe a fraction (the market share) of the total charging demand in the city and then try to forecast the true demand for operational expansion. We experiment with different market shares, ranging from 10% up to 95%, where a 10% market share means that 90% of the true demand is unobserved. We make no assumption about the number of competitors in the market and only use data observed by the one provider. We cluster the chargers into graph nodes as described above. We sample the market share of chargers across the entire city and censor all the charging stations accordingly.

We report the tilted loss, ICP, and MIL in Table 4 for the different levels of market share. We find that the censorship-aware models outperform their unaware counterparts. For low market shares, the Tobit model tends to provide a better fit of the demand, but as the market share increases, the censored QR model surpasses it across all the evaluation metrics. Again, the uncertainty reflected in its prediction intervals is tighter than for the Tobit model, while still keeping a high percentage of observations within its forecasts. These results show the benefit of using censorship-aware models in a competitive environment. They provide a good basis for the operation of the charging stations and estimates of the demand the competitors can serve. Naturally, this estimate of competitors' demand provides a competitive edge for future expansion and operation of the infrastructure from a charging station operator's point of view, as the estimated demand lost to competitors can be used for strategic moves [Li et al., 2021].

Figure 7: Demand predictions for cluster 0 for the first-come queue with a 5% penetration rate.

## 6 Discussion

Throughout the analysis, experiments and results, we have argued that the observed charging demand from charging records is censored and therefore does not reflect the true demand for EV charging. To support this hypothesis, we used ICE GPS trajectories and battery models to conduct a counterfactual study. This counterfactual study allows us to investigate how much of the charging demand is censored versus observed. The counterfactual study has the advantage over artificial censoring of the data that the censoring happens as a consequence of the charging infrastructure and not merely as censoring of the dataset, which is common practice in censored modelling (Gammelli et al., 2020; Hüttel et al., 2022). We argue that the censoring from the study is coherent with real-world censoring and provides a more accurate censoring scheme than artificial censoring. Another advantage is the use of ICE GPS trajectories, which are not biased by the behaviour of early EV adopters, as ICE cars are not constrained by charging. Therefore, the GPS trajectories represent the desired behaviour of users when they are not constrained by having to charge their cars. The counterfactual study thus provides some modelling advantages compared to using public charging records. However, it does employ simplistic charging and battery models. A future research direction is to evaluate and incorporate the other key travel metrics influencing EVs' energy consumption, such as speed, acceleration and ambient temperature (McNerney et al., 2017). 
All of these factors lead to variations in the battery consumption for each trip; therefore, we likely underestimate the charging demand, as the SoC of the EVs is likely lower than what we present here (Hipolito et al., 2022). This supports the argument that EV charging demand is censored, and that the censoring is larger than what we observe, emphasising the need for censorship-aware models in EV charging demand modelling.

## 7 Conclusion and future research directions

In summary, we have argued that the charging demand observed in charging records is censored due to opportunities lost to competition or supply limitations. This lost demand is often neglected in EV charging demand modelling, and it impacts the application of machine learning models in infrastructure expansion. The gap between lost and observed demand must be addressed to facilitate efficient and cost-effective expansions. We have presented several methods to account for this censoring when modelling EV charging demand, proposing to model the demand with neural networks, which are flexible and scalable. To evaluate both censorship scenarios, we conducted a counterfactual study of Copenhagen, where ICE cars are assumed to be EVs at different penetration rates, to infer the EV charging demand. For the highest penetration rate with the least effective queue, we find that the demand is censored in up to \(61\%\) of all hours in some areas of the city. Through experiments on this demand, we showed that censorship-aware models are superior to censorship-unaware models in modelling the true latent distribution of charging demand. Across the three different queues, we found that the censored quantile regression model performed best, with a lower tilted loss than the Tobit model across the different experiments. Only for the gas-station queue, where censorship is low, did the censorship-unaware models perform comparably.

\begin{table}
\begin{tabular}{l l|l l l l l}
\hline \hline
 & & \multicolumn{5}{c}{Market share} \\
Metric & Model & \(10\%\) & \(25\%\) & \(50\%\) & \(75\%\) & \(95\%\) \\
\hline
\multirow{4}{*}{TL} & Gaussian & 4.428 & 2.658 & 1.370 & 0.920 & 0.734 \\
 & Tobit & **3.085** & **1.898** & 1.092 & 0.842 & 0.729 \\
 & QR & 4.217 & 2.522 & 1.265 & 0.851 & 0.669 \\
 & Censored QR & 3.248 & 1.904 & **1.060** & **0.790** & **0.671** \\
\hline
\multirow{4}{*}{ICP} & Gaussian & 0.325 & 0.446 & 0.589 & 0.689 & 0.800 \\
 & Tobit & **0.314** & **0.496** & 0.724 & 0.786 & 0.804 \\
 & QR & 0.259 & 0.492 & 0.632 & 0.717 & 0.819 \\
 & Censored QR & 0.355 & 0.564 & **0.679** & **0.760** & **0.809** \\
\hline
\multirow{4}{*}{MIL} & Gaussian & 0.223 & 0.321 & 0.320 & 0.312 & 0.315 \\
 & Tobit & **0.319** & **0.427** & 0.398 & 0.352 & 0.324 \\
 & QR & 0.179 & 0.277 & 0.300 & 0.302 & 0.316 \\
 & Censored QR & 0.302 & 0.375 & **0.347** & **0.331** & **0.320** \\
\hline \hline
\end{tabular}
\end{table}
Table 4: Model performance as we increase the market share. Bold indicates the model with the lowest tilted loss.

We also compared censorship-aware and unaware models in a competitive environment with different market shares, where the censorship-aware models were superior in modelling the demand. In future work, we plan to improve upon the counterfactual study by including more complex choice models with realistic user charging behaviours, as users might not want to queue up for slow charging opportunities. 
We also plan to incorporate other key travel metrics, such as speed, acceleration, road grade and temperature, to estimate the state-of-charge consumption more accurately, and we propose to use empirical charging profiles instead of the piece-wise linear function used in this work. In future work, we plan to evaluate the impact of censored modelling on the operation and expansion of charging stations, with a focus on mobile charging stations.

## 8 Acknowledgements

The research leading to these results has received funding from the Independent Research Fund Denmark (Danmarks Frie Forskningsfond) under grant no. 0217-00065B.
2301.01377
Delayed Development of Cool Plasmas in X-ray Flares from kappa1 Ceti
The Neutron star Interior Composition ExploreR (NICER) X-ray observatory observed two powerful X-ray flares equivalent to superflares from the nearby young solar-like star, kappa1 Ceti, in 2019. NICER follows each flare from the onset through the early decay, collecting over 30 cts s-1 near the peak, enabling a detailed spectral variation study of the flare rise. The flare in September varies quickly in ~800 sec, while the flare in December has a few times longer timescale. In both flares, the hard band (2-4 keV) light curves show typical stellar X-ray flare variations with a rapid rise and slow decay, while the soft X-ray light curves, especially of the September flare, have prolonged flat peaks. The time-resolved spectra require two temperature plasma components at kT ~0.3-1 keV and ~2-4 keV. Both components vary similarly, but the cool component lags by ~200 sec with a 4-6 times smaller emission measure (EM) compared to the hot component. A comparison with hydrodynamic flare loop simulations indicates that the cool component originates from X-ray plasma near the magnetic loop footpoints, which mainly cools via thermal conduction. The time lag represents the travel time of the evaporated gas through the entire flare loop. The cool component has several times smaller EM than its simulated counterpart, suggesting a suppression of conductive cooling possibly by the expansion of the loop cross-sectional area or turbulent fluctuations. The cool component's time lag and small EM ratio provide important constraints on the flare loop geometry.
Kenji Hamaguchi, Jeffrey W. Reep, Vladimir Airapetian, Shin Toriumi, Keith C. Gendreau, Zaven Arzoumanian
2023-01-03T22:15:42Z
http://arxiv.org/abs/2301.01377v1
# Delayed Development of Cool Plasmas in X-ray Flares from \(\kappa^{1}\) Ceti ###### Abstract The Neutron star Interior Composition ExploreR (_NICER_) X-ray observatory observed two powerful X-ray flares equivalent to superflares from the nearby young solar-like star, \(\kappa^{1}\) Ceti, in 2019. _NICER_ follows each flare from the onset through the early decay, collecting over 30 cnts s\({}^{-1}\) near the peak, enabling a detailed spectral variation study of the flare rise. The flare in September varies quickly in \(\sim\)800 sec, while the flare in December has a few times longer timescale. In both flares, the hard band (2\(-\)4 keV) light curves show typical stellar X-ray flare variations with a rapid rise and slow decay, while the soft X-ray light curves, especially of the September flare, have prolonged flat peaks. The time-resolved spectra require two temperature plasma components at _kT_\(\sim\)0.3\(-\)1 keV and \(\sim\)2\(-\)4 keV. Both components vary similarly, but the cool component lags by \(\sim\)200 sec with a 4\(-\)6 times smaller emission measure (_EM_) compared to the hot component. A comparison with hydrodynamic flare loop simulations indicates that the cool component originates from X-ray plasma near the magnetic loop footpoints, which mainly cools via thermal conduction. The time lag represents the travel time of the evaporated gas through the entire flare loop. The cool component has several times smaller _EM_ than its simulated counterpart, suggesting a suppression of conductive cooling possibly by the expansion of the loop cross-sectional area or turbulent fluctuations. The cool component's time lag and small _EM_ ratio provide important constraints on the flare loop geometry. Main sequence stars (1000), Solar analogs (1941), Stellar flares (1603), Stellar x-ray flares (1637) ## 1 Introduction Solar and stellar flares are the most energetic events on low-mass stars (Haisch et al., 1991; Gudel & Naze, 2009). They represent the rapid conversion of magnetic energy of active regions into kinetic and thermal energies, radiating from radio to gamma-rays and ejecting high-energy nuclei and electrons. Powerful solar flares have disrupted the Earth's magnetosphere and human activity, yet flares of young Sun-like stars can far surpass their solar counterparts in energy and frequency, with their enhanced magnetic dynamos driven by rapid rotations and deep convections. Their intense radiation could impact the exoplanetary environment and habitability (e.g., Airapetian et al., 2020). These flares, even with substantial energy variations, share similar behavior and characteristics and arise from the universal magnetic reconnection mechanism. Magnetic reconnection efficiently accelerates particles to high energies (\(\gtrsim\)10 keV); these particles bombard the footpoints of the loops and heat the chromosphere, and the evaporated gas fills the magnetic loop and gradually cools down via radiation. The evaporated gas at \(\approx\)10\({}^{7}\) K radiates primarily in soft X-rays between 0.1\(-\)10 keV (\(\approx\)1\(-\)100 Å). Therefore, soft X-ray observations are crucial in understanding the flare geometry and heating mechanisms.
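(As a quick unit check, for reference: using \(E=hc/\lambda\), \[E\,[{\rm keV}]\approx\frac{12.4}{\lambda\,[{\rm\AA}]},\] so the 0.1\(-\)10 keV band corresponds to roughly 1.2\(-\)124 Å, consistent with the quoted \(\approx\)1\(-\)100 Å.)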
During a typical flare, soft X-ray emission rises quickly as the evaporated gas fills the magnetic loop and decays quasi-exponentially as it gradually cools down radiatively. Earlier studies have focused on the peak and decay phase of flares (White et al., 1986; van den Oord and Mewe, 1989; Reale and Micela, 1998; Tsuboi et al., 1998; Favata et al., 2000; Sasaki et al., 2021). They suggested that powerful flares tend to decay slowly and originate from larger flare loops, which exceed the stellar radius in extreme cases. Direct solar flare imaging, stellar flare occultation observations, or theoretical models support this idea, but the models can significantly overestimate the flare size due to continuous heating, multiple loop structures or subsequent flares during the decay phase (e.g., Toriumi et al., 2017; Schmitt and Favata, 1999; Gudel et al., 2004; Reep and Toriumi, 2017). The rising phase holds crucial information on the flare geometry and heating mechanism (e.g., Reale, 2007) as it goes through initial heating, evaporation, and loop filling. However, the rising phase is often shorter than the decaying phase (e.g., Reep and Knizhnik, 2019; Getman et al., 2021), and so has been mostly limited to duration or crude hardness ratio studies in the soft X-ray band. An exception is an _XMM-Newton_ observation of Proxima Centauri, which caught a bright flare from the onset to the middle of the decay, recording \(\gtrsim\)100 cnts s\({}^{-1}\) near the peak (Gudel et al., 2002, 2004; Reale et al., 2004). The X-ray hardness ratio reached its maximum in the middle of the rise and started to decline near the flux peak. The timing of maximum hardness coincides with the U band (3000\(-\)3900 Å) flux peak measured with the onboard Optical Monitor, suggesting a connection between the coronal and chromospheric heating. The _NICER_ (Neutron star Interior Composition ExploreR) X-ray observatory onboard the International Space Station (ISS) (Gendreau and Arzoumanian, 2017) observed two bright X-ray flares from the nearby solar-like star \(\kappa^{1}\) Ceti (a.k.a. HD 20630, HIP 15457, \(d=\)9.16 pc, mass: 1.04 \(M_{\odot}\), radius: 0.95\(\pm\)0.10 \(R_{\odot}\), effective temperature: 5665 K, Ribas et al., 2010; Rucinski et al., 2004) during a monitoring program for the Sellers Exoplanet Environments Collaboration (SEEC)\({}^{1}\) in 2019. This star shows intense magnetic activity due to its fast stellar rotation (\(P=\)9.2 days), emitting two orders of magnitude more coronal X-rays and chromospheric UV light than the Sun. In 1986, the star showed a signature of a superflare event in the He I D3 (\(\lambda\)5875.6 Å) optical line, with an estimated total flare energy of \(E\approx\)2\(\times\)10\({}^{34}\) ergs (Schaefer et al., 2000). Still, the radiation from the transition region and coronal plasma satisfies a solar magnetic flux scaling law similar to other Sun-like stars (Toriumi and Airapetian, 2022). These characteristics suggest that \(\kappa^{1}\) Ceti is a young solar analog at 0.4\(-\)0.8 Gyr old with enhanced solar-type coronal and chromospheric heating rates (Airapetian et al., 2021). Its global-scale magnetic shear may cause superflares that eject huge masses of coronal material (Lynch et al., 2019).
Footnote 1: [https://seec.gsfc.nasa.gov](https://seec.gsfc.nasa.gov) The _NICER_ X-ray observatory primarily aims at studying rapidly rotating neutron stars with very high timing resolution, but its superb soft X-ray collecting power, wide dynamic range, high throughput and moderate background, decent energy resolution, tolerance to optical light, and rapid maneuvering capability make it a powerful tool for observing nearby solar-type stars with sporadic bright X-ray flares. This manuscript describes the analysis of _NICER_ observations of the two powerful X-ray flares from \(\kappa^{1}\) Ceti and performs hydrodynamic simulations of single loop flares to interpret the observations. The results show in detail how X-ray plasmas develop during the flare rising phase. ## 2 Observation The _NICER_ X-ray Timing Instrument (XTI) is an array of aligned 56 X-ray modules, each of which consists of an X-ray concentrator (XRC, Okajima et al., 2016) and a silicon drift detector (SDD, Prigozhin et al., 2016). Each XRC concentrates X-rays within a \(\sim\)3\({}^{\prime}\) radius field of view to the paired SDD, which detects each photon with a timing accuracy of \(\sim\)84 ns. The XTI as a whole has one of the largest collecting areas among X-ray instruments between 0.2\(-\)12 keV (\(\sim\)1900 cm\({}^{2}\) at 1.5 keV). We use 50 XTI modules as the remaining six (ID: 11, 14, 20, 22, 34, 60) are inactive or noisy. _NICER_ can continuously observe a target up to \(\sim\)2.5 ksec in every ISS orbit (\(\sim\)5.5 ksec). However, target visibility can be limited by Earth or ISS structure occultation, or proximity to high particle regions such as the South Atlantic Anomaly. _NICER_ can quickly slew the telescope and so observes multiple targets in each ISS orbit to maximize the observing efficiency. This capability enables _NICER_ to visit a target frequently, but it can also cause scheduling conflicts with other timely targets. _NICER_ started monitoring \(\kappa^{1}\) Ceti on 2019 September 16; it has observed the star for \(\sim\)180 ksec between 2019 and 2021. During these monitoring observations, _NICER_ detected two prominent X-ray flares on 2019 September 17 and December 10. Earlier X-ray imaging observations did not detect any X-ray sources at significant X-ray brightness within 3\({}^{\prime}\) from \(\kappa^{1}\) Ceti (e.g., Telleschi et al., 2005), indicating that the flares originate from \(\kappa^{1}\) Ceti. We analyze the _NICER_ observations ID: 2300020101, 2300020102, and 2300020114. We reprocess the datasets with the _NICER_ calibration ver. CALDB XTI(20210707), using nicerl2 in HEASoft ver. 6.29c and NICERDAS ver. V008c. We evaluate particle background using nibackgen3C50 ver. v7b with the parameters dtmin=10.0, dtmax=60.0, hbgcut=0.1, s0cut=30.0 (Remillard et al., 2021). We use python ver. 3.7, numpy ver. 1.20.3, scipy ver. 1.1.0 and astropy ver. 3.1. ## 3 Observation Results ### Light Curves The first flare occurred on 2019 September 17, during the last snapshot of the one-day observation of \(\kappa^{1}\) Ceti from September 16 (190917, Figure 1_top_). The snapshot only lasts for \(\sim\)800 sec, but it covers the rise and beginning of the decay of the flare as the flare varies very quickly.
The Bayesian block analysis tool, bayesian_blocks in the astropy package (Scargle et al., 2013) -- nonparametric statistical analysis to detect significant flux change in time-series data -- does not suggest that the 0.6\(-\)1.2 keV count rate changes from the previous snapshot to the first 100 sec of this snapshot. This result suggests that the flare begins around the boundary of the last snapshot's second and third time bins, 2019 September 17 at 9\({}^{h}\) 36\({}^{m}\) 52\({}^{s}\) UT. Figure 1: Background subtracted light curves of \(\kappa^{1}\) Ceti between 0.6\(-\)1.2 keV on 2019 September 16\(-\)17 (_top_) and 2019 December 10 (_bottom_). _NICER_ opportunely caught the rising phase of a bright X-ray flare during each observation. The grey and yellow lines show the background level estimated with the _NICER_ tool nibackgen3C50 and an average count of each Bayesian block derived with the astropy tool bayesian_blocks. The time on the _top_ shows the light curve origin, an estimated flare onset. The color bars present quiescent (_magenta_), preflare (_red_), flare rise & peak (_blue_) and flare decay end (_cyan_) intervals for the spectral analysis. Each data bin is 50 sec. Figure 2: _1st\(-\)5th rows_: band-sliced light curves of \(\kappa^{1}\) Ceti around the first flare on 2019 September 17 (_left_) and the second flare on December 10 (_right_). These light curves have the same horizontal scales, so the variation timescales are directly comparable. The soft band light curves on the upper panels have delayed peaks compared with the 2\(-\)4 keV light curve. The grey line shows the instrumental background level. Each light curve has 50 sec bins. _Bottom row_: time-series of the hardness ratio defined by \((H-S)/(H+S)\) where \(H\) and \(S\) are the net count rates between 1.2\(-\)4 keV and 0.3\(-\)1.2 keV, respectively. The color bars present preflare (_red_), flare rise & peak (_blue_) and flare decay end (_cyan_) intervals for the spectral analysis. The first bin of the September flare is below \(-\)1 as nibackgen3C50 overestimates the hard band background. In each flare, the hardness ratio peaks before the total (0.3\(-\)4 keV) band count rate maximum. Figure 2_left_ shows band-sliced light curves of the last snapshot. The 2\(-\)4 keV light curve is typical of solar and stellar X-ray flares (Benz & Gudel, 2010, and references therein), showing a sharp rise in \(\sim\)200 sec and a steady decline by a factor of 3 in \(\sim\)500 sec after the peak. However, the softer band light curves rise more slowly, peak later, and decay more gradually. The light curves below 1.2 keV may not even show a decay during the snapshot. The soft band light curves significantly deviate from the hard band light curves. The hardness ratio in the bottom panel rises quickly, peaks before the total (0.3\(-\)4 keV) light curve, and declines gradually. This behavior is similar to the giant flare seen from Proxima Centauri with _XMM-Newton_(Gudel et al., 2004). The second flare occurred during the second snapshot on 2019 December 10 (191210, Figure 1_bottom_). This observation (for \(\sim\)1.6 ksec) is longer than for the first flare, but it similarly covers the rise and beginning of the decay as the second flare develops more slowly. _NICER_ misses the middle of the decay, but the third snapshot covers the end of the decay as the light curve connects smoothly from the second snapshot. 
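For concreteness, the Bayesian-block change-point analysis described above can be reproduced along the following lines; the bin values below are illustrative placeholders, not the actual light curve:

```python
import numpy as np
from astropy.stats import bayesian_blocks

# Stand-in for a background-subtracted light curve in 50-sec bins:
# a flat quiescent level followed by a step-like flare onset.
t = np.arange(0.0, 800.0, 50.0)
rate = np.where(t < 400.0, 6.6, 20.0)
rate_err = np.full_like(rate, 0.5)

# 'measures' is the fitness function for point measurements with
# errors; the returned edges mark statistically significant changes.
edges = bayesian_blocks(t, rate, rate_err, fitness='measures')
print(edges)
```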
On the other hand, the first snapshot shows a slight elevation in the middle, but both intervals appear to be in a quiescent state without significant variation. The light curve before the first flare also shows similar count rate variations (Figure 1_top_). A Bayesian block analysis shows that the 0.6\(-\)1.2 keV count rate stays at \(\sim\)6.6 cnts s\({}^{-1}\) from the latter half of the first snapshot to the first \(\sim\)150 sec of the second snapshot, suggesting that the flare begins during the second snapshot at around 19\({}^{h}\) 14\({}^{m}\) 40\({}^{s}\) UT. Meanwhile, the last Bayesian block of the third snapshot has almost the same count rate (\(\sim\)6.8 cnts s\({}^{-1}\)), suggesting that the quiescent emission does not vary significantly during the flare. Besides the slow variation, the second flare has similar energy-dependent variations to the first flare. The 2\(-\)4 keV light curve reaches its peak before other energy bands. The softer band peaks are delayed: the 0.3\(-\)0.6 keV light curve does not show a clear peak during the second snapshot. This delay continues to the third snapshot. The softer band light curves gradually decline, while the 2\(-\)4 keV light curve is almost flat. The hardness ratio reaches a maximum early during the rise, but otherwise the variation is similar to the first flare. ### Time Resolved Spectra To understand the energy-dependent time variations, we analyze the time-resolved spectra. We first produce a quiescent spectrum from the snapshot directly preceding each flare (the _magenta_ bar in Figure 1). The snapshot of the September flare shows no significant variation in count rates, while that in December does show a small but significant count rate increase in the middle at \(-\)4.7 ksec (Figure 1_bottom_). Therefore, we produce two spectra separated at the time of the statistically significant count-rate change (change point) in the Bayesian analysis (Figure 3). The quiescent spectra show a prominent hump between 0.7\(-\)0.9 keV, consistent with emission lines of Fe XVII-XX, Ne IX-X, and O VIII ions seen in the _XMM-Newton_ RGS spectra of \(\kappa^{1}\) Ceti in 2002 (Telleschi et al., 2005). The spectrum has a steep hard slope, with negligible emission above \(\sim\)2 keV, but no absorption cut-off in the soft band down to 0.3 keV. We, thus, apply a thermal plasma emission model (apec) without absorption (Table 1). Assuming the best-fit elemental abundances of the XMM/RGS spectrum (Appendix A), we find that a 2-temperature (2\(T\)) model with _kT_ = 0.27 & 0.62 keV provides an acceptable fit to the quiescent spectrum for the September flare. The quiescent spectra of the December flare require a 3\(T\) model for the former interval and an additional 1\(T\) component for the latter interval to account for the flux increase. We use this 4\(T\) model as the fixed quiescent component for the December flare spectra. We produce time-resolved spectra during the flares, every 50 sec with a minimum of 25 counts per bin for the September flare to track the fast variation and every 100 sec with a minimum of 50 counts per bin for the December flare to get good photon statistics (Figures 4, 5). For each flare, we make one spectrum before the flare onset in the same snapshot (the red bars in Figure 2 between \(-\)95 and \(-\)35 sec for the first flare and between \(-\)80 and 0 sec for the second flare), which we call the preflare spectrum. In the third snapshot of the December flare, due to a decreased count rate, we increase the time interval of each bin to 400\(-\)600 sec.
We also apply longer time intervals for the third snapshot of the December flare near the decay end. The preflare spectrum at the top left panel of each figure matches well the corresponding quiescent spectrum in the solid yellow line, consistent with the Bayesian block analysis of the flare onset timings. During the first flare, the flux in the entire energy band increases for the first 300 sec. The 0.7\(-\)0.9 keV hump becomes broader to the high energy side as the flux at \(\sim\)1.1 keV, probably originating from Fe XXII-XXIV or Ne X emission lines, is enhanced. After that, the hard band emission gradually declines, while the flux at \(\sim\)0.9 keV, probably produced by Fe XVII-XX lines, strengthens. The second flare evolves more slowly but similarly to the first flare. The whole band increases until \(\sim\)800 sec, and then the hump at \(\sim\)0.9 keV begins to strengthen. In the third snapshot, the flux declines nearly to the preflare level with some residual in the soft and hard bands in the prior \(\sim\)1 ksec. ### kT and EM Variations during the Flares The time-resolved spectra do not suggest any significant variation of the quiescent component during the flares. We, therefore, reproduce each time-resolved spectrum by a model with variable flare components plus the fixed quiescent component. The hard slopes of most spectra require a \(kT\sim\)3 keV plasma. However, collisional equilibrium plasma at that temperature does not emit Fe XVII-XX emission lines at \(\sim\)0.9 keV, which are enhanced near the flare peaks. Non-equilibrium ionization plasmas emit these lines at \(\tau\sim\)2\(\times\)10\({}^{10}\) s cm\({}^{-3}\), but they do not reproduce the emission lines at \(\sim\)1.1 keV observed during the rising phase. This result suggests that the flare spectra need at least two plasma components. We, thus, apply a \(2T\) apec plasma model for the flare emission, with elemental abundances fixed to the best-fit _XMM_/RGS values. We find that a model with \(kT\sim\)2\(-\)4 keV and 0.3\(-\)1 keV components reproduces each spectrum well and that these temperatures vary monotonically over time. However, the spectral parameters are poorly constrained near the flare onset and end due to weak flare emission. We, therefore, fit all spectra simultaneously, assuming that each component's plasma temperature varies linearly with time, and find reasonable results with \(\chi^{2}/d.o.f.\) at 685.96/616 for the first flare and 1118.41/919 for the second flare. Figures 4 & 5 plot the best-fit model for individual spectra and Figure 6 shows the best-fit _kT_ and _EM_ values. The third and fourth columns in Table 2 show the best-fit \(kT\) slopes. \begin{table} \begin{tabular}{c c c c c} \hline \hline Component & \multicolumn{2}{c}{190917} & \multicolumn{2}{c}{191210} \\ & \(kT\) & _EM_ & \(kT\) & _EM_ \\ & (keV) & (10\({}^{51}\) cm\({}^{-3}\)) & (keV) & (10\({}^{51}\) cm\({}^{-3}\)) \\ \hline 1 & 0.27\({}^{+0.03}_{-0.03}\) & 2.4\({}^{+0.4}_{-0.4}\) & 0.34\({}^{+0.02}_{-0.03}\) & 3.9\({}^{+0.4}_{-0.6}\) \\ 2 & 0.62\({}^{+0.04}_{-0.03}\) & 1.9\({}^{+0.3}_{-0.4}\) & 0.73\({}^{+0.06}_{-0.06}\) & 1.9\({}^{+0.5}_{-0.4}\) \\ 3 & & & 1.98\({}^{+0.87}_{-0.38}\) & 1.2\({}^{+0.2}_{-0.5}\) \\ 4 & & & 1.04\({}^{+0.29}_{-0.20}\) & 0.5\({}^{+1.1}_{-0.2}\) \\ \hline \(\Delta\chi^{2}\)/d.o.f & 55.44/64 & \multicolumn{2}{c}{162.06/175} \\ \hline \end{tabular} Note. – The errors show 90% confidence ranges. The 4th component is required for the latter interval spectrum of the 191210 observation. 
\end{table} Table 1: Best-fit Values of the Quiescent Spectra Figure 3: Quiescent spectra of the first flare for an exposure of 870 sec from \(-\)16921 sec to \(-\)16051 sec (_left_) and the second flare for 789 sec from \(-\)4700 sec to \(-\)3911 sec (_right_). The solid yellow lines are the best-fit models, and the dotted lines are the individual components. The upper right corner of each panel shows the best-fit parameters (the units are keV for _kT_ and cm\({}^{-3}\) for _EM_). The best-fit models reproduce the observed spectra well. The cool component _EM_s are mainly determined by fits of the Fe L emission line complex to the \(\sim\)0.9 keV excess. We examine whether the line intensity constraints in the applied model are adequate for this analysis. First, the model fixes the elemental abundances during the flares at the best-fit _XMM_/RGS quiescent spectrum values. However, some solar or stellar X-ray flares show apparent elemental abundance changes from the pre- or post-flare states (e.g., Osten et al., 2000; Audard et al., 2001; Mondal et al., 2021). The best-fit spectral models also fit the \(\sim\)1.1 keV bump well with the Fe L and Ne K lines in the hot component. Since the hard spectral slope determines the hot component's _EM_, the hot component's Fe and Ne abundances are consistent with the assumed abundances, i.e., the coronal abundances are not observed to significantly change during the \(\kappa^{1}\) Ceti flares. Second, suppose the cool component did not reach equilibrium at ionization timescales of \(\lesssim\)10\({}^{10}\) s cm\({}^{-3}\) as opposed to the model assumption. In that case, the plasma should emit weaker Fe L lines than the equilibrium case and require a larger _EM_ to account for the observed 1.1 keV bump. However, no observed spectra show strong emission below \(\sim\)0.7 keV expected from low-ionization oxygen and carbon emission lines from such non-equilibrium plasmas. In addition, preflare loops probably have densities over \(\sim\)10\({}^{11}\) cm\({}^{-3}\)(e.g., Osten et al., 2006), suggesting that the Fe L line complex develops within \(\approx\)0.1 sec. These results suggest that the cool component _EM_ measurements are robust. The hot component explains most of the initial flux rise in each flare. The component in the second flare is slightly hotter and cools down significantly slower than the first flare, but the component stays \(kT\gtrsim\)2 keV throughout both observations. Figure 4: Time-resolved spectra of \(\kappa^{1}\) Ceti during the first flare (190917). The red/blue line depicts the best-fit cool/hot component of the flare spectrum, and the yellow line does the fixed quiescent component. The solid black line is their sum. The hot component soars in the rising phase, dominating most energy bands, while the cool component is more significant at \(\sim\)0.9 keV with Fe L emission lines after \(\sim\)350 sec. The top right of each panel shows each spectrum’s time interval in seconds. On the other hand, the cool component develops more slowly than the hot component. The plasma temperature does not vary strongly at \(\sim\)1 keV around the flare peak, but it declines to \(\sim\)0.3 keV by the end of the second flare. The _EM_ time series in the Figure 6 _bottom_ panels confirm the similarity of the two flares: i) the hot _EM_ varies with a linear rise and a slow decay, ii) the cool _EM_ varies similarly to the hot _EM_ but with a delay.
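A minimal PyXSPEC sketch of one such time-resolved fit (the file name is a placeholder, and the bookkeeping of the frozen quiescent parameters is simplified relative to the actual analysis):

```python
from xspec import AllData, Fit, Model

# Load one time-resolved spectrum (placeholder file name).
AllData("flare_slice.pha")

# Two frozen quiescent apec components plus two variable flare
# components; repeated components are named apec, apec_2, ...
m = Model("apec + apec + apec + apec")
for comp in (m.apec, m.apec_2):      # quiescent components
    comp.kT.frozen = True
    comp.norm.frozen = True

Fit.statMethod = "chi"
Fit.perform()
```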
To quantitatively evaluate their variations, we fit the _EM_ time series with the following conventional formula for stellar flares. \[\begin{split} EM(t)&=0\qquad\qquad\qquad\qquad\qquad t<t_{\rm onset}\\ &=EM_{\rm peak}\,\frac{t-t_{\rm onset}}{\Delta t_{\rm rise}}\qquad\qquad\quad t_{\rm onset}\leq t<t_{\rm peak}\\ &=EM_{\rm peak}\,\exp\!\left(-\frac{t-t_{\rm peak}}{\tau_{\rm decay}}\right)\quad t_{\rm peak}\leq t\end{split} \tag{1}\] where \(t_{\rm onset}\), \(t_{\rm peak}\), \(EM_{\rm peak}\) and \(\tau_{\rm decay}\) are free parameters and \(\Delta t_{\rm rise}=t_{\rm peak}-t_{\rm onset}\). Figure 5: Time-sliced spectra of \(\kappa^{1}\) Ceti during the second flare (191210). The flare spectral components behave similarly to those of the first flare. The cool component exceeds the hot component at \(\sim\)0.9 keV after \(\sim\)1 ksec but not so much as during the first flare. For the fittings, we use curve_fit in the scipy package. We fix \(\Delta t_{\rm rise}\) and \(\tau_{\rm decay}\) of the 190917 flare's cool component at the best-fit values of the hot component as the cool component does not show a clear _EM_ peak. Table 2 shows the best-fit result. The hot component's \(t_{\rm onset}\) is close to zero, again consistent with the Bayesian blocks measurement of the flare onset in each flare. In contrast, the cool component's \(t_{\rm onset}\) is significantly delayed from the hot component's \(t_{\rm onset}\) (hereafter \(\Delta t_{\rm delay}=t_{\rm onset}({\rm cool})-t_{\rm onset}({\rm hot})\)). In the second flare, the cool component has similar \(\Delta t_{\rm rise}\) to, but a factor of two longer \(\tau_{\rm decay}\) than, the hot component. The second flare has longer durations in \(\Delta t_{\rm delay}\), \(\Delta t_{\rm rise}\), and \(\tau_{\rm decay}\) than the first flare. These behaviors explain the energy-dependent variations of the light curves. The 2\(-\)4 keV band light curve is dominated by the hot component's behavior, showing a conventional stellar flare variation. The softer bands add the cool component's behavior with a conventional flare variation but a time delay compared to the hot component. The 0.6\(-\)1.2 keV light curve deviates most with the strong 0.9 keV hump from the cool component. The deviation is stronger in the first flare with a larger relative time delay (\(\Delta t_{\rm delay}/\Delta t_{\rm rise}\)) and a larger _EM\({}_{\rm peak}\)_ ratio (_EM\({}_{\rm peak}\)_(cool)/_EM\({}_{\rm peak}\)_(hot)). ## 4 Hydrodynamic Simulations of Single Loop Flares The hot component constitutes the major part of the flare emission. As discussed in numerous studies, it should originate from radiatively-cooling plasma inside the flare magnetic loops. Then, what is the cool component? Flare spectral fits often require two temperatures or more (e.g., Sasaki et al., 2021; Paudel et al., 2021), but the nature of the cool component is poorly known. Our _NICER_ study provides this component's time variation through the flare rise. We run hydrodynamic simulations of single magnetic loop flares to understand the cool component. We employ a field-aligned hydrodynamic code, the HYDrodynamics and RADiation code (HYDRAD\({}^{2}\); Bradshaw and Mason, 2003), used to study heating in the solar corona and solar flares. The code solves the equations for the conservation of mass, momentum, and energy for plasma confined to a magnetic flux tube (Bradshaw and Cargill, 2013).
The loops are assumed to be heated by non-thermal electrons, accelerated by magnetic reconnection near the loop's apex. As the electrons propagate, they deposit their energy through Coulomb collisions with the ambient plasma. The majority of the heat is deposited in the upper chromosphere, causing a rapid increase in temperature and pressure. It then drives an expansion of material (chromospheric evaporation), carrying hot and dense plasma into the corona. The assumed form of the heating function that we use was derived by Emslie (1978), with modification for non-uniform ionization in the chromosphere (Hawley and Fisher, 1994). As the loop evolves, the plasma cools through thermal conduction and radiation, which we calculated using the atomic database CHIANTI (Dere et al., 1997), version 10 (Del Zanna et al., 2021). We use the elemental abundances derived from the _XMM_/RGS spectra (Table A1), but our preliminary study using solar elemental abundances provides a similar result. Our simulations assume a magnetic loop with a uniform cross-section and injected particles with a power-law energy distribution for 200 sec with an energy flux peaking at 10\({}^{11.5}\) ergs cm\({}^{-2}\) s\({}^{-1}\) at 100 sec. Since the two _NICER_ flares have different flare decay timescales and plausibly different magnetic loop lengths (e.g., Toriumi et al., 2017; Reep and Toriumi, 2017), the simulations consider loop lengths at 5, 10, 15, 20, and 25\(\times 10^{9}\) cm. The derived _EM_ normalization can be adjusted by changing the cross-section of the magnetic loops (equivalently, the total volume of the loops). Footnote 2: [https://github.com/rice-solar-physics/HYDRAD](https://github.com/rice-solar-physics/HYDRAD) The left panel of Figure 7 shows the _EM_ evolution of the 10\(\times 10^{9}\) cm flare loop simulation. The _EM_ is dominated by the hottest plasma emission that peaks near \(\sim\)600 sec. This component represents a radiatively cooling, evaporated plasma that fills the magnetic loop, corresponding to the hot component of the observed flares. Since the evaporated plasma cools down gradually under thermal equilibrium, a single temperature bucket dominates near its peak and drops to zero quickly once the plasma cools. A secondary component is a group of low-temperature buckets that rises and falls similarly to the main component at one-third the _EM_ of the evaporated plasma component. Each temperature bucket stays in this group until the evaporated plasma cools down to its temperature range. This secondary component represents plasmas at transition regions near the magnetic footpoints. Because the conductive heat flows from the looptop to the footpoints, the plasma has a strong temperature gradient and responds to the evaporated plasma's variation. The other loop length simulations show similar _EM_ variations with different time scales. In the first 500 sec, the log \(T\geq\)7.3 (K) buckets only reflect the evaporated plasma component, while the log \(T<\)7.0 (K) buckets reflect the footpoint plasma component. We, therefore, define two temperature ranges, log \(T=\)7.3\(-\)8.0 (K) and 6.6\(-\)6.9 (K), and sum up _EM_s within each range to understand their behaviors near the rising phase (Figure 7_right_). First, the _EM_[6.6\(-\)6.9] time series of various loop lengths vary similarly for \(\sim\)200 sec from the beginning.
This _EM_ base originates from the initial heating of the plasma in the upper chromosphere by the injected particles, and peaks at \(\sim\)100 sec in response to the assumed particle injection flux. The _EM_[7.3\(-\)8.0] does not show this component clearly, but the slow rise in the first \(\sim\)50 sec originates from the initial plasma heating. All _EM_[6.6\(-\)6.9] plots except the 5\(\times\)10\({}^{9}\) cm simulation show the footpoint components' onsets as clear kinks (e.g., at \(\sim\)150 sec for the 10\(\times\)10\({}^{9}\) cm simulation). We measure the timing of each kink from a two-segment linear fit (\(t_{\rm onset}\)(foot) in Table 3). The onset ranges between \(\sim\)80\(-\)300 sec, and longer loops have later onsets. The evaporated component does not have a clear onset signature, so we measure the onset timing from a fit to the first 200 sec of _EM_[7.3\(-\)8.0] by a linear function starting at \(t_{\rm onset}\)(eva). The onset ranges between 17\(-\)64 sec and does not appear to correlate with the loop length. \begin{table} \begin{tabular}{l c c c c c c c c c c} \hline \hline \multicolumn{1}{c}{ Flare} & Comp. & \(kT(t=0)\) & \(kT_{\rm slope}\) & \(t_{\rm onset}\) & \(\Delta t_{\rm rise}\) & \(\tau_{\rm decay}\) & _EM\({}_{\rm peak}\)_ & \(L_{\rm X}\)peak & \(E_{\rm X}\) & \(E_{\rm bol}\) \\ & & (keV) & (keV/ksec) & (sec) & (sec) & (sec) & (10\({}^{52}\) cm\({}^{-3}\)) & (10\({}^{29}\) ergs s\({}^{-1}\)) & (10\({}^{32}\) ergs) & (10\({}^{33}\) ergs) \\ \hline 190917 & cool & \(1.00^{+0.11}_{-0.13}\) & \(-0.18^{+0.25}_{-0.24}\) & 156 (20) & 218 (fix) & 572 (fix) & 0.71 (0.061) & 1.2 & 0.79 & 2.0/3.2 \\ & hot & \(2.86^{+0.51}_{-0.46}\) & \(-1.6^{+0.96}_{-0.95}\) & 14 (7) & 218 (15) & 572 (86) & 2.6 (0.13) & 3.2 & 2.2 & \\ 191210 & cool & \(1.02^{+0.03}_{-0.04}\) & \(-0.13^{+0.02}_{-0.01}\) & 227 (35) & 923 (99) & 2591 (324) & 0.45 (0.032) & 0.75 & 2.3 & 6.6/4.0 \\ & hot & \(3.65^{+0.24}_{-0.25}\) & \(-0.32^{+0.16}_{-0.09}\) & 33 (37) & 1003 (59) & 1135 (111) & 2.8 (0.12) & 3.9 & 6.4 & \\ \hline \end{tabular} Note. – \(kT(t=0)\), \(kT_{\rm slope}\): Best-fit \(kT\) linear time variation model of the combined spectral fits. The errors show 90% confidence ranges. \(t_{\rm onset}\), \(\Delta t_{\rm rise}\), \(\tau_{\rm decay}\), _EM\({}_{\rm peak}\)_: Best-fit linear rise plus exponential decay model of the _EM_ time series. The parentheses show 1\(\sigma\) confidence ranges. \(L_{\rm X}\)peak: Peak X-ray luminosity between 0.3\(-\)10 keV. \(E_{\rm X}\): Total X-ray flare energy between 0.3\(-\)10 keV. \(E_{\rm bol}\): Total bolometric flare energy. The left values use a relation to the GOES band (1.55-12.4 keV) flare-radiated energy for active stars (Table 2 in Osten & Wolk 2015). The right values use a relation to the GOES band solar flare peak flux (equation (13) in Aschwanden et al. 2017). \end{table} Table 2: Flare Parameters Figure 6: Best-fit \(kT\) (_top_) and _EM_ (_bottom_) values of the time-resolved flare spectra by the 2\(T\) apec models (_left_: 190917, _right_: 191210). The red/blue color shows the cool/hot plasma component. In the _kT_ plots, the solid lines and filled areas are the best-fit _kT_ linear models and the 90% confidence areas. In the _EM_ plots, the data points show the best-fit values and their 90% confidence ranges of the combined spectral fits. The solid lines show the best-fit linear rise plus exponential decay model to these _EM_ measurements. In each flare, the cool component rises similarly to the hot component but with a time delay.
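A sketch of this linear-rise plus exponential-decay fit (equation 1) with scipy's curve_fit, run here on synthetic numbers rather than the measured _EM_ values:

```python
import numpy as np
from scipy.optimize import curve_fit

def em_model(t, t_onset, dt_rise, em_peak, tau_decay):
    """Equation (1): zero, then a linear rise, then exponential decay."""
    t_peak = t_onset + dt_rise
    rise = em_peak * (t - t_onset) / dt_rise
    decay = em_peak * np.exp(-(t - t_peak) / tau_decay)
    return np.where(t < t_onset, 0.0, np.where(t < t_peak, rise, decay))

# Synthetic EM time series loosely shaped like the 190917 hot component
t_obs = np.linspace(0.0, 1500.0, 30)
em_obs = em_model(t_obs, 15.0, 220.0, 2.6, 570.0)
em_obs = em_obs + 0.05 * np.random.default_rng(0).normal(size=t_obs.size)

popt, pcov = curve_fit(em_model, t_obs, em_obs, p0=[0.0, 200.0, 2.0, 500.0])
print(popt)  # recovered t_onset, dt_rise, em_peak, tau_decay
```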
\begin{table} \begin{tabular}{c c c c c c c c} \hline \hline Loop Length & \(t_{\rm onset}\)(eva) & \(t_{\rm onset}\)(foot) & \(\Delta t_{\rm delay}\) & \(\Delta t_{\rm rise}\)(eva) & \(\tau_{\rm decay}\)(eva) & _EM\({}_{\rm peak}\)_ ratio & _kT\({}_{\rm peak}\)(cool/hot)_ \\ (\(10^{9}\) cm) & (sec) & (sec) & (sec) & (sec) & (sec) & & (keV/keV) \\ \hline 5 & 63 & 77 & 15 & 330 & 760 & 1.21 & 0.32/1.40 \\ 10 & 64 & 153 & 89 & 491 & 1207 & 1.04 & 0.31/1.67 \\ 15 & 40 & 205 & 164 & 496 & 2268 & 0.78 & 0.32/1.66 \\ 20 & 25 & 248 & 223 & 639 & 3013 & 0.65 & 0.34/1.57 \\ 25 & 17 & 294 & 276 & 752 & 3601 & 0.60 & 0.35/1.57 \\ \hline 190917 & & & 142 & 218 & 572 & 0.27 & 0.96/2.50 \\ 191210 & & & 194 & 1003 & 1135 & 0.16 & 0.88/3.31 \\ \hline \end{tabular} Note. – In each simulation, the time origin is the particle injection start. \(t_{\rm onset}\)(eva)/\(t_{\rm onset}\)(foot): onset time of the evaporated/footpoint component derived from one/two linear fits to the _EM_[7.3\(-\)8.0]/_EM_[6.6\(-\)6.9] time series. \(\Delta t_{\rm delay}=t_{\rm onset}\)(foot) \(-\)\(t_{\rm onset}\)(eva). \(\Delta t_{\rm rise}\)(eva)/\(\tau_{\rm decay}\)(eva): rise/decay time of the evaporated component derived from a fit to the peak _EMs_ of individual temperature buckets by a linear rise plus exponential decay model. _EM\({}_{\rm peak}\)_ ratio, _kT\({}_{\rm peak}\)_(cool/hot): _EM\({}_{\rm peak}\)_ ratio and plasma temperatures at the cool/hot _EM_ peaks, derived from fits to the synthetic flare spectra with 100 sec bins by a 2\(T\) apec model. \end{table} Table 3: HYDRAD Simulation Result Figure 7: _Left_: Whole loop _EM_ variations of the \(10\times 10^{9}\) cm loop simulation. The _EM_s are divided by logarithmic temperature buckets and binned every 30 seconds. The plot shows two _EM_ components — the evaporated plasma that envelopes individual temperature peaks standing up from the hot side, getting a maximum at \(\sim\)600 sec, and the footpoint plasma, a group of low-temperature buckets that rises and falls similarly at one-third of the evaporated plasma component. _Right_: Whole loop _EM_ variations for the first 500 sec, summed over the temperature ranges log \(T\) =7.3\(-\)8.0 K (_blue_) and 6.6\(-\)6.9 K (_red_). The number that labels each line is the loop length in \(10^{9}\) cm. The evaporated component starts to rise at \(\sim\)50 sec, while the footpoint component is significantly delayed from the evaporated component, more with longer loops. The overlapping triangular base in the red plot up to \(\sim\)250 sec originates from the initial heating. The time lags \(\Delta t_{\rm delay}\) (=\(t_{\rm onset}\)(foot) \(-\)\(t_{\rm onset}\)(eva)) clearly increase with longer loops. Figure 8 shows why the footpoint component is delayed. In the plots, the evaporated component is located in the middle part where temperature and density quickly increase after the particle injection. The footpoint component is located near both ends, whose temperature and density do not increase until the shocks produced by the evaporated gas's collision at the looptop propagate down to the footpoints. Figure 9 displays the hydrogen and electron density product and the electron temperature, magnifying the left end on a logarithmic scale. Both the log \(T\) =6.6\(-\)6.9 K depth and density product increase between 140 sec and 220 sec. The time delay corresponds to the travel time of the evaporated flows and the collisional shocks through the flare loop, which is approximately the sound crossing time.
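A back-of-the-envelope check of this picture (our own numbers, assuming a fully ionized plasma with mean molecular weight \(\mu\approx 0.6\)):

```python
import numpy as np

kB, mH = 1.380649e-16, 1.6726e-24   # Boltzmann constant, H mass (cgs)
T = 3.5e7                            # ~3 keV evaporated plasma [K]
gamma, mu = 5.0 / 3.0, 0.6           # adiabatic index, mean mol. weight

c_s = np.sqrt(gamma * kB * T / (mu * mH))  # sound speed [cm/s]
delay = 200.0                               # observed cool-component lag [s]
print(f"c_s ~ {c_s:.1e} cm/s, L ~ c_s * delay ~ {c_s * delay:.1e} cm")
# -> of order 1e10 cm, the scale of the loop lengths discussed below
```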
It is, therefore, roughly proportional to the loop length. We also measure the decay timescales of the simulated flares. As described above, the evaporated plasma is in a single temperature bucket at each temperature peak. We, thus, take the peak _EM_ of each temperature bracket and fit them by a linear plus exponential decay model from equation (1). The fits reproduce the _EM_ variations well except around the peak (Figure 10 for the 10\(\times\)10\({}^{9}\) cm loop simulation). Longer loop flares have longer decay timescales (\(\tau_{\rm decay}\)(eva) in Table 3), as suggested in earlier studies (e.g., van den Oord & Mewe, 1989; Toriumi et al., 2017). We make synthetic _NICER_ spectra of the simulated _EM_ distributions to compare the spectral properties. For each simulation, we produce spectral models with 100 sec bins, assuming an apec plasma model for each temperature bucket. We normalize them to a peak 0.3\(-\)2 keV flux at 2.2\(\times\)10\({}^{-11}\) ergs cm\({}^{-2}\) s\({}^{-1}\) to match the two observed _NICER_ flares. We then generate a synthetic spectrum for each spectral model with the xspec fakeit tool, by convolving the model with the _NICER_ on-axis responses, nixtiref20170601v003.rmf and nixtiaveonaxis20170601v005.arf. We increase photon statistics by a factor of 10 to reduce statistical uncertainty, equivalent to a 1 ksec exposure. We, then, bin each synthesized spectrum to have \(\geq\)50 counts per bin, and fit each spectrum by a 2\(T\) apec model. Figure 11_left_ shows a synthetic spectrum of the 10\(\times\)10\({}^{9}\) cm loop between 600\(-\)700 sec, adding the quiescent component of the September flare for a comparison. Table 3 shows the _EM_\({}_{\rm peak}\) ratios and plasma temperatures at the _EM_ peaks. The peak plasma temperatures \(\sim\)0.3\(-\)0.35 keV and 1.4\(-\)1.67 keV are similar among the simulations and significantly lower than the observed values. The _EM_\({}_{\rm peak}\) ratio is the highest with the 5\(\times\)10\({}^{9}\) cm loop simulation at 1.21 and smaller with longer loop simulations. This result is naturally understood since the footpoint plasma volume does not change with the loop length. The numerical simulations demonstrate that the footpoint component's rise is delayed by 100\(-\)300 sec from the evaporated component. This result indicates that the cool component in the _NICER_ spectra originates from the footpoint plasma. The simulations also suggest that longer flare loops have longer delays of the footpoint component rise and smaller _EM_ peak ratios, as well as longer decay timescales. All these properties are consistent with the properties of the two _NICER_ flares, suggesting that the December flare originates from a longer flare loop than the September flare. ## 5 Discussion The \(\kappa^{1}\) Ceti flares in 2019 are an order of magnitude more powerful than the most powerful solar flare ever seen, the Carrington Event in 1859 (\(L_{\rm X}\sim\)10\({}^{28}\) ergs s\({}^{-1}\), Cliver & Dietrich, 2013; Sakurai, 2022). Their X-ray luminosities are near the upper end of the flare luminosity ranges of solar-type G stars (\(L_{\rm X}\lesssim\)10\({}^{30}\) ergs s\({}^{-1}\), Schaefer et al., 2000; Tsuboi et al., 2016).
Their bolometric flare-radiated energies 3\(-\)8\(\times\)10\({}^{33}\) ergs, evaluated from two independent empirical relations to the X-ray radiations among solar and active stellar flares (Aschwanden et al., 2017; Osten & Wolk, 2015, see Table 2), qualify them as superflares (\(>\)10\({}^{33}\) ergs, e.g., Maehara et al., 2012) and are comparable to the \(\kappa^{1}\) Ceti superflare recorded in 1986 (\(\sim\)2\(\times\)10\({}^{34}\) ergs, Schaefer et al., 2000). Nonetheless, their X-ray luminosities and released X-ray energies are modest among active or young stellar flares (\(L_{\rm X}\lesssim\)10\({}^{32-33}\) ergs s\({}^{-1}\), e.g., Benz & Gudel, 2010, and references therein). The other X-ray characteristics -- the hot plasma temperature, hard band light curve, and hardness ratio variation -- are similar to solar and stellar X-ray flares (e.g., Pye et al., 2015). We conclude that the \(\kappa^{1}\) Ceti flares in 2019 are conventional magnetic reconnection events. The \(\kappa^{1}\) Ceti flare spectra require an additional cool (\(kT\lesssim\)1 keV) temperature component. Although such a component has not received much attention, well-exposed stellar X-ray flare spectra usually require one or more components with \(kT\sim\)0.3\(-\)1 keV (e.g., GT Mus: Sasaki et al., 2021, EV Lac: Paudel et al., 2021). The high-resolution _XMM_/RGS spectra of the Proxima Centauri flare suggested that the flare _EM_ distribution was broad with a peak at \(\sim\)30 MK and a low-temperature tail during the rise and steadily moved to lower temperatures as the flare developed (Gudel et al., 2004; Reale et al., 2004). In solar flares, the low-temperature _EM_ (\(<\)16.5 MK) peaks later than the high temperature _EM_ (McTiernan et al., 1999), perhaps suggesting the presence of a similar cool component. The cool component is probably ubiquitous in solar and stellar flares and represents an average of the low-temperature tail in the _EM_ distribution. The cool component's _EM_s of the \(\kappa^{1}\) Ceti flares increase steadily during the rising phase, but the footpoint plasma's _EM_s in the HYDRAD simulations rapidly increase to half the maximum at the onsets. The September flare may be statistics limited due to its quick rise, but the December flare clearly shows that the cool component's _EM_ steadily increases with a possible step-wise increase in the middle of the rise. This may suggest that each flare is an assembly of multiple loops, which is well known from e.g. spatially-resolved UV and optical imaging of solar flares (e.g., Aschwanden & Alexander, 2001). Multiple loop models can reproduce energy-dependent X-ray time variations of solar flares (Reep & Toriumi, 2017). If the observed flares are multiple loop events, we should ideally convolve the HYDRAD simulations with single loop event rates. Very hard X-rays (\(>\)20 keV) or microwave emission can trace the flux variation of the injected non-thermal reconnection particles (e.g., Benz, 2017), but we do not have simultaneous data in these bands, unfortunately. Earlier flare observations suggest that these emissions drop before the soft X-ray peaks (Lin et al., 2003; Asai et al., 2004; Veronig et al., 2005), which are \(\sim\)200 sec in the September flare and \(\sim\)1 ksec in the December flare.
A convolution in each timescale may change the _EM\({}_{\rm peak}\)_ ratio and the flare decay timescale. It should not change the cool component delay timescales. Figure 8: Density (_top_) and temperature (_bottom_) spatial distribution of the \(10\times 10^{9}\) cm loop simulation at 40, 60, 140 and 220 seconds from the particle injection start. The horizontal axis shows the distance from a footpoint along the loop: the loop top is at 5\(\times 10^{9}\) cm and the two footpoints are at 0, 10\(\times 10^{9}\) cm. From _left_, i) at 40 sec, the particle injection heats the footpoint chromosphere, and the evaporated gas soars into the magnetic loop. The evaporated spectral component starts to increase. ii) at 60 sec, the upward evaporation flows collide at the loop top, heating the gas further. iii) at 140 sec, the shocks propagate down the other leg, smoothing the corona's density. iv) at 220 sec, by the time the shock reaches the footpoints, the loop has a high enough density that thermal conduction becomes extremely efficient, and the footpoint spectral component emerges. The red and blue lines are for electrons and hydrogen, respectively. The black and white double arrows point to the locations of the evaporation or shock fronts from either side. Figure 9: Electron and hydrogen density product (_top_) and electron temperature (_bottom_) distributions of the \(10\times 10^{9}\) cm loop simulation near a footpoint region at \(t=\)40 sec (_dotted_), 60 sec (_dash-dotted_), 140 sec (_dashed_), and 220 sec (_solid_). The horizontal axis is the same as Figure 8 but on a logarithmic scale. The filled grey area in the bottom panel shows the log \(T=\)6.6\(-\)6.9 K bucket. The boxes in the top panel show the one-dimensional volumes and density product ranges of this temperature bucket. Both the density product and the volume significantly increase between \(t=\)140 sec and 220 sec. We compare the derived \(\kappa^{1}\) Ceti flare parameters, \(\Delta t_{\rm delay}\), \(\tau_{\rm decay}\), and the _EM\({}_{\rm peak}\)_ ratio with the simulation (Table 3). These values vary monotonically with the loop length in the simulation, so we estimate the loop length for each parameter by linearly interpolating or extrapolating the two neighboring values (Table 4). We also list three other estimates from the literature. The first estimate is an empirical relation of the ribbon distance with the decay timescale in solar flares (Toriumi et al., 2017, equation 4). We approximate the decay timescale of the 1\(-\)8 Å energy flux with \(\tau_{\rm decay}\) of the hot component in Table 2 and assume a semi-circular flare loop shape to derive the loop length. The second estimate is a quasi-static cooling model for a constant radiative and conductive timescale ratio (van den Oord & Mewe, 1989; Tsuboi et al., 2000, equation A5). A problem with this estimate is that flares never truly cool statically (e.g., Cargill et al., 1995). The third estimate is a magnetic reconnection model, assuming that the gas pressure of a flare loop is comparable to the magnetic pressure (Shibata & Yokoyama, 2002, SY02). A caveat is that the model requires the unmeasurable preflare proton density. All estimates but the _EM\({}_{\rm peak}\)_ ratio are consistent with \(\approx\)10\({}^{10}\) cm loop lengths. All estimates but SY02 suggest that the December flare has a longer flare loop than the September flare.
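The interpolation step itself is elementary; for example, the HYDRAD column of Table 4 for the September flare follows from the Table 3 delays:

```python
import numpy as np

L = np.array([5.0, 10.0, 15.0, 20.0, 25.0])             # loop length [1e9 cm]
dt_delay = np.array([15.0, 89.0, 164.0, 223.0, 276.0])  # dt_delay from Table 3 [s]

# Invert the monotonic relation by linear interpolation; 142 s is the
# observed cool-component delay of the September flare (Table 3).
print(np.interp(142.0, dt_delay, L))  # -> ~13.5 (x 1e9 cm), matching Table 4
```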
Figure 11: _Left_: Synthetic _NICER_ spectrum of the 10\(\times\)10\({}^{9}\) cm simulation at the flare peak (600\(-\)700 sec). The flare spectrum is normalized to have the 0.3\(-\)2 keV flux at 2.2\(\times\)10\({}^{-11}\) ergs cm\({}^{-2}\) s\({}^{-1}\) and combined with the quiescent spectrum of the September flare. _Right_: The same spectrum but with _EM_s below the evaporated plasma temperature (log \(T<\)7.3) reduced to 10%. The left spectrum has strong emission at \(\sim\)0.8 keV, while the right spectrum is close to the observed flare peak spectra. The upper right corner of each panel shows the best-fit parameters of a 2\(T\) apec model (the units are keV for _kT_ and cm\({}^{-3}\) for _EM_). Each plot uses the same color scheme as Figures 4 and 5. Figure 10: _EM_ value (_blue_) at each temperature bucket peak and the corresponding plasma temperature (_grey_) in the 10\(\times\)10\({}^{9}\) cm loop simulation. The solid green line is the best-fit model of the _EM_ values by a linear plus exponential decay model. The model reproduces the _EM_ variation well. The derived loop length of \(\approx\)10\({}^{10}\) cm is near the upper end but still within the range of solar flare loops (e.g., Toriumi et al., 2017). Since \(\kappa^{1}\) Ceti has about the same stellar radius as the Sun, we can safely assume that the observed \(\kappa^{1}\) Ceti flares have similar magnetic field geometries to moderately large solar flare loops. However, the peak _EM_s \(\sim\)3\(\times\)10\({}^{52}\) cm\({}^{-3}\) are about two orders of magnitude larger than the _EM_s of solar flares with similar loop lengths. One solution is that the \(\kappa^{1}\) Ceti flares have an order of magnitude higher flare plasma density. Such high-density plasma radiatively cools with an order of magnitude shorter timescales, but the \(\kappa^{1}\) Ceti flare decay timescales are consistent with the solar flare decay-time relation (Table 4). The other solution is that the \(\kappa^{1}\) Ceti flares have two orders of magnitude larger widths and/or thicker magnetic loops. The _EM\({}_{\rm peak}\)_ ratio yields inconsistent loop lengths, possibly because the HYDRAD simulation systematically overestimates the footpoint component. The footpoint component comprises all temperature buckets below the evaporated plasma temperature (see Figure 7_left_). We therefore reduce the footpoint component's _EM_s of the Figure 11_left_ simulation -- the _EM_s below the evaporated plasma temperature, log \(T=\)7.3 (K) -- to 10%, as a trial. Then, the synthetic spectrum looks more similar to the observed spectra near the flare peaks (Figure 11_right_), and the best-fit 2\(T\) apec model has a smaller _EM\({}_{\rm peak}\)_ ratio at \(\sim\)0.18. As observed, this model also derives a higher hot component temperature at 2.0 keV. The footpoint plasma at the height of \(\sim\)1.2\(-\)1.6\(\times\)10\({}^{8}\) cm is in the transition region (Figures 8, 9). The line of sight should have more intervening material than the evaporated plasma in the corona. Still, attenuating \(\sim\)0.9 keV X-rays by \(\sim\)80% requires a hydrogen column density at \(N_{\rm H}\sim\)10\({}^{22}\) cm\({}^{-2}\), corresponding to a physical depth of \(\sim\)10\({}^{10-11}\) cm for the density of the transition region (\(n\sim\)10\({}^{11-12}\) cm\({}^{-3}\)). Flare loops need to be viewed almost edge-on to have this depth, but realizing such geometries for both loops is less likely. Therefore, the observed flares should have less footpoint plasma relative to the evaporated plasma than the HYDRAD simulations.
The RHESSI observatory found in the hard X-ray band (\(>\)10 keV) that solar flares have several times higher electron rates at the looptop than at the footpoints during the impulsive phase\({}^{3}\), implying the electrons accumulate at the looptop (Simoes and Kontar, 2013). The \(\kappa^{1}\) Ceti flares may also have a mechanism to suppress electron transport to the footpoints and to reduce thermal conduction. Such a mechanism may also explain the slower cooling of the evaporated plasma compared to the HYDRAD simulations. Footnote 3: During the impulsive phase, the magnetic reconnection accelerates charged particles, which emit hard non-thermal X-rays (e.g., Benz, 2017). This phase occurs mostly before the cool component rises. One possible mechanism to suppress electron transport is that the flare magnetic loop expands toward the looptop, trapping charged particles in a magnetic mirror. Solar coronal loops, whether quiescent or flaring, do not necessarily show an expansion of the loop width along their lengths (Klimchuk et al., 1992; Klimchuk, 2000; Klimchuk and DeForest, 2020, and references therein). However, the magnetic field strength falls off with height in the corona, implying that there should be an expansion of the cross-sectional area of loops (e.g., Dudik et al., 2014), and models are unable to reproduce both hot and cool emission simultaneously without an area expansion (Warren et al., 2010; Reep et al., 2022). The loop expansion reduces the thermal conductivity near the footpoints. A preliminary 10\(\times\)10\({}^{9}\) cm loop simulation with the expansion geometry in Reep et al. (2022) does not produce a small _EM\({}_{\rm peak}\)_ ratio, but the cool component _EM_ peaks significantly later than the constant loop simulation. The other possible mechanism is that the flare loops have turbulent magnetic fluctuations, which would increase the frequency of Coulomb collisions, suppressing the energy transport and reducing the thermal conductivity (e.g., Bian et al., 2016; Allred et al., 2022). This mechanism increases coronal temperatures compared to those in a model with collisionally dominated transport. ## 6 Conclusion _NICER_ observed two moderately strong X-ray flares from \(\kappa^{1}\) Ceti, a nearby young solar analog, in 2019. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{Flare} & \multicolumn{3}{c}{HYDRAD} & Sun & QS & SY02 \\ & \(\Delta t_{\rm delay}\) & _EM\({}_{\rm peak}\)_ ratio & \(\tau_{\rm decay}\) & & & \\ \hline 190917 & 13.5 & 60.3 & 2.9 & 6.2 & 8.9 & 12.9 \\ 191210 & 17.6 & 72.3 & 9.2 & 13.6 & 22.5 & 8.7 \\ \hline \end{tabular} Note. – Unit in 10\({}^{9}\) cm. HYDRAD: linear interpolation or extrapolation of the nearest two values of the HYDRAD simulation in Table 3. Sun: Solar flare ribbon distance relation in equation 4 of Toriumi et al. (2017). The derived \(d_{\rm ribbon}\) values are multiplied by \(\pi/2\). QS: Quasi-Static cooling model in Tsuboi et al. (2000), equation A5. SY02: Equation 7b in Shibata and Yokoyama (2002) for the preflare proton density at 10\({}^{11}\) cm\({}^{-3}\). \end{table} Table 4: Flare Loop Length Estimate _NICER_'s excellent soft X-ray sensitivity, good energy resolution, and large collecting area bring rare details of bright X-ray flares from the onsets through the peaks.
Both flares show conventional stellar flare variations above 2 keV with a rapid rise and decay, having similar X-ray fluxes at \(\sim\)2.2\(\times\)10\({}^{-11}\) ergs cm\({}^{-2}\) s\({}^{-1}\) between 0.3\(-\)2 keV and high plasma temperatures at \(\sim\)3 keV near the peaks. Their bolometric energies estimated from the X-ray radiated energies, \(\sim\)3\(-\)9\(\times\)10\({}^{33}\) ergs, are comparable to superflares. The flare on September 17 varies in several hundred seconds in X-rays, with an interesting flat soft X-ray flux peak. The flare on December 10 varies 2\(-\)4 times more slowly, showing a similar but less extreme variation in the soft band. The time-resolved spectra show that, in the rising phase, the hard band slope increases first, and a hump at \(\sim\)0.9 keV, originating from the Fe L line complex, follows. Most spectra require two-temperature optically-thin thermal plasma components at \(kT\sim\)1 keV and \(\sim\)3 keV on top of the quiescent component. The hot component mainly reproduces the hard band slope, and the cool component does the 0.9 keV hump. Both components' _EM_s rise linearly on similar timescales, but the cool component is delayed by 100\(-\)200 sec. The September flare has a longer delay time relative to the flare rise duration and a more substantial cool component than the December flare, producing a heavily rounded flare peak. The HYDRAD field-aligned numerical simulations demonstrate that the cooler footpoint plasmas start to increase a few hundred seconds after the hot evaporated plasmas increase -- longer flare loops have longer time delays and weaker cool components. This result indicates that the cool components in the \(\kappa^{1}\) Ceti flares originate primarily from the footpoint plasma and that the September flare stems from a shorter flare loop than the December flare. The estimated loop lengths of \(\approx\)10\({}^{10}\) cm are large but still within the range of solar flare loops. Since the \(\kappa^{1}\) Ceti flares have more than two orders of magnitude larger _EM_s than the solar flares, they need significantly higher loop plasma densities or larger loop thicknesses. A significant discrepancy in the _EM\({}_{\rm peak}\)_ ratio may suggest that the HYDRAD simulations overestimate the footpoint _EM_s and require a mechanism to suppress electron transport, such as expanded magnetic loops or turbulent magnetic fluctuations. A difference in the cool component's _EM_ rise may suggest that both flares are multiple loop events, as seen in solar flares. The _NICER_ \(\kappa^{1}\) Ceti observations and the HYDRAD simulations demonstrate that the time delay of the cool component and the peak _EM_ ratio of the two temperature plasma components can be used as new, effective parameters for estimating the flare loop length. We should confirm the derived relations with more flare samples of various luminosities, durations, peak temperatures, and stellar types with existing or future _NICER_ observations. Simultaneous multi-wavelength observations will also greatly help constrain the flare parameters. In particular, UV and optical observations with the Hubble Space Telescope or the TESS observatory trace hot chromospheric gas, helping understand the whole chromospheric and coronal heating process. The HYDRAD numerical simulations still have discrepancies with observations. We should decipher the cause with further studies and improve the model to explain the observations. The material is based upon work supported by NASA under award number 80GSFC21M0002.
JWR was supported by the Office of Naval Research 6.1 Support Program. VSA acknowledges the funds from _NICER_ GO Cycle 2 project award number 80NSSC21K0101. This work is supported by JSPS KAKENHI Grant Nos. JP20KK0072, JP21H01124, and JP21H04492, and by NINS Grant Nos. 01321802 and 01311904. This research has made use of data and/or software provided by the High Energy Astrophysics Science Archive Research Center (HEASARC), which is a service of the Astrophysics Science Division at NASA/GSFC. We thank Mr. Craig Gordon for helping resolve a PYXSPEC problem. We thank Dr. Andrew Pollock for suggestions of _XMM-Newton_ RGS data analysis. We thank Drs. Stephen Drake, Yuta Notsu, Michael F. Corcoran and Konstantin V. Getman for discussions about stellar flare physics. NICER(XTI), XMM(RGS) HEASoft (NASA High Energy Astrophysics Science Archive Research Center (HEASARC) 2014), xspec (Arnaud, 1996), scipy (Virtanen et al., 2020), astropy (Astropy Collaboration et al., 2013; Scargle et al., 2013), SAS (v19.0; Gabriel et al., 2004), HYDRAD (Bradshaw & Mason, 2003) ## Appendix A Elemental Abundance Measurement Telleschi et al. (2005) extensively studied the coronal elemental abundance of \(\kappa^{1}\) Ceti using _XMM_/RGS data in 2002. However, the _XMM-Newton_ instrumental calibration4 and the plasma emission codes (e.g., ATOMDB5) have significantly improved since then. The elemental abundance of the star might also have changed in 17 years. We, thus, independently measure the coronal elemental abundance of \(\kappa^{1}\) Ceti using the _XMM_/RGS data obtained on 2018 July 30 and 2019 January 29 (ObsID: 0822790901, 0822791001, PI: Wargelin). Footnote 4: [https://www.cosmos.esa.int/web/xmm-newton/calibration-documentation](https://www.cosmos.esa.int/web/xmm-newton/calibration-documentation) Footnote 5: [http://www.atomdb.org](http://www.atomdb.org) Footnote 6: [https://www.cosmos.esa.int/web/xmm-newton/sas](https://www.cosmos.esa.int/web/xmm-newton/sas) We reprocess these datasets with SAS version 19.0\({}^{6}\). EPIC/MOS2 was turned off during these observations, while EPIC-pn used the timing mode, which has relatively poor spectral resolution. We thus analyze the EPIC/MOS1 and RGS data. For MOS1, we take a 15\({}^{\prime\prime}\) radius circular source region centered at the X-ray peak position. The MOS1 on-axis CCD operates in the small window mode, so we take background data from a source-free region of the surrounding CCDs. The MOS1 light curves of these observations do not show significant time variations. EPIC/MOS1 measures the 0.6\(-\)1.2 keV flux during the second observation at \(\sim\)3.5\(\times\)10\({}^{-12}\) ergs cm\({}^{-2}\) s\({}^{-1}\), which is \(\sim\)16% lower than in the first observation. This flux is nearly the lowest among the _NICER_ monitoring observations of \(\kappa^{1}\) Ceti. We produce the MOS1 spectra using the same source and background regions. For RGS, we run rgsproc for the target position measured from the MOS1 image and produce the source and background spectra (Figure A1). We only use the first-order RGS spectra, as the second-order RGS spectra do not have enough photon counts to identify emission lines. We fit the unbinned MOS1 and RGS spectra simultaneously using the Cash statistic (c-stat) built into xspec (Cash, 1979).
The Cash statistic requires the background to be included as an additive model component, so we simultaneously fit the background spectra with an empirical model (power-law + 4 Gaussians), convolved with the source response (rmf) weighted by the background areal scale (backscal). For the source spectra, we assume a 2\(T\) thermal plasma model with various abundance values (vapec) and fit all MOS1/RGS source/background spectra of the two observations simultaneously. We allow the spectral normalization to vary between MOS1 and RGS to account for calibration uncertainty, and \(kT\) and normalization to vary between the two observations to account for time variation. Table A lists the derived elemental abundances. We use these values for the _NICER_ data analysis and numerical simulation.
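For concreteness, the following is a minimal PyXSPEC sketch of the simultaneous two-temperature vapec fit described above. The file names, data grouping, starting values, and the set of thawed abundances are illustrative placeholders, not the actual analysis configuration.

```python
import xspec

xspec.Fit.statMethod = "cstat"   # Cash statistic (c-stat)
xspec.Fit.query = "yes"          # keep iterating without prompting

# Load each spectrum into its own data group (placeholder file names).
xspec.AllData.clear()
xspec.AllData("1:1 mos1_obs1.pi 2:2 rgs1_obs1.pi")

# Absorbed two-temperature thermal plasma with free elemental abundances.
m1 = xspec.Model("tbabs*(vapec + vapec)")
m1.vapec.kT = 0.3        # cool component (keV); illustrative starting value
m1.vapec_3.kT = 1.0      # hot component (duplicate components are numbered)
m1.vapec.Fe.frozen = False   # thaw abundances of interest, e.g. Fe and O
m1.vapec.O.frozen = False

# Allow a MOS1/RGS cross-normalization: untie the norms in data group 2.
m2 = xspec.AllModels(2)
m2.vapec.norm.link = ""
m2.vapec_3.norm.link = ""

xspec.Fit.perform()
xspec.Fit.show()
```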
2310.17254
New type of rogue waves
A new type of localized solutions for the two-dimensional multicomponent Yajima-Oikawa system is presented. The dynamics of solutions of this type occurs on a zero background and is similar to that of rogue waves.
N. V. Ustinov
2023-10-26T09:10:54Z
http://arxiv.org/abs/2310.17254v1
# New type of rogue waves ###### Abstract A new type of localized solutions for the two-dimensional multicomponent Yajima-Oikawa system is presented. The dynamics of solutions of this type occurs on a zero background and is similar to that of rogue waves. pacs: 05.45.Yv, 42.65.Tg, 42.81.Dp ## I Introduction Much attention has been paid by researchers in recent decades to the study of rogue waves [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12]. Various mechanisms of formation of these waves have been suggested. The occurrence of rogue waves is most often investigated on the basis of the mechanisms of modulation instability and superposition of waves [1; 5; 7; 8; 11; 12]. In both cases, the evolution of rogue waves takes place against the background of a wave field, which is reflected in the definitions of such waves [2; 5; 8]. In this report, localized waves that develop in the absence of background wave fields are considered. At the same time, their dynamics corresponds to the dynamics of rogue waves that "appear from nowhere and disappear without a trace" [3]. A search among solutions of multi-dimensional nonlinear equations for ones suitable for describing the behavior of rogue waves is of great interest. Solutions having dynamics similar to the dynamics of rogue waves were obtained as particular cases of lump (rational) solutions, semi-rational ones and their generalizations (see, e.g., Refs. [13; 14; 15; 16; 17]). It is important to find other types of solutions describing the dynamics of rogue waves. The mechanisms generating such waves may be different. The investigation of the two-dimensional multicomponent Yajima-Oikawa (YO) system has attracted significant attention in recent years [18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28]. This system comprises multiple (say \(N\)) short-wave components and a single long-wave one. It generalizes the scalar (\(N=1\)) two-dimensional YO system [29] and is often called the 2D coupled long-wave-short-wave resonance interaction system. The two-dimensional multicomponent YO system belongs to the class of equations integrable by the inverse scattering transformation method [30]. It has also arisen in different physical contexts. The two-component system and the multicomponent one were derived by applying the reductive perturbation method in Refs. [18] and [22], respectively, as the governing equations for the interaction of dispersive waves in a weak Kerr-type nonlinear medium. In these systems, the short waves propagate in the anomalous dispersion regime while the long wave propagates in the normal dispersion regime. The generation of terahertz radiation by optical pulses in a medium of asymmetric quantum particles is described under the quasi-resonance conditions by the two-dimensional two-component YO system [28]. Various types of solutions of the two-dimensional multicomponent YO system have been found. In particular, the rational and semi-rational solutions mentioned above were investigated, owing to their role in the study of rogue waves, in Refs. [25] and [27], respectively. The rational solutions include the fundamental (simplest) and general (multi- and higher-order) lumps and line rogue waves derived from the lumps under certain parameter conditions [25]. It was shown that the fundamental lumps and rogue waves have three different patterns: bright, intermediate and dark states.
The fundamental semi-rational solutions considered in [27] can describe the fission of a dark soliton into a lump and a dark soliton or the fusion of one lump and one dark soliton into a dark soliton. The non-fundamental semi-rational solutions were shown to fall into three subclasses: higher-order, multi- and mixed-type semi-rational solutions. The solutions of the two-dimensional multicomponent YO system discussed above were found using the bilinear method. In this report, we exploit the Darboux transformation (DT) technique [31; 32] to obtain solutions of this system. Note that the DT technique was applied to the multicomponent YO systems in the one-dimensional case in Refs. [33; 34; 35; 36; 37]. The paper is organized as follows. The two-dimensional multicomponent YO system of the general form and the corresponding overdetermined system of linear equations are given in Section 2. The DT formulas for these systems are also presented there. A new type of localized solutions of the two-dimensional multicomponent YO system on the zero background is considered in Section 3, and the stability of solutions of this type is discussed. Concluding remarks are given in Section 4. ## II Overdetermined linear system and Darboux transformation The two-dimensional multicomponent YO system is written in the dimensionless form as \[\begin{split}\frac{\partial\varphi_{n}}{\partial t}+\frac{ \partial\varphi_{n}}{\partial y}=i\frac{\partial^{2}\varphi_{n}}{\partial x^{2 }}+iu\varphi_{n}\ \ (n=1,\ldots,N),\\ \frac{\partial u}{\partial t}=\frac{\partial}{\partial x}\sum_{ n=1}^{N}\sigma_{n}|\varphi_{n}|^{2},\end{split} \tag{1}\] where \(\varphi_{n}=\varphi_{n}(x,y,t)\) and \(u=u(x,y,t)\) are the \(n\)th short-wave and long-wave components, respectively, \(\sigma_{n}=\pm 1\) (\(n=1,\ldots,N\)). In the case of the YO system of the general form, the parameters \(\sigma_{n}\) have different signs. The two-dimensional multicomponent YO system (1) has infinitely many integrals of motion. The first few integrals are \[\begin{array}{l}\iint\!u\,dx\,dy,\quad\iint\!|\varphi_{n}|^{2}\,dx\,dy\ \ (n=1,\ldots,N),\\ \iint\!\left(u^{2}+i\sum\limits_{n=1}^{N}\sigma_{n}\left[\varphi_{ n}\frac{\partial\varphi_{n}^{*}}{\partial x}-\varphi_{n}^{*}\frac{\partial \varphi_{n}}{\partial x}\right]\right)dx\,dy.\end{array} \tag{2}\] Also, Eqs. (1) are represented as the compatibility condition of the overdetermined system of linear equations \[\begin{array}{l}\frac{\partial^{2}\psi_{1}}{\partial x^{2}}=-i \left(\frac{\partial\psi_{1}}{\partial t}+\frac{\partial\psi_{1}}{\partial y} \right)-u\psi_{1},\\ \frac{\partial\psi_{n+1}}{\partial x}=\frac{\sigma_{n}}{2}\varphi_{ n}^{*}\psi_{1}\ \ (n=1,\ldots,N),\end{array} \tag{3}\] and \[\begin{array}{l}\frac{\partial\psi_{1}}{\partial t}=-\sum\limits_{n=1}^{N} \varphi_{n}\psi_{n+1},\\ \frac{\partial\psi_{n+1}}{\partial t}+\frac{\partial\psi_{n+1}}{ \partial y}=\frac{i\sigma_{n}}{2}\left(\varphi_{n}^{*}\frac{\partial\psi_{1} }{\partial x}-\frac{\partial\varphi_{n}^{*}}{\partial x}\psi_{1}\right)\\ (n=1,\ldots,N).\end{array} \tag{4}\] Here \(\psi_{k}=\psi_{k}(x,y,t)\) (\(k=1,\ldots,N+1\)) is the \(k\)th component of the solution of Eqs. (3) and (4). Let \(\chi_{k}=\chi_{k}(x,y,t)\) (\(k=1,\ldots,N+1\)) be the \(k\)th component of a solution of the overdetermined system (3), (4).
Then, the differential 1-form \[d\,\delta(\chi,\psi)=\delta_{x}(\chi,\psi)dx+\delta_{t}(\chi,\psi)dt+\delta_{ y}(\chi,\psi)dy, \tag{5}\] where \[\delta_{x}(\chi,\psi)=\chi_{1}^{*}\psi_{1},\quad\delta_{t}(\chi,\psi)=-2\sum \limits_{n=1}^{N}\sigma_{n}\chi_{n+1}^{*}\psi_{n+1},\] \[\delta_{y}(\chi,\psi)=i\left(\chi_{1}^{*}\frac{\partial\psi_{1}}{\partial x }-\frac{\partial\chi_{1}^{*}}{\partial x}\psi_{1}\right)-\delta_{t}(\chi,\psi),\] is closed; i.e., for a contour \(\Gamma\) connecting the points \((x_{0},y_{0},t_{0})\) and \((x,y,t)\), the integral \[\delta(\chi,\psi)=\int\limits_{\Gamma}d\,\delta(\chi,\psi)+C \tag{6}\] (\(C\) is a constant) depends only on the initial and final points and is independent of the specific choice of contour \(\Gamma\). The overdetermined system of linear equations (3), (4) is covariant with respect to the DT \(\psi_{k}\rightarrow\psi_{k}[1]\) (\(k=1,\ldots,N+1\)), \(\varphi_{n}\rightarrow\varphi_{n}[1]\) (\(n=1,\ldots,N\)), \(u\to u[1]\), where the transformed quantities are defined in the following manner [28]: \[\psi_{k}[1]=\psi_{k}-\frac{\delta(\chi,\psi)}{\delta(\chi,\chi)}\chi_{k}\ \ (k=1,\ldots,N+1), \tag{7}\] \[\varphi_{n}[1]=\varphi_{n}-2\sigma_{n}\frac{\chi_{n+1}^{*}\chi_{1}}{\delta( \chi,\chi)}\ \ (n=1,\,\ldots,\,N), \tag{8}\] \[u[1]=u+2\frac{\partial^{2}}{\partial x^{2}}\log\delta(\chi,\chi). \tag{9}\] Relations (8) and (9) define a new solution of the system (1), while expressions (7) give the components of the corresponding solution of the overdetermined system (3), (4). ## III Rogue wave type solutions Let us assume that the initial solution of the YO system (1) is the zero background: \[\varphi_{1}=\cdots=\varphi_{N}=u=0.\] In this case, we have \[\chi_{n+1}=f_{n}(t-y)\ \ (n=1,\ldots,N), \tag{10}\] where \(f_{n}(t-y)\) (\(n=1,\ldots,N\)) are arbitrary functions of their argument. Complex variants of the source function of the heat equation can be used to express the component \(\chi_{1}\) of the solution of the overdetermined system (3), (4). In the simplest case, this component is written as \[\chi_{1}=\frac{1}{\sqrt{y-\mu}}\ \exp\!\left(\frac{i(x-\lambda)^{2}}{4(y-\mu)} \right), \tag{11}\] where \(\lambda\) and \(\mu\) are complex constants. Then, using Eqs. (6), (5), (10) and (11), we obtain \[\begin{array}{l}\delta=\delta(\chi,\chi)=\sqrt{\frac{\pi}{2\mu_{I}}}\exp\! \left(\frac{\lambda_{I}^{2}}{2\mu_{I}}\right)\\ \times\mbox{erf}\!\left(\frac{\lambda_{I}(y-\mu_{R})-\mu_{I}(x- \lambda_{R})}{\sqrt{2\mu_{I}}\,|y-\mu|}\right)\\ +2\int\limits_{t_{0}-y_{0}}^{t-y}\sum\limits_{n=1}^{N}\sigma_{n}|f_{n}( \zeta)|^{2}d\zeta+C_{0},\end{array} \tag{12}\] where \(\lambda_{R}=\Re(\lambda)\), \(\lambda_{I}=\Im(\lambda)\), \(\mu_{R}=\Re(\mu)\), \(\mu_{I}=\Im(\mu)>0\), \(C_{0}\) is a real constant, \(\mbox{erf}(\zeta)\) is the error function. After substitution of the expressions (10)-(12) into the DT formulas (8), (9), we find the following solution of the two-dimensional multicomponent YO system (1): \[\varphi_{n}=-2\sigma_{n}\frac{f_{n}(t-y)^{*}\mbox{e}^{\frac{i(x-\lambda)^{2}}{ 4(y-\mu)}}}{\sqrt{y-\mu}\,\delta}\ \ (n=1,\,\ldots,\,N), \tag{13}\] \[u=2\frac{\partial^{2}}{\partial x^{2}}\log\delta\,. \tag{14}\] It is supposed in what follows that the functions \(f_{n}(t-y)\) and the constant \(C_{0}\) are such that the solution (13), (14) is nonsingular. Different types of solutions of the two-dimensional YO system (1) are obtained by choosing the functions \(f_{n}(t-y)\)\((n=1,\dots,N)\) in Eqs. (12)-(14) in different ways.
If, for example, \(f_{n}(t-y)\sim\exp[\varepsilon(t-y)]\) (\(\varepsilon\) is a constant) or \(f_{n}(t-y)\to 0\) at \(|t-y|\to\infty\) (\(n=1,\dots,N\)), then solution (13), (14) is localized on the \((x,y)\)-plane for any \(t\) and \(\varphi_{n}\to 0\) at \(|t|\to\infty\). Consider an interesting case when the parameters \(\sigma_{n}\)\((n=1,\,\dots,N)\) have different signs and \(|f_{n}(t-y)|\to\infty\) at \(|t-y|\to\infty\) (\(n=1,\dots,N\)). Let us assume for the sake of concreteness that \[f_{n}(t-y)=\alpha_{n}{\rm e}^{\varepsilon_{1}(t-y)}+\beta_{n}{\rm e}^{ \varepsilon_{2}(t-y)}\;\;(n=1,\,\dots,\,N), \tag{15}\] where \(\alpha_{n}\), \(\beta_{n}\)\((n=1,\dots,N)\), \(\varepsilon_{1}\) and \(\varepsilon_{2}\) are complex constants. If \(\Re(\varepsilon_{1})\Re(\varepsilon_{2})<0\), then the solution of YO system (1), which is obtained after the substitution of expressions (15) into Eqs. (12)-(14), is localized on the \((x,y)\)-plane, and, what is particularly important, \(\varphi_{n}\to 0\) (\(n=1,\,\dots,N\)) and \(u\to 0\) at \(|t|\to\infty\). So, we have a localized solution with zero temporal asymptotics. This kind of dynamics resembles that of rogue waves. It is supposed here that the YO system (1) is of the general form. In the opposite case, when all the parameters \(\sigma_{n}\)\((n=1,\,\dots,\,N)\) have the same sign, using expressions (15) leads to a singular solution of the YO system. To illustrate the dynamics of the solutions discussed above, we consider the simplest case \(N=2\), \(\sigma_{1}=1\) and \(\sigma_{2}=-1\). Eqs. (12)-(15) give us the following expressions for the solution of the two-component YO system: \[\varphi_{n}=-2\sigma_{n}\frac{\alpha_{n}^{*}{\rm e}^{\varepsilon_{1}^{*}(t-y )}+\beta_{n}^{*}{\rm e}^{\varepsilon_{2}^{*}(t-y)}}{\sqrt{y-\mu}\,\Delta}\,{ \rm e}^{\frac{i(x-\lambda)^{2}}{4(y-\mu)}}\;\;(n=1,\,2), \tag{16}\] \[u=2\frac{\partial^{2}}{\partial x^{2}}\log\Delta\,, \tag{17}\] where \[\Delta = \sqrt{\frac{\pi}{2\mu_{I}}}\,{\rm e}^{\frac{\lambda_{I}^{2}}{2 \mu_{I}}}{\rm erf}\!\left(\frac{\lambda_{I}(y-\mu_{R})-\mu_{I}(x-\lambda_{R}) }{\sqrt{2\mu_{I}}\,|y-\mu|}\right)\] \[+ 2\int\limits_{t_{0}-y_{0}}^{t-y}\sum\limits_{n=1}^{2}\sigma_{n} \left|\alpha_{n}{\rm e}^{\varepsilon_{1}\zeta}+\beta_{n}{\rm e}^{\varepsilon_ {2}\zeta}\right|^{2}d\zeta+C_{0}.\] The profiles of the absolute value of component \(\varphi_{1}\) and of component \(u\) of solution (16), (17) for different values of variable \(t\) and for the parameter values \(\lambda=i\), \(\mu=2i\), \(y_{0}=t_{0}=0\), \(\alpha_{1}=1\), \(\beta_{1}=2\), \(\alpha_{2}=2\), \(\beta_{2}=1\), \(\varepsilon_{1}=-1\), \(\varepsilon_{2}=1\) and \(C_{0}=6\) are presented in Figs. 1 and 2. The complete dynamics is given in the files SM1.gif and SM2.gif in the Supplemental Material [38]. It is seen that this solution has the form of a solitary wave, and all its components are localized on the \((x,y)\)-plane for any \(t\). In the limit \(|t|\to\infty\), the amplitudes of components \(\varphi_{1}\) and \(\varphi_{2}\) tend to zero as \(1/\sqrt{|t|}\) (see Fig. 1). The length \(l_{y}\) of the wave along the \(y\) axis can be estimated as \(l_{y}\sim|\Re(\varepsilon_{1})|^{-1}+|\Re(\varepsilon_{2})|^{-1}\). For \(|t|\gg\sqrt{|\mu|^{2}+|\lambda|^{2}}\), the length \(l_{x}\) along the \(x\) axis exceeds \(l_{y}\) and can be estimated as \[l_{x}\sim 2|t|\left(\sqrt{\lambda_{I}^{2}+4\mu_{I}}-|\lambda_{I}|\right)/\mu_{I}.\] The decrease of the long-wave component \(u\) as \(|t|\to\infty\) occurs faster than that of the short-wave ones (see Fig. 2).
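The behavior just described is easy to verify numerically. The following minimal sketch (assuming numpy/scipy) evaluates \(|\varphi_{1}|\) of solution (16), (17) for the parameter values quoted above, computing the integral in \(\Delta\) by quadrature; the grid and the printed diagnostic are illustrative.

```python
import numpy as np
from scipy.special import erf
from scipy.integrate import quad

# Parameter values used for Figs. 1 and 2.
lam, mu = 1j, 2j
alp, bet = np.array([1.0, 2.0]), np.array([2.0, 1.0])
sig = np.array([1.0, -1.0])
e1, e2, C0 = -1.0, 1.0, 6.0
lamI, muR, muI = lam.imag, mu.real, mu.imag

def f(n, w):  # f_n(t - y), Eq. (15)
    return alp[n] * np.exp(e1 * w) + bet[n] * np.exp(e2 * w)

def Delta(x, y, t):  # the expression below Eq. (17), with t0 = y0 = 0
    w = t - y
    g = lambda z: sum(sig[n] * abs(f(n, z)) ** 2 for n in range(2))
    integral = quad(g, 0.0, w)[0]
    arg = (lamI * (y - muR) - muI * (x - lam.real)) / (np.sqrt(2.0 * muI) * abs(y - mu))
    return (np.sqrt(np.pi / (2.0 * muI)) * np.exp(lamI**2 / (2.0 * muI)) * erf(arg)
            + 2.0 * integral + C0)

def phi(n, x, y, t):  # Eq. (16)
    w = t - y
    return (-2.0 * sig[n] * np.conj(f(n, w))
            * np.exp(1j * (x - lam) ** 2 / (4.0 * (y - mu)))
            / (np.sqrt(y - mu) * Delta(x, y, t)))

# Track the wave, which stays localized near y ~ t; the peak of
# |phi_1| decays roughly as 1/sqrt(t), as stated above.
for t in (0.0, 5.0, 25.0):
    peak = max(abs(phi(0, x, y, t)) for x in np.linspace(-10, 10, 41)
               for y in np.linspace(t - 5, t + 5, 41))
    print(t, peak)
```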
Thus, we see that the dynamics of the solitary wave (16), (17) matches that of rogue waves [3]. There is, however, an important distinction. Whereas the rogue-wave phenomenon develops on background waves, the solitary wave (16), (17) evolves on a zero background. The height of a rogue wave has to be more than about twice the significant height of the background waves [2; 5; 8]. Waves whose height exceeds the background value more than five times are sometimes called super rogue waves [39; 40]. Here the background waves are absent. The maximum values of the amplitudes of \(\varphi_{1}\), \(\varphi_{2}\) and \(u\) of the solitary wave (16), (17) depend on its parameters and can be arbitrarily large. Solitary waves having similar dynamics exist for an arbitrary number \(N>1\) of the short-wave components of system (1). The functions \(f_{n}(t-y)\)\((n=1,\dots,N)\) in Eqs. (12)-(14) have to satisfy the conditions \(|f_{n}(t-y)|\to\infty\) at \(t-y\to\pm\infty\) in this case. For example, these functions can be chosen in accordance with Eqs. (15). Different signs among \(\sigma_{n}\)\((n=1,\,\dots,\,N)\) are necessary to obtain nonsingular solutions in that case. Generalizations of the rogue wave of the form (16), (17) can be obtained if some generalizations of the complex variant of the source function (11) are used as component \(\chi_{1}\) in the DT formulas. In particular, component \(\chi_{1}\) can be chosen in the following manner: \[\chi_{1}=\sum\limits_{m=1}^{M}\sum\limits_{l=1}^{L}\frac{\nu_{lm}}{\sqrt{y- \mu_{m}}}\exp\left(\frac{i(x-\lambda_{l})^{2}}{4(y-\mu_{m})}\right), \tag{18}\] where \(\nu_{lm}\), \(\lambda_{l}\) and \(\mu_{m}\)\((l=1,\,\dots,\,L;\,m=1,\,\dots,\,M)\) are complex constants, \(\Im(\mu_{m})>0\). Also, we can put \[\chi_{1}=\left(c_{1}\frac{\partial}{\partial\lambda}+c_{2}\frac{\partial}{ \partial\mu}\right)\frac{1}{\sqrt{y-\mu}}\exp\left(\frac{i(x-\lambda)^{2}}{4(y- \mu)}\right), \tag{19}\] where \(c_{1}\) and \(c_{2}\) are constants. The study of such generalizations of the rogue wave (16), (17) (multi- and higher-order waves) and their interaction with waves of other types requires separate consideration. Note that the stability of the rogue wave (16), (17) with respect to perturbations of a special kind can be established within the framework of the DT technique. Indeed, let us take the solution of the overdetermined system (3), (4) in the form \[\chi_{1}=\frac{1}{\sqrt{y-\mu}}\;\exp\!\left(\frac{i(x-\lambda)^{2}}{4(y-\mu) }\right)+\kappa\tilde{\chi}_{1}, \tag{20}\] \[\chi_{n+1}=\alpha_{n}\mathrm{e}^{\varepsilon_{1}(t-y)}+\beta_{n}\mathrm{e}^{ \varepsilon_{2}(t-y)}+\kappa F_{n}(t-y)\ \ (n=1,\,2), \tag{21}\] where \(\kappa\) is a parameter considered to be small, \(\tilde{\chi}_{1}\) is defined as \(\chi_{1}\) in Eq. (18) or in Eq. (19), and \(F_{1,2}(t-y)\) are functions of their argument such that \(|F_{1,2}(t-y)|<1\). The substitution of expressions (20), (21) into the DT formulas (8), (9) gives us the perturbed solution of the two-dimensional YO system (1). This solution coincides with the rogue wave (16), (17) in the case \(\kappa=0\). It is important that the difference between the perturbed solution and the rogue wave (16), (17) remains insignificant during the time evolution if \(|\kappa|\ll 1\). This indicates the stability of the rogue wave considered here with respect to perturbations of the special form. The existence of the integrals of motion (2) is important in the investigation of the stability of the rogue wave (16), (17) in the general case and in numerical simulations.
Also, the integrals of motion can be helpful in the study of the blow-up of solution (13), (14), which takes place for some values of its parameters. ## IV Conclusion In this paper, a new type of rogue waves for the two-dimensional multicomponent Yajima-Oikawa system is presented. The waves of this type are distinguished by the fact that their dynamics occurs on a zero background. This implies that the rogue waves presented here are formed solely due to nonlinear focusing. It seems very important to extend this type of rogue waves to other models describing wave interactions in various physical contexts.
2302.00539
Analyzing Leakage of Personally Identifiable Information in Language Models
Language Models (LMs) have been shown to leak information about training data through sentence-level membership inference and reconstruction attacks. Understanding the risk of LMs leaking Personally Identifiable Information (PII) has received less attention, which can be attributed to the false assumption that dataset curation techniques such as scrubbing are sufficient to prevent PII leakage. Scrubbing techniques reduce but do not prevent the risk of PII leakage: in practice scrubbing is imperfect and must balance the trade-off between minimizing disclosure and preserving the utility of the dataset. On the other hand, it is unclear to which extent algorithmic defenses such as differential privacy, designed to guarantee sentence- or user-level privacy, prevent PII disclosure. In this work, we introduce rigorous game-based definitions for three types of PII leakage via black-box extraction, inference, and reconstruction attacks with only API access to an LM. We empirically evaluate the attacks against GPT-2 models fine-tuned with and without defenses in three domains: case law, health care, and e-mails. Our main contributions are (i) novel attacks that can extract up to 10$\times$ more PII sequences than existing attacks, (ii) showing that sentence-level differential privacy reduces the risk of PII disclosure but still leaks about 3% of PII sequences, and (iii) a subtle connection between record-level membership inference and PII reconstruction. Code to reproduce all experiments in the paper is available at https://github.com/microsoft/analysing_pii_leakage.
Nils Lukas, Ahmed Salem, Robert Sim, Shruti Tople, Lukas Wutschitz, Santiago Zanella-Béguelin
2023-02-01T16:04:48Z
http://arxiv.org/abs/2302.00539v4
# Analyzing Leakage of Personally Identifiable Information in Language Models ###### Abstract Language Models (LMs) have been shown to leak information about training data through sentence-level membership inference and reconstruction attacks. Understanding the risk of LMs leaking Personally Identifiable Information (PII) has received less attention, which can be attributed to the false assumption that dataset curation techniques such as scrubbing are sufficient to prevent PII leakage. Scrubbing techniques reduce but do not prevent the risk of PII leakage: in practice scrubbing is imperfect and must balance the trade-off between minimizing disclosure and preserving the utility of the dataset. On the other hand, it is unclear to which extent algorithmic defenses such as differential privacy, designed to guarantee sentence- or user-level privacy, prevent PII disclosure. In this work, we introduce rigorous game-based definitions for three types of PII leakage via black-box extraction, inference, and reconstruction attacks with only API access to an LM. We empirically evaluate the attacks against GPT-2 models fine-tuned with and without defenses in three domains: case law, health care, and e-mails. Our main contributions are (i) novel attacks that can extract up to \(10\times\) more PII sequences than existing attacks, (ii) showing that sentence-level differential privacy reduces the risk of PII disclosure but still leaks about 3% of PII sequences, and (iii) a subtle connection between record-level membership inference and PII reconstruction. Code to reproduce all experiments in the paper is available at [https://github.com/microsoft/analysing_pii_leakage](https://github.com/microsoft/analysing_pii_leakage). ## I Introduction Language Models (LMs) are fundamental to many natural language processing tasks [22, 49]. State-of-the-art LMs scale to trillions of parameters [19] and are pre-trained on large text corpora (e.g., 700GB [53]). Pre-trained LMs are adapted to downstream tasks by fine-tuning on domain-specific datasets such as human dialogs [7] or clinical health data [62] which may contain private information. Memorization is a privacy concern in LMs [9]. The threat is that an attacker learns _by whom_ the training data was provided, known as membership inference [30, 45, 46, 58] and _about whom_ it contains information, known as data extraction [9, 11, 29, 59, 69]. These two categories can be disjoint but associations in the latter can be used to infer information about the former. For LMs, data extraction is a significant threat in practice since attackers with black-box API access can extract at least 1% of the training data [11]. Existing work focuses on finding a lower bound on _any_ kind of memorization but does not differentiate public and private leaked information. For example, leaking highly duplicated common phrases is not a privacy violation according to the GDPR [17] as opposed to leaking Personally Identifiable Information (PII). In practice, any LM trained on real, sensitive data has to protect PII, but memorization of PII is not well understood. We believe that a comprehensive study on the risk of PII memorization in LMs is missing. Consider a service provider who wants to deploy a next-word prediction LM for composing e-mails, such as Google's Smart Compose [13]. Their goal is to train an LM with high utility that does not leak PII and make it available as a black-box API. 
The threat is an attacker who learns PII, such as names, addresses or other sensitive information through the LM. Extracting _any_ PII by itself, such as a personal address, can already pose a privacy threat. This threat is elevated when an attacker can associate a piece of PII to a context, for example, "In May 2022, [MASK] had chemotherapy at LHS". As a part of this paper, we study the feasibility of such attacks on LMs in practice. Figure 1 illustrates the types of PII attacks proposed in this work. Defenses against memorization are based on dataset curation and algorithmic defenses. PII _scrubbing_ is a dataset curation technique that removes PII from text, relying on Named Entity Recognition (NER) [35] to tag PII. Modern NER is based on the Transformer architecture [63] and has mixed recall of 97% (for names) and 80% (for care unit numbers) on clinical health data, meaning that much PII is retained after scrubbing [62]. Machine learning pipelines incorporate algorithmic defenses such as differentially-private training algorithms [16, 1] to ensure record- or user-level provable privacy guarantees. Fig. 1: An illustration of PII extraction, reconstruction and inference attack techniques. **Problem.** PII scrubbing and Differential Privacy (DP) protect the privacy of training data at the cost of degrading model utility. Aggressively scrubbing for better privacy drastically harms utility. Similarly, with DP training, utility reduction is inversely proportional to the privacy budget spent, which determines the noise added. Figure 2 illustrates how scrubbing and DP, on their own and when combined, degrade utility (increase perplexity) of LMs of different sizes in comparison to a completely undefended model. We observe that scrubbing results in similar perplexities as training with DP. Although the privacy guarantees offered by a DP model are well-studied, the contribution of DP guarantees when applied at record- or user-level towards mitigating PII disclosure is unclear. Differential privacy provides guarantees under the assumption that records are unlikely to be duplicated, which may not be the case for realistic datasets [27]. PII is often duplicated across multiple records and users. Consider the example of an e-mail dataset, where a person's address circulates within a group of users. In this case, even though the address is known by many, it cannot be considered public information [6]. However, a differentially private LM may still leak it. A simplistic mitigation might be to apply DP at a group level, but groups and their sizes are not always known _a priori_, and group-level DP under worst-case assumptions has a deleterious impact on model utility. Quantitatively measuring the protection offered by PII scrubbing or DP is an open problem. There are no existing metrics to analyze the risk of PII leakage in an end-to-end machine learning pipeline where defenses such as DP and PII scrubbing are at interplay. To this end, we focus on empirically measuring PII leakage to enable practitioners to make informed decisions and tune their privacy mitigations for a desired privacy/utility trade-off. **Overview.** We address this problem with novel attacks and metrics that allow quantitatively assessing leakage of PII. We identify three threats for PII leakage, namely (i) extraction, (ii) reconstruction, and (iii) inference, and provide rigorous game-based definitions for them.
PII extraction measures the fraction of PII sequences that an attacker can discover from an LM without any knowledge about the model's training dataset. Some PII, such as addresses or names, can _directly_ re-identify (and harm) an individual even if the attacker is unable to reconstruct the context. For example, consider a health dataset with notes from cancer patients. Leakage of a user's PII indicates that they had cancer, which is revealed to an uninformed attacker. PII reconstruction and inference assume a more informed attacker, similar to that of membership inference, who has some knowledge about the dataset. For example, when an attacker wants to learn more PII about a user, they can form masked queries (e.g., "John Doe lives in [MASK], England") to the LM and attempt to reconstruct the missing PII. In PII inference, the attacker additionally knows a set of candidates (e.g., London, Liverpool) and their goal is to infer the PII from that set. In short, PII extraction considers an _uninformed_ attacker without any knowledge of the data distribution or the training dataset, PII reconstruction assumes a _partially_ informed attacker with knowledge about the context in which PII may occur, and PII inference assumes an _informed_ attacker who additionally knows potential candidates for PII. For these attacks, we formalize how leakage can be measured exactly and show that these formulas are intractable. For this reason, we propose concrete attack algorithms that approximate this ideal leakage, which is confirmed in our evaluation. Our attacks can be applied to any LM. We focus on generative LMs as they are deployed in practice to generate large amounts of text. We evaluate our attacks on 4 variants of the GPT-2 model [52] released by OpenAI fine-tuned on 3 domains: (i) law cases, (ii) corporate e-mails, and (iii) reviews of healthcare facilities. Our attacks can extract PII with a precision approximately twice as high as that from related work, even when the model has been trained with differential privacy. We identify factors that increase the risk of PII leakage. Additionally, we discover new insights about the connection between record-level membership inference and PII reconstruction attacks. Using our metrics, for the first time, we measure the effect of DP on protecting against PII leakage. We empirically demonstrate that record-level DP limits the threat of PII leakage to a large extent but does not eliminate it completely. These results are a positive motivation for future research to design defenses that improve the privacy/utility trade-off, for example, a less aggressive _heuristic_ scrubber that considers the contribution of other defenses such as DP in the ML pipeline. To enable such research, we make our code publicly available. Fig. 2: Utilities of LMs trained (i) undefended, (ii) with scrubbing, (iii) with DP (\(\varepsilon=8\)), (iv) with scrubbing + DP, and (v) with masked outputs in an ablation study over the LM’s size on the ECHR dataset (see Section IV for details). **Contributions.** In summary, our main contributions are: * We present a taxonomy for PII leakage, inspired by existing literature, that includes three threat models: Extraction, Reconstruction, and Inference, and provide game-based definitions for each of them. Extraction enables an attacker to learn real PII from the training data. Reconstruction and inference leak associations between PII and their context.
* We evaluate privacy/utility trade-offs on three datasets using (i) undefended, (ii) DP, and (iii) scrubbed LMs. * We compare our attacks with existing work (if applicable) and show that we can correctly reconstruct up to 10\(\times\) more PII sequences by leveraging the suffix of a masked query and public masked LMs. * We study the relationship between membership inference and PII reconstruction. ## II Background & Problem We first provide a primer on language modeling using neural network-based LMs and existing mitigations against privacy risks used in machine learning pipelines. We then present our problem statement of measuring PII leakage in an end-to-end training pipeline, followed by a description of adversary capabilities and objectives. ### _Language Modeling_ Generative LMs learn the conditional probability distribution \(\text{Pr}(w_{i}|w_{1},..,w_{i-1})\) over sequences of tokens \(w_{1},..,w_{i}\) from a vocabulary \(\mathcal{V}\). By applying the chain rule, we can obtain the probability that an LM \(\theta\) assigns to a sequence \(w_{1},..,w_{n}\): \[\text{Pr}(w_{1},..,w_{n};\theta)=\prod_{i=1}^{n}\text{Pr}(w_{i}|w_{1},..,w_{i -1};\theta) \tag{1}\] State-of-the-art LMs are based on the Transformer neural network architecture [63]. During training, one objective is to minimize the negative log-likelihood of the LM predicting the next token in training sentences given a prefix. Equation (1) gives a probability distribution over all tokens in the vocabulary \(\mathcal{V}\) and can be represented as a tree with \(|\mathcal{V}|\) branches on each level. Text is generated iteratively by traversing the tree using greedy decoding, top-\(k\) sampling [18], or a beam search algorithm while conditioning the LM on all preceding tokens. Autoregressive LMs, such as GPT-2, only allow sampling the conditional probability of the next token based on a _prefix_, whereas masked LMs, such as BERT [14], also consider a sample's _suffix_ in the query. **Perplexity.** The "optimal" solution to the training objective is to memorize each record [9]. This is both intractable and undesirable, as the goal is for LMs to generalize beyond their training dataset. In practice, learning basically means that only a fraction of the training dataset is memorized [29] and that some memorization is likely necessary to achieve high utility [20]. The model's utility is evaluated by its perplexity on an unseen test set of sentences. A low perplexity implies a high utility on the dataset. \[\text{PPL}(w_{1},..,w_{n};\theta)\!=\!\exp\!\!\left(\!-\frac{1}{n}\sum_{i=1}^{ n}\log\text{Pr}(w_{i}|w_{1},..,w_{i-1};\theta)\!\!\right)\]
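As a concrete illustration of Eq. (1) and of this metric, the following minimal sketch scores a sentence with a public GPT-2 checkpoint through the Huggingface transformers API; the checkpoint name and the example sentence are illustrative.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=ids the model returns the mean negative
        # log-likelihood of each token given its prefix, i.e. the
        # inner sum of the PPL formula above.
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

print(perplexity("On May 23rd, the patient was admitted to the hospital."))
```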
### _Differentially Private Training_ Differential Privacy (DP) has become a popular notion of privacy. Recall its definition: **Definition 1** (Approximate Differential Privacy [15]): _Let \(\varepsilon>0\) and \(\delta\in[0,1]\). A mechanism \(\mathcal{M}:X\to Y\) is \((\varepsilon,\delta)\)-differentially private if for any pair of adjacent datasets \((D,D^{\prime})\) and measurable set of outputs \(\mathcal{O}\subseteq Y\),_ \[\text{Pr}(\mathcal{M}(D)\in\mathcal{O})\leq\text{e}^{\varepsilon}\;\text{Pr} (\mathcal{M}(D^{\prime})\in\mathcal{O})+\delta\;.\] Contrary to many other definitions of privacy such as \(k\)-anonymity [55], DP is a worst-case guarantee that must hold for all possible datasets. The scope of privacy protection enters the definition via the notion of adjacency and is independent of the data distribution. For instance, one may consider datasets adjacent when they differ only in one record, or in the data pertaining to one user. Many desirable properties of DP, such as robustness to post-processing and composition, derive from this independence. However, the data independence of DP can also be a limitation, e.g., when sensitive content is shared within groups of users of unknown size. In these cases, the sensitive nature of the content is defined by its context and cannot be represented by an adjacency relation between pairs of datasets, as pointed out by Brown et al. [5]. Nevertheless, DP enjoys increasing popularity, and Differentially Private SGD [1] has been successfully applied to training large LMs by exploiting the transfer learning setup that is common among most state-of-the-art NLP models [38, 67]. ### _PII and NER_ **Personally Identifiable Information (PII).** PII in natural language is data that can re-identify an individual. PII can be a _direct identifier_ when leakage of that data alone is sufficient to re-identify an individual, or a _quasi-identifier_ when only an aggregation of many quasi-identifiers can reliably re-identify an individual. Examples of direct identifiers are names, phone numbers, or addresses, whereas quasi-identifiers are a person's gender or a description of their physical appearance. We use the same definition as Pilan et al. [51] and provide more details on the definition of PII in Section A-A. The combination of the quasi-identifiers 'gender', 'birth date', and 'postal code' re-identifies between 63 and 87% of the U.S. population [21]. **Named Entity Recognition (NER).** In practice, accurately tagging PII in a corpus of text is challenging without human curators [51]. When datasets are large, it is necessary to rely on Named Entity Recognition (NER) [47]. State-of-the-art NER tools, such as Flair [2], NLTK [4] or spaCy [24], are based on Transformer neural networks trained in a supervised manner to classify sequences of tokens as PII. In practice, training NER models is challenging because defining what constitutes PII can change over time and is dependent on the surrounding context [5]. Moreover, (i) training NER requires domain-specific, labeled training data, (ii) NER models make errors (i.e., a subset of PII remains unrecognized), and (iii) existing, publicly available NER models only aim for de-identification but not anonymization [51]. This means that complex PII, whose detection requires natural language understanding, such as a description of a person's appearance, is not tagged by any publicly available NER. **PII Scrubbing.** Data curation techniques used in machine learning training pipelines, such as the one illustrated in Fig. 3, apply scrubbing as a method to de-identify textual data [51]. The key idea is to tag known classes of PII using pre-trained NER modules such as Flair [2] or spaCy and to remove or replace all tagged PII. In this paper, we use the common technique of scrubbing PII by replacing it with a [MASK] token. Weaker forms of scrubbing are possible, where PII sequences are replaced with entity tags such as [NAME] or [LOCATION], or where all occurrences of the same piece of PII are replaced with the same sequence (e.g., using a salted hash function). These scrubbers retain relations between pieces of PII and trade privacy for utility. For example, an attacker who reconstructs a pseudonym in many sentences is more likely to re-identify the person by linking auxiliary information about the person contained in the sentences.
Our method of scrubbing maximizes privacy at the expense of (some) utility. **Formalism.** Studying leakage of PII in LMs requires labelled data, which is difficult to obtain due to the elevated measures to protect PII. We study leakage of PII in undefended and DP models by using existing NER taggers. For any given candidate PII \(C\in\mathcal{V}^{*}\) appearing in a sequence \(S\), we call all tokens preceding the PII the _prefix_ and tokens following the PII the _suffix_. Given a token sequence \(S=w_{1},..,w_{n}\in\mathcal{V}^{*}\), we define the following functions: * Extract\((S)\): For a sequence \(S\in\mathcal{V}^{*}\), return all PII sequences identified, \(\mathcal{C}=\{C_{1},..,C_{k}|C_{i}\subseteq S\}\). * Sample\((S,N,\theta)\): Given a prompt \(S\), a number \(N\in\mathbb{N}\) and an LM \(\theta\), this probabilistic function generates \(N\) sentences from \(\theta\), for example using a randomized beam search. This necessitates only black-box access to the conditional probability distribution in Eq. (1). * Split\((S,C)\): Given a sentence \(S\) containing \(C\), this function returns a prefix \(S_{0}\) and suffix \(S_{1}\) such that \(S=S_{0}CS_{1}\). We assume that \(C\) occurs exactly once or, if not, that \(C\) uniquely identifies an occurrence. * Fill-Masks\((S)\): Given a sentence \(S\) containing [MASK] tokens, this function uses a public masked language model to fill each mask with the top predicted token. Section B describes an implementation. Algorithm 1 shows the scrubbing algorithm we use in this paper. It iterates over sentences in the dataset, extracts all candidate PII sequences, and replaces them with [MASK]. ``` 1:procedureScrub(\(D\)) 2:\(D^{\prime}\leftarrow\emptyset\) 3:for\(S\in D\)do 4:\(\mathcal{C}\leftarrow\textsc{Extract}(S)\)\(\triangleright\) Tag PII with NER 5:for\(C\in\mathcal{C}\)do 6:\(S_{0},S_{1}\leftarrow\textsc{Split}(S,C)\) 7:\(S\gets S_{0}\)[MASK]\(S_{1}\) 8:\(D^{\prime}\gets D^{\prime}\cup\{S\}\) 9:return\(D^{\prime}\) ``` **Algorithm 1** PII Scrubbing
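As a rough illustration of Algorithm 1, the following minimal sketch implements the same loop with spaCy's off-the-shelf NER standing in for the Flair/Presidio taggers used later in the paper; the model name and entity classes are illustrative, not the actual configuration.

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # placeholder pipeline
PII_LABELS = {"PERSON", "GPE", "LOC", "ORG", "DATE", "NORP", "FAC"}

def scrub(sentence: str) -> str:
    """Replace every tagged PII span in the sentence with [MASK]."""
    doc = nlp(sentence)
    out, last = [], 0
    for ent in doc.ents:
        if ent.label_ in PII_LABELS:
            out.append(sentence[last:ent.start_char] + "[MASK]")
            last = ent.end_char
    out.append(sentence[last:])
    return "".join(out)

print(scrub("On May 23rd, Jane Doe was admitted to a hospital in London."))
# e.g. -> "On [MASK], [MASK] was admitted to a hospital in [MASK]."
```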
### _Problem Setting_ We study the problem of PII leakage through fine-tuned LMs, where pre-trained, publicly available LMs are fine-tuned on private data. Figure 3 shows the training pipeline that we consider. Given a set of uncurated training data, the first step consists of data curation, such as PII scrubbing or record-level de-duplication. The second step consists of algorithmic defenses, such as training with differential privacy. Finally, the trained model is deployed through a black-box API exposing only the prediction vector for the next token. Model parameters and intermediate features are kept behind the API boundary. Our goal is to study (i) to what extent the information memorized by the LM constitutes sensitive information such as PII, (ii) whether existing defenses are sufficient to prevent leakage, and (iii) what the privacy-utility trade-offs between the defenses are, e.g., whether less aggressive scrubbing can potentially be utilized when the LM is trained with DP. **Why DP cannot (always) mitigate PII leakage?** We emphasize that although both PII scrubbing and DP mitigate privacy risks, they protect against a different kind of leakage. Differential privacy protects against singling out individual records or users. It implicitly assigns a privacy cost to using information in the training dataset which is oblivious to different occurrences of the same information across multiple records or users. Fig. 3: A training pipeline to mitigate leakage of personally identifiable information and membership inference. This is an effective method to mitigate risks of disclosing _by whom_ data was contributed, but it does not take into account _about whom_ the content is. However, in real-world datasets, the nature of sensitive content--i.e. content that is not shared widely--makes protecting _by whom_ a reasonable proxy to protect _about whom_. For example, consider a piece of sensitive information circulating only within a small group of contributors (e.g., "Jane Doe was diagnosed with cancer"). DP protects each contributor's authorship from disclosure, but the information itself is leaked through the LM. Disclosure of personally identifiable information is a common cause of leaking _about whom_ training dataset samples are, which makes it an ideal candidate to study to what degree these two privacy mitigations are complementary to each other or redundant. ### _Threat Model_ **Adversary's Capabilities.** We consider an adversary with black-box API access to an LM. Our adversary can query the entire probability vector of the next most probable token on any given prompt. Table I summarizes variations in the threat model for the three PII-related attacks proposed in our paper: Extraction, Reconstruction, and Inference. When the adversary has access to scrubbed training data, it can observe a sentence such as "On May 23rd, [MASK] was admitted to the hospital", where [MASK] is the placeholder for PII that has been redacted. Additionally, we consider an attacker that has auxiliary information about candidates for the masked PII. In that case, we assume that the correct, masked PII is in the set of candidate PII. We refer to these attacks as PII "reconstruction" and "inference" respectively. Querying LMs behind APIs typically has a monetary cost. For example, existing service providers serve LMs charging a base rate of $0.4-$60 USD per million tokens queried depending on the model's size.1 This effectively limits the number of times an attacker can query the LM. The threat model we consider is relevant in practice since next-word prediction APIs powered by LMs trained on sensitive data (with privacy mitigations) are publicly deployed [13]. Footnote 1: [https://openai.com/api/pricing/](https://openai.com/api/pricing/) **Adversary's Objective.** The common goal of an adversary in the three PII-related attacks that we consider is to extract sensitive information about a user from an LM. Existing work on memorization extracts training data _indiscriminately_ [9], whereas we additionally focus on _targeted_ attacks against a user, with the potential for more severe privacy implications. The goal of an extraction attack is to extract any piece of PII that was seen by the model during training. An attacker who can extract direct or quasi-identifying information from the LM has a high chance of re-identifying users who contribute data to the training set. The goal of reconstruction and inference is to associate a piece of PII with a given context, allowing an attacker to learn attributes about a user. ## III Conceptual Approach This section describes our taxonomy and corresponding game-based definitions for PII leakage. Table II summarizes the notation used in the algorithms. ### _PII Extraction_ In PII extraction, the attacker's goal is to extract as much PII from the training dataset of a model as possible.
``` 1:experiment Extraction(\(\mathcal{T},\mathcal{D},n,\mathcal{A}\)) 2:\(D\sim\mathcal{D}^{n}\) 3:\(\theta\leftarrow\mathcal{T}(D)\) 4:\(\mathcal{C}\leftarrow\bigcup_{S\in D}\textsc{Extract}(S)\) 5:\(\tilde{\mathcal{C}}\leftarrow\mathcal{A}(\mathcal{T},\mathcal{D},n,\mathcal{ O}_{\theta}(\cdot),|\mathcal{C}|)\) 6:procedure\(\mathcal{O}_{\theta}(S)\) 7:return\(\{w\mapsto\Pr(w|S;\theta)\}_{w\in\mathcal{V}}\) ``` **Algorithm 2** PII Extraction Algorithm 2 encodes this as a game parameterized by a training algorithm \(\mathcal{T}\), a data distribution \(\mathcal{D}\), and a training dataset size \(n\). The challenger samples \(n\) i.i.d. records from \(\mathcal{D}\) to construct a training dataset \(D\) to train a model \(\theta\). In a black-box setting, the adversary is given access to an oracle that returns the probability vector output by \(\theta\) conditioned on arbitrary prefixes of their choosing. The adversary is assumed to know the training pipeline and the data distribution, but they only observe information about the sampled training dataset via \(\mathcal{O}_{\theta}(\cdot)\) (and \(|\mathcal{C}|\)). Knowing the number of unique PII sequences \(|\mathcal{C}|\) in \(D\), the adversary must produce a set of \begin{table} \begin{tabular}{l l} \hline \hline **Notation** & **Description** \\ \hline \(\mathcal{T}\) & A stochastic training algorithm \\ \(\mathcal{D}\) & A distribution over sequences \\ \(\mathcal{E}\) & A distribution over PII sequences \\ \(\mathcal{D}^{n}\) & Distribution of \(n\) independent sequences from \(\mathcal{D}\) \\ \(S\sim\mathcal{S}\) & Draw a sample \(S\) uniformly from a set \(\mathcal{S}\) \\ \(D\sim\mathcal{D}^{n}\) & Draw \(n\) sequences \(D\) independently from \(\mathcal{D}\) \\ \(\mathcal{A}\) & A procedure denoting an adversary \\ \(y\leftarrow\mathcal{P}(\vec{x})\) & Call \(\mathcal{P}\) with arguments \(\vec{x}\) and assign result to \(y\) \\ \(\mathcal{C}\leftarrow\textsc{Extract}(S)\) & Extract the set \(\mathcal{C}\) of all PII sequences in \(S\) \\ \(\mathcal{S}\leftarrow\textsc{Sample}(S,N,\theta)\) & Generate \(N\) sequences \(\mathcal{S}\) from \(\theta\) starting from \(S\) \\ \(S_{0},S_{1}\leftarrow\textsc{Split}(S,C)\) & Split \(S\) at \(C\), i.e., \(S=S_{0}CS_{1}\) \\ \(S^{\prime}\leftarrow\textsc{Fill-Masks}(S)\) & Fill masks in \(S\) using a public MLM \\ \hline \hline \end{tabular} \end{table} TABLE II: Summary of Notation \begin{table} \begin{tabular}{l c c c} \hline \hline & Model Access & Masked Training Data & Candidate PII \\ \hline Extraction & \(\bullet\) & \(\times\) & \(\times\) \\ Reconstruction & \(\bullet\) & \(\bigcirc\) & \(\times\) \\ Inference & \(\bullet\) & \(\bigcirc\) & \(\bigcirc\) \\ \hline \hline \end{tabular} \end{table} TABLE I: A summary of the difference in threat models between our three PII attacks. (\(\bullet\) black-box access, \(\times\) not available, \(\bigcirc\) available)
PII sequences more likely to be generated are at a higher risk of extraction. Some PII sequences may have been memorized by the model but are not extractable unless the attacker queries the model with a specific prompt. These sequences are at low risk of extraction against an uninformed attacker when the prompt itself is unlikely to be generated. Formally, we define the extractability of \(C\in\mathcal{V}^{*}\) as follows: \[\text{Extractability}(C;\theta) =\sum_{S\in\mathcal{V}^{*}}\Pr(S;\theta)\Pr(C|S;\theta) \tag{3}\] \[=\sum_{S\in\mathcal{V}^{*}}\Pr(S\,C;\theta). \tag{4}\] Equation (3) requires summing over all possible sequences, which is intractable even when we limit the length of said sequences. We can approximate Eq. (3) by computing the sum over sentences sampled from the model. A simple baseline is captured in Algorithm 3 which counts the number of times \(C\) occurs in generated sentences. The problem with using samples is that the probability that the continuation is a target PII depends on a language's grammar rules and may be very low. For instance, proper names may only be generated at specific locations, so that the frequency of names in generated text may be low and many samples must be drawn to obtain a good lower bound. ``` 1:procedure\(\text{S}_{gen}\leftarrow\text{Sample}(\emptyset,N,\theta)\) 2:\(\mathcal{S}_{gen}\leftarrow\text{Sample}(\emptyset,N,\theta)\) 3:\(k\gets 0\) 4:for\(S\in\mathcal{S}_{gen}\)do 5:\(\mathcal{C}\leftarrow\text{Extract}(S)\)\(\triangleright\)Tag PII in same class as \(C\) 6:if\(C\in\mathcal{C}\)then\(k\gets k+1\) 7:return\(\nicefrac{{k}}{{|\mathcal{S}_{gen}|}}\) ``` **Algorithm 3** Observed PII Extractability **Lazy Estimation.** We propose a sample-efficient estimation of PII extractability by making the following two assumptions: (1) PII follow grammatical rules and (2) PII of the same class are interchangeable. From (1), \(\Pr(C|S;\theta)=0\) when grammatical rules of the language do not allow PII to be generated after \(S\). From (2), it follows that a piece of PII has a non-zero probability of being generated in place of another piece of PII from the same class. From these two assumptions, we derive Algorithm 4 for approximating the extractability of a piece of PII. We (1) sample \(N\) sentences from the LM, (2) use a NER to tag PII in these sentences that is in the same class as the target PII sequence, (3) iteratively replace tagged PII with the target PII sequence, and accumulate the probability the model assigns to it, (4) average the accumulated probability to estimate Eq. (3). We compare our estimations with the observed leakage after sampling repeatedly from the model in Section IV-E. Efficient estimation allows practitioners to assess leakage of PII without exhaustively sampling the model and having to run NER models over large amounts of generated text. ### _PII Reconstruction_ In PII reconstruction, the adversary goal is to associate PII with a context. The attacker 'is given a sentence with multiple masked pieces of PII (e.g., "A murder has been committed by [MASK] and [MASK] in a bar close to the Rhine") and is asked to reconstruct one of them. Algorithm 5 encodes this as a game where the challenger randomly samples a sentence \(S\) from the training dataset that contains at least one piece of PII, and then selects one piece of PII in \(S\) as a target at random. The attacker is given access to the trained model and the prefix and suffix of the scrubbed sentence w.r.t. the target PII sequence \(C\). 
### _PII Reconstruction_ In PII reconstruction, the adversary's goal is to associate PII with a context. The attacker is given a sentence with multiple masked pieces of PII (e.g., "A murder has been committed by [MASK] and [MASK] in a bar close to the Rhine") and is asked to reconstruct one of them. Algorithm 5 encodes this as a game where the challenger randomly samples a sentence \(S\) from the training dataset that contains at least one piece of PII, and then selects one piece of PII in \(S\) as a target at random. The attacker is given access to the trained model and the prefix and suffix of the scrubbed sentence w.r.t. the target PII sequence \(C\). The success \(\text{Succ}_{\text{Recons}}\) of the adversary is the probability of correctly guessing \(C\), i.e., \(\Pr(\tilde{C}=C)\). ``` 1:procedureEstimatedExtractability(\(C,\theta,N\)) 2:\(\mathcal{S}_{gen}\leftarrow\textsc{Sample}(\emptyset,N,\theta)\) 3:\(p\gets 0;m\gets 0\) 4:for\(S\in\mathcal{S}_{gen}\)do 5:\(\mathcal{C}\leftarrow\textsc{Extract}(S)\)\(\triangleright\)Tag PII in same class as \(C\) 6:for\(C^{\prime}\in\mathcal{C}\)do 7:\(m\gets m+1\) 8:\(S_{0},S_{1}\leftarrow\textsc{Split}(S,C^{\prime})\) 9:\(p\gets p+\Pr(C|S_{0};\theta)\) 10:return\(\nicefrac{{p}}{{m}}\) ``` **Algorithm 4** Estimated PII Extractability Existing work on PII reconstruction [28] takes the query's prefix (i.e., "A murder has been committed by") and greedily decodes the next set of tokens from the LM. This attack, dubbed the "TAB" attack, is inspired by hitting the TAB button on a predictive keyboard. We improve this attack by incorporating information from the sample's suffix, similar to how reconstruction attacks may be performed in masked LMs such as BERT [14]. Given a prefix and suffix \(S_{0},S_{1}\), we want to reconstruct the most likely PII \(C\), \[\operatorname*{arg\,max}_{C\in\mathcal{V}^{*}}\ \operatorname{Pr}(S_{0}\,C\,S_{1}; \theta). \tag{5}\] Computing Eq. (5) is intractable. It is an instance of _constrained beam search_ [44], in which one searches for a sentence containing a piece of PII within the specified context that maximizes some constraint (i.e., the generation probability). Without any knowledge about the target PII, e.g., its length in tokens, an attacker has to exhaustively search the entire space of valid PII for an optimal solution. A similar problem was encountered in measuring stereotype bias in LMs [48]. For this reason, we propose the attack in Algorithm 6 for approximating Eq. (5). Figure 4 illustrates this attack for an exemplary sentence containing two masked PII sequences, where the target corresponds to the second one. First, we use a public masked language model to fill residual masks, if any, in the query (lines 2-3). In a reconstruction attack, when no candidates \(\mathcal{C}\) for \(C\) in Eq. (5) are given, we generate candidates by top-\(k\) sampling \(N\) sentences from the target model and gathering all generated pieces of PII (lines 4-6). Next, we replace the target mask with each candidate, rank candidates by the model's perplexity on the entire sequence, and return the best candidate (lines 7-8). ``` 1:procedure\(\mathcal{A}_{\textsc{Recon}}(N,\mathcal{T},\mathcal{D},n,\mathcal{O}_{\theta}( \cdot),S_{0},S_{1},\mathcal{C})\) 2:\(S_{0}\leftarrow\textsc{Fill-Masks}(S_{0})\)\(\triangleright\)Fill residual masks 3:\(S_{1}\leftarrow\textsc{Fill-Masks}(S_{1})\) 4:if\(\mathcal{C}=\emptyset\)then\(\triangleright\)Reconstruction case 5:\(\mathcal{S}_{gen}\leftarrow\textsc{Sample}(S_{0},N,\theta)\)\(\triangleright\)Using \(\mathcal{O}_{\theta}(\cdot)\) 6:\(\mathcal{C}\leftarrow\bigcup_{S\in S_{gen}}\textsc{Extract}(S)\) 7:\(\tilde{C}\leftarrow\operatorname*{arg\,min}_{C\in\mathcal{C}}\textsc{PPL}(S_{0} \,C\,S_{1};\theta)\) 8:return\(\tilde{C}\) ``` **Algorithm 6** PII Reconstruction Attack ### _PII Inference_ PII inference is similar to reconstruction, except that the adversary knows a set of candidate PII sequences (assumed to include the target one). Algorithm 7 encodes this as a game.
We denote by \(\mathcal{E}\) the distribution over PII sequences obtained by sampling a sentence \(S\) from \(\mathcal{D}\) such that \(\textsc{Extract}(S)\neq\emptyset\) and choosing uniformly a PII sequence in it.

**Algorithm 7** PII Inference Game

We use the reconstruction attack in Algorithm 6 to approximate Eq. (5) constrained to a given set of candidate PII sequences \(\mathcal{C}\). We observe that an attacker who only needs to infer the correct candidate is significantly more powerful and demonstrate leakage in DP models trained with \(\varepsilon=8\) in our evaluation in Section IV. ### _Baseline Leakage_ Our goal is to study PII leakage in LMs fine-tuned on a private dataset. However, the public, pre-trained LM that has already seen a piece of PII, such as a famous person's name, may reproduce that PII without having seen the private dataset, which cannot be considered a privacy violation. Similarly, prompting the LM with a sequence that contains explicit information about the PII may be exploited by the model to produce that PII. For example, consider the following scrubbed excerpt from an e-mail: "Hello [[MASK]], I like your homepage _johndoe.com_". The LM before and after fine-tuning both assign a high probability to the name "John Doe". We work around this issue by excluding all cases in which (i) the PII can be extracted from the LM before fine-tuning or (ii) the LM before fine-tuning correctly predicts the PII (in case of reconstruction and inference). After removing PII that are part of the baseline leakage, we argue that leakage of any remaining PII is significant and stems from the LM's observation of the private data. Appropriately dealing with baseline leakage is challenging without natural language understanding and real-world context. Our approach may under-count some instances of sensitive information leakage. For example, "Barack Obama was diagnosed with Alzheimer's" might be sensitive leakage even if he is a public person. Likewise, "Ohio resident Jim Carrey was convicted of embezzlement." under-counts due to the naming collision with the famous actor. ## IV Evaluation In this section, we describe our evaluation setup, such as the datasets, NER modules, models, and training details. Then we show our results for PII extraction, reconstruction, and inference. We ablate over three datasets and four variants of GPT-2 (small, medium, large, and XL). Finally, we study risk factors that make PII more likely to be leaked, motivating the development of heuristic scrubbers that are aware of other defenses such as DP. ### _Datasets_ Our evaluation spans datasets from three domains. Table VII shows statistics about each dataset, such as their size and the number of PII sequences. We refer to Appendix A for more information about the datasets.
* **ECHR**[12] contains information from law cases dealt with by the European Court of Human Rights containing full descriptions of defendants' personal information. * **Enron**[34] consists of corporate e-mails by employees placed into the public domain after the Enron scandal. * **Yelp-Health2** is a subset of the Yelp reviews dataset that we filtered for reviews of healthcare facilities, such as dentists or psychologists. Footnote 2: [https://www.yelp.com/dataset](https://www.yelp.com/dataset) We choose three datasets from different domains to generalize our findings. All datasets are from realistic domains. ECHR contains data created by experts and Enron and Yelp-Health consist of user-generated data containing many authentic PII. We split the private dataset into equally large train and validation sets and a smaller test set. ### _Named Entity Recognition_ We tag and scrub PII from 21 entity classes, listed in Appendix A. Our scrubber combines two NER taggers from Flair3, trained on OntoNotes [25] and the default tagger defined in Presidio4 which is based on spaCy5. The Flair tagger reports an F1-score of \(90.93\) on OntoNotes. Footnote 3: [https://huggingface.co/flair/ner-english-ontonotes-large](https://huggingface.co/flair/ner-english-ontonotes-large) Footnote 4: [https://github.com/microsoft/presidio](https://github.com/microsoft/presidio) Footnote 5: [https://spacy.io](https://spacy.io) ### _Language Models_ **GPT-2.** Similar to related work [9], we experiment with publicly available, pre-trained checkpoints of GPT-2 [52] available at the Huggingface Model Hub6. Our experiments are conducted on LMs trained on the next-word prediction task and pre-trained on the WebText [52] dataset which consists of 40GB of English text scraped from the Internet. GPT-2 uses a byte-pair encoder [56] for tokenization. Footnote 6: [https://huggingface.co/gpt2](https://huggingface.co/gpt2) **Model Size.** We study leakage while ablating over various LM model sizes. Larger models have been shown to be more sample-efficient after fine-tuning [23], achieve a higher utility when trained with DP [67], but are expected to exhibit more memorization [11]. We experiment with GPT-2 Small (124m parameters), Medium (355m), Large (774m), and XL (1557m). **Training Details.** We fine-tune (i) undefended, (ii) differentially private (DP), (iii) scrubbed, and (iv) DP and scrubbed models. Training uses a batch size of 64 using an AdamW optimizer [40] and a linear learning rate decay. We train undefended and scrubbed models until the validation perplexity stops improving. For DP training, we utilize the dp-transformers [64] library, which is a wrapper around Opacus [66]. We use a maximum per-sample gradient norm of \(1.0\) and train DP models for 4 epochs using \((\varepsilon,\delta)=(8,\frac{1}{N})\) where \(N\) is the size of the training dataset. These privacy values are similar to established DP deployments such as Apple's QuickType which uses two daily releases of \(\varepsilon=8\)[3] and Google's models which use the equivalent of \(\varepsilon=8.9\) and \(\delta=10^{-10}\)[43]. ### _Metrics_ We report the following metrics for measuring (i) model utility, (ii) vulnerability to membership inference (MI), and (iii) PII leakage. * **Utility**: We compute the perplexity over an unseen test set, similar to related works [5, 11, 28]. * **Membership Inference**: We report the ROC AUC to measure sentence-level membership inference.
* **PII Extractability**: We report Precision and Recall on the set of extractable PII. Recall measures (i) how much PII is at risk of extraction and Precision measures (ii) an attacker's confidence that a generated PII appears in the training dataset. * **PII Reconstruction and Inference**: We report the top-1 accuracy of correctly predicting a PII for a context. We refer to Section A-D for formal definitions of these metrics. ### _PII Extraction_ We first extract PII from LMs trained with and without DP using Algorithm 3. Then, we show that the estimated leakage (from Algorithm 4) matches the observed leakage which allows an attacker to point-wise verify the extractability of a PII without sampling the model exhaustively (making our attack more efficient). We analyze different factors such as duplication rate, token length, and sample size for their effect on PII leakage. Table III shows the measured precision and recall with an ablation over the LM's size. We sample on the order of 4m tokens from each target LM, by issuing 15k queries requesting the LM to generate sequences with a length of 256 tokens from an empty prompt using top-\(k\) sampling with \(k=40\). We account for baseline leakage by excluding all PII occurring in a random sample of 13m tokens in 50k queries from a public model (see Section III-D). Fig. 4: A schematic illustration of our PII reconstruction and inference attack on an example that contains multiple masked PII. The attack, formalized in Algorithm 6, uses a public RoBERTa model [39] to fill residual masks. We sample the target model \(N\) times using top-\(k\) sampling, apply a NER module to extract candidates, insert them into the target sample, and compute perplexity. The sample with the lowest perplexity is returned. **Model Size.** We observe that GPT-2-Large recalls 23% of PII in the ECHR dataset with a precision of 30%. We conclude that in practice an attacker can be confident that a generated PII is contained in the training dataset. The precision and recall increase with the model's size. The smallest model (GPT-2-Small) has a significantly lower recall (only about 9%) at a similar precision as the large models (25%). In models trained with DP on ECHR, the precision and recall are similar for all model architectures, where we can extract about 3% of PII with a precision of 3%. **Duplication.** Figure 5(a) shows that duplication of PII has a strong impact on their extractability. We group all PII with equal duplication count in the training dataset and compute the frequency with which they are generated by the model. On all datasets, we observe a linear relationship between the number of occurrences of a piece of PII and the frequency with which they are leaked, i.e., PII occurring twice as often is expected to also leak twice as often. Our finding contrasts with the _superlinear_ effect in sequence-level memorization observed by Kandpal et al. [33]. We note that Kandpal et al. [33] study models trained from scratch and arbitrary sequences, whereas our study focuses on fine-tuned models and PII. Further examination is necessary to fully comprehend the relationship between both findings. In DP models, we observe that the extractability of PII is consistently about an order of magnitude lower than in undefended models. **Token Length.** We compare leaked PII by token length to evaluate whether PII sequences containing more tokens are less prone to extraction.
In Figure 5(b), we group all leaked PII by their token length and compute a mean count from the generated dataset. We observe that undefended models leak PII sequences containing many tokens whereas long sequences are not leaked in DP models. In the range between 3-6 tokens, we observe that DP models leak about an order of magnitude fewer pieces of PII than undefended models. **Sample Size.** Figure 5(c) shows precision and recall as a function of the number of sampled tokens on ECHR. The recall increases to 23%, whereas the precision consistently decreases from 50% at 500k tokens to 30% at 4m tokens. This indicates that PII with a high probability of generation by the LM are likely pieces of real PII from the dataset and thus vulnerable to extraction. An attacker who samples larger sets from the model can generate more PII, but at a lower precision which makes the attack less impactful. **Similarities between Generated PII.** Figure 5(d) shows the intersection between sets of extracted PII across all LMs in a heatmap. We observe that (i) a subset of the most duplicated PII occurs in almost all and (ii) there is little similarity between which PII were leaked between models. In undefended models, we observe that PII which occur a single time can leak, which we never observe for DP models. **Estimated Extractability.** Fig. 6(a) shows that lazily estimated extractability correlates with observed leakage (i.e., the number of times a model generates the PII). This allows computing point-wise estimates for a target PII without searching through massive amounts of generated text. Our metric is however not perfect, as some false negative outliers are incorrectly predicted as non-extractable despite appearing in the generated dataset. **Extraction of other PII Classes.** We measure the PII extractability for other classes of PII such as e-mails and phone numbers. The attacker can extract about 14.1% of law case Fig. 5: A study of the PII that can be extracted with our attack after sampling about 4m tokens. Fig. 5(a) shows that PII duplication strongly predicts leakage. Fig. 5(b) shows that DP protects PII consisting of many tokens against extraction. The “Real PII” represent all raw PII from the same class in the training data. Fig. 5(c) shows precision and recall as a function of the amount of tokens sampled for DP and non-DP models. Table III details these results for different model sizes. Fig. 5(d) shows the intersection of extracted PII between models. The diagonal shows the number of extracted PII for each model. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline & \multicolumn{2}{c}{**GPT2-Small**} & \multicolumn{2}{c}{**GPT2-Medium**} & \multicolumn{2}{c}{**GPT2-Large**} \\ \hline & No DP & \(\varepsilon=8\) & No DP & \(\varepsilon=8\) & No DP & \(\varepsilon=8\) \\ \hline \multicolumn{7}{c}{**ECHR**} \\ Prec & 24.91\% & 2.90\% & 28.05\% & 3.02\% & 29.56\% & 2.92\% \\ Recall & 9.44\% & 2.98\% & 12.97\% & 3.21\% & 22.96\% & 2.98\% \\ \multicolumn{7}{c}{**Enron**} \\ \hline Prec & 33.86\% & 9.37\% & 27.06\% & 12.05\% & 35.36\% & 11.57\% \\ Recall & 6.26\% & 2.29\% & 6.56\% & 2.07\% & 7.23\% & 2.31\% \\ \multicolumn{7}{c}{**Yelp-Health**} \\ Prec & 13.86\% & 8.31\% & 14.87\% & 6.32\% & 14.28\% & 7.67\% \\ Recall & 11.31\% & 5.02\% & 11.23\% & 5.22\% & 13.63\% & 6.51\% \\ \hline \hline \end{tabular} \end{table} TABLE III: Results for the observed PII extraction on ECHR (top rows), Enron (middle rows), and Yelp-Health (bottom rows) after sampling around 4m tokens across 15k queries.
numbers and 16.3% of mentioned organization names from an undefended model (shown in Fig. 6(b)). In the DP model, we observe that an attacker can only extract 2.8% of law cases and 4.1% of organizations. For the Enron dataset, which contains long phone numbers, we never observe a single leaked real phone number in the DP model. However, we observe leakage of e-mail addresses (consisting of equally many tokens), that are typically correlated with a person's name. ### _PII Reconstruction_ We compare our PII reconstruction attack from Algorithm 6 with the TAB attack [28]. Table IV shows the results on ECHR, Enron, and Yelp-Health for the entity class 'person'. We sample \(64\) candidates and decode from the model using top-\(k\) sampling with \(k=40\). We observe that our reconstruction attack significantly outperforms the TAB attack on undefended models enabling the reconstruction of up to \(10\times\) more PII (in the GPT-2-Medium case on Enron). **Model Size.** On ECHR and GPT-2-Large, TAB correctly reconstructs at least \(5.81\%\) of PII whereas our attack achieves \(18.27\%\). This observation demonstrates that information in a sample's suffix provides a strong signal to reconstruct PII. On ECHR, our attack improves the baseline by at least \(2.5\times\), on Enron we observe an improvement of at least \(7.5\times\) and on Yelp-Health our attack is at least about \(3\times\) more successful (except for GPT-2-Small where our attack improves only from 0.33% to 0.42%). The risk of reconstruction is much smaller in DP models (\(\leq 1\%\)) where our attack still improves the baseline in all cases, but we believe the leakage is too small for a practical attack. We observe that across all datasets, larger models are more vulnerable to PII reconstruction. **Context Size.** On Enron, the advantage of our attack compared to TAB becomes more evident. E-mails in the Enron dataset typically mention the receiver of the e-mail at the beginning prior to any PII. For this reason, the TAB attack has only a small prefix to predict PII and cannot leverage the information contained in the e-mail's body. We observe that when the PII is in the set of candidates, it is predicted correctly about 70% of the time. However, our reconstruction attack often does not sample the correct candidate which effectively limits our attack's success rate. We believe a method that samples candidates by incorporating information from the sample's suffix could improve our attack even further. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline & \multicolumn{2}{c}{**ECHR**} & \multicolumn{2}{c}{**Enron**} & \multicolumn{2}{c}{**Yelp-Health**} \\ \hline & No DP & \(\varepsilon=8\) & No DP & \(\varepsilon=8\) & No DP & \(\varepsilon=8\) \\ \hline \(|\mathcal{C}|=100\) & 70.11\% & 8.32\% & 50.50\% & 3.78\% & 28.31\% & 4.29\% \\ \(|\mathcal{C}|=500\) & 51.03\% & 3.71\% & 34.14\% & 1.92\% & 15.55\% & 1.86\% \\ \hline \hline \end{tabular} \end{table} TABLE V: Results of our PII inference attack on fine-tuned versions of GPT-2-Large. The values represent the attack’s accuracy at inferring the correct PII out of \(|\mathcal{C}|\) candidates.
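The inference attack evaluated in Table V reuses the ranking machinery of Algorithm 6, only with the candidate set supplied by the attacker instead of sampled from the model. A usage sketch, assuming the `reconstruct_pii` and `ppl_fn` helpers from the earlier sketch and an entirely hypothetical candidate list:

```
# PII inference: the attacker already holds candidates that include the target,
# e.g., names harvested from a source related to the training data.
candidates = ["John Doe", "Jane Roe", "Max Mustermann"]  # hypothetical
guess = reconstruct_pii(
    prefix="A murder has been committed by",
    suffix="in a bar close to the Rhine",
    candidates=candidates,
    ppl_fn=ppl_fn,  # same full-sequence scorer as in the reconstruction sketch
)
print(guess)
```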
\begin{table} \begin{tabular}{l c c c c c c c} \hline \hline & \multicolumn{2}{c}{**GPT2-Small**} & \multicolumn{2}{c}{**GPT2-Medium**} & \multicolumn{2}{c}{**GPT2-Large**} & \multicolumn{2}{c}{**GPT2-XL**} \\ \hline & No DP & \(\varepsilon=8\) & No DP & \(\varepsilon=8\) & No DP & \(\varepsilon=8\) & No DP & \(\varepsilon=8\) \\ \hline ECHR(TAB) & 0.78\% & 0.24\% & 1.21\% & 0.32\% & 5.81\% & 0.48\% & 4.30\% & 0.39\% \\ ECHR (Ours, \(|\mathcal{C}|=64\)) & **2.25\%** & 0.44\% & **3.36\%** & 0.87\% & **18.27\%** & 0.55\% & **13.11\%** & 0.41\% \\ Enron (TAB) & 0.59\% & 0.04\% & 0.67\% & 0.04\% & 1.75\% & 0.04\% & 2.19\% & 0.19\% \\ Enron (Ours, \(|\mathcal{C}|=64\)) & **2.92\%** & **0.49\%** & **7.26\%** & 0.52\% & **12.68\%** & 0.55\% & **15.25\%** & 0.53\% \\ Yelp-Health (TAB) & 0.33\% & 0.24\% & 0.37\% & 0.14\% & 0.65\% & 0.12\% & 1.99\% & 0.12\% \\ Yelp-Health (Ours, \(|\mathcal{C}|=64\)) & **0.42\%** & 0.32\% & **1.31\%** & 0.32\% & **1.69\%** & 0.35\% & **6.40\%** & 0.36\% \\ \hline \hline \end{tabular} \end{table} TABLE IV: Results of PII reconstruction attacks on the entity class “person”. Bold numbers represent the best attack per dataset and LM. We compare our results with the TAB attack [28] on three datasets. Fig. 6: Fig. 6(a) shows the correlation between the observed and estimated leakage. Fig. 6(b) shows the precision and recall for other entity classes on the ECHR dataset. Fig. 6(c) shows the mean inference accuracy relative to the context length, which is the length of the combined prefix and suffix for a masked query. ### _PII Inference_ In PII inference, our attacker has access to (i) the anonymized training dataset and (ii) a list of candidate PII that also appear in the training dataset. For PII inference, we evaluate our attacks against the GPT-2-Large model on all three surveyed datasets with and without DP. Table V summarizes the results from our attack. **Results.** We observe that in the undefended setting, an attacker can infer PII with an accuracy of 70% out of 100 candidates on ECHR, 50% on Enron, and 28% on Yelp-Health. We observe higher leakage on ECHR and Enron, which is likely because they have more structure than Yelp reviews. Pieces of PII are mentioned repeatedly in similarly structured sentences, which causes higher PII leakage. The undefended setting enables practical attacks where the attacker can be confident about results. In the DP setting, an attacker can achieve an accuracy of about 8% given 100 candidates and about 4% given 500 candidates on ECHR. Although leakage in DP models is small, we believe our experiments demonstrate that DP does not fully protect against PII leakage in a practical setting against an informed attacker. Fig. 6(c) shows that inferring PII in very large contexts slightly _worsens_ the accuracy. This is likely because the expected memorization per token is lower in samples containing many tokens. ### _Membership Inference and PII Leakage_ We employ a shadow model membership inference attack [58] to empirically evaluate the relationship between sentence-level membership inference and PII leakage. In this attack, the adversary trains shadow models on datasets sampled from the same data distribution as the target model. The adversary then calculates the difference between the perplexity (i.e. PPL) of the target sentence w.r.t. the target and shadow models, and uses this as a score to decide if the sentence was a training member or not. Fig. 7(a)
shows the ROC curve of this MI attack against an undefended model, a model trained after scrubbing, and a model trained with differential privacy after scrubbing. Expectedly, we observe that scrubbing PII mitigates membership inference, but is nowhere near as effective as DP. The attack achieves an AUC score of 0.96, 0.82, and 0.505 against undefended, scrubbed, and DP & scrubbed models respectively. **Connection between MI and PII Leakage.** Algorithm 8 shows the sentence-level MI game of Yeom et al. [65] alongside an indistinguishability variant of the PII Inference game in Algorithm 7. This corresponds to the case \(m=1\) when the adversary would be given a pair of candidates for the target PII, with the only exception that in Algorithm 8 the adversary is given just one of the candidates, depending on a Bernoulli random variable \(b\). The difference between one or two candidates is inessential and analogous variants of sentence-level MI have been considered before [27]. The essential difference between the MI and PII Inference games is that in the former the adversary has to distinguish between two sentences sampled from \(\mathcal{D}\), while in the PII Inference game, it has to distinguish between two sentences differing only on a piece of PII. This means that to leverage a MI attack for PII reconstruction or inference, the MI attack has to be strong enough to distinguish between member and non-member sentences differing in a few tokens. In contrast, a PII Inference attack can be directly turned into a MI attack against sentences containing a piece of PII. PII Reconstruction is similarly analogous to attribute inference (where the target _attribute_ is the missing piece of PII). PII Reconstruction can be linked to MI the same way as attribute inference was linked to MI by Yeom et al. [65]. Our empirical results show that they are indeed correlated. For instance, Figs. 7(b) and 7(c) respectively contrast MI with PII extractability and reconstruction attacks. Fig. 7(b) shows the likelihood of the model producing a member on the \(x\)-axis versus the memorization score on the \(y\)-axis. We observe that samples generated from empty prompts are not memorized, meaning that they likely contain few signals that are useful for MI. Fig. 7(c) shows the memorization score relative to our reconstruction attack. We observe a positive correlation between a sentence's memorization score and the success rate of our PII reconstruction attack. This means that sentences that are vulnerable to the MI attack are usually vulnerable to the PII reconstruction attack, and vice-versa. ### _Summary of Results_ Table VI summarizes the privacy-utility trade-off for the GPT-2-Large model when fine-tuned on the ECHR dataset. We enumerate our key results below: * Undefended models are highly vulnerable to all privacy attacks including membership inference and PII extraction, reconstruction, and inference. For PII risks, larger model sizes and higher duplication counts increase the risk of leakage. * Our results show that the threat of reconstructing PII has been underestimated and we demonstrate up to an order of magnitude higher leakage than prior work. * Empirically, we observe that differential privacy significantly bounds leakage from PII reconstruction relative to undefended models. * DP does not completely eliminate leakage from PII inference and PII extraction. We demonstrate that an attacker can infer PII with up to 10% accuracy (given 100 candidates) in a practical setting.
* We find that DP and (aggressive) PII scrubbing limit the LM's utility, motivating the search for defenses with better empirical privacy/utility trade-offs. ## V Discussion and Limitations Below, we discuss extensions and limitations of our methodology, and identify further research motivated by our findings. We first discuss the applicability of our methodology to sensitive information other than PII, and potential extensions to our attacks exploiting semantic similarity and associations in the training dataset. We then describe how masked language models fare compared to autoregressive models and identify further research motivated by our findings: how to best combine DP training and scrubbing, optimizing attacks for other leakage metrics, and the need for better benchmarks. **General Applicability.** In this paper, we focus on defining metrics, game-based definitions, and tractable formulas for evaluating leakage of sensitive sequences of tokens categorized as PII. That said, we bring attention to the point that our methodology is generally applicable to any notion of sensitive input. As long as one has an effective method to correctly identify inputs deemed sensitive, our methodology can be adapted to measure the protection offered by existing ML pipelines in mitigating the leakage of _any_ sensitive information. In practice, it is often hard to draw a clear boundary around what constitutes sensitive information, which is an important but orthogonal problem. **Syntactic and Semantic Similarity.** We consider verbatim matches of PII tokens as leakage, however, our methods can be adapted to account for both syntactic and semantic similarity. For example, "Mr. John Doe" and "J. Doe" could be inferred to be the same person. Similarly, PII reconstruction and PII inference attacks can employ contexts with similar meaning to improve attack results. **Advanced Attacks.** We consider leakage of PII sequences from the training dataset in isolation, irrespective of the context where it appears and other extracted PII. Extracted PII sequences can be further leveraged in advanced attacks that explore associations among them and reveal additional private information about the training dataset, thereby enabling linkability attacks. **Utility-preserving Scrubbing.** Our empirical evaluation demonstrates that differential privacy is partially effective in mitigating leakage of PII. Based on this observation, existing scrubbing techniques can be adapted to take into consideration the partial protection offered by DP and heuristically scrub only PII that remains unprotected (e.g. because it occurs many times). Such a DP-informed scrubbing would allow for improving model utility while maintaining a privacy level equivalent to a naive combination of DP training and scrubbing. **Comparison to Masked Language Models.** Prior work has explored PII reconstruction in the clinical health setting [37, 61] with masked language models (MLMs) based on the BERT architecture [14]. MLMs are trained to reconstruct \begin{table} \begin{tabular}{l c c c c} \hline \hline & **Undefended** & **DP** & **Scrub** & **DP + Scrub** \\ \hline Test Perplexity & 14 / 9 & 14 & 16 & 16 \\ Extract Precision & 30\% & 3\% & 0\% & 0\% \\ Extract Recall & 23\% & 3\% & 0\% & 0\% \\ \hline Reconstruction Acc. & 18\% & 1\% & 0\% & 0\% \\ Inference Acc.
(\(|\mathcal{C}|=100\)) & 70\% & 8\% & 1\% & 1\% \\ \hline MI AUC & 0.96 & 0.5 & 0.82 & 0.5 \\ \hline \hline \end{tabular} \end{table} TABLE VI: Our results on ECHR for GPT-\(2\)-Large summarize the privacy-utility trade-off. We show the undefended model’s perplexity with/without masking generated PII. The undefended model has the lowest perplexity but the highest leakage. DP with \(\epsilon=8\) mitigates MI and (partially) PII leakage. Scrubbing only prevents PII leakage. DP with scrubbing mitigates all the privacy attacks but suffers from utility degradation. Fig. 7: Connecting sentence-level membership inference with PII reconstruction in GPT-\(2\)-Large. (a) shows the ROC curve against our fine-tuned model using a shadow model attack on ECHR. (b) shows that the memorization score of generated sequences is nearly zero and (c) shows that the memorization score correlates with the probability of correct PII reconstruction. a word in a masked query, which is equivalent to the PII reconstruction task in Eq. (5). During training, BERT models optimize the masked word's probability conditioned on the prefix and suffix, in comparison to GPT-2 which is auto-regressive and can only be conditioned on the prefix. A service deploying an MLM enables trivial attacks to query the most-likely replacement for a single mask conditioned on _the entire sentence_, unlike GPT-2 models. One of our contributions is to imitate this functionality in GPT-2 models to reconstruct PII. Attacks on GPT-2 and similar models employed for text generation are potentially a greater threat. Autoregressive models typically expose a next-word prediction API, whereas BERT-like models are often used for downstream tasks such as text classification, with less revealing APIs. We state that Eq. (5) is intractable, which is also true for existing MLMs since an attacker does not know the number of tokens in the PII and has to perform a general constrained search. **Need for Better Benchmarks.** In conducting this research, we realized the shortage of good benchmark datasets for measuring PII leakage. A notable exception is the Text Anonymization Benchmark of Pilan et al. [51] with human-annotated PII. However, we found this dataset to be too small for fine-tuning models with DP-SGD: the dataset contains a subset of \(1,268\) out of the estimated \(11,500\) cases from the original ECHR dataset [12]. With so few records, we were unable to fine-tune models with both reasonable privacy (i.e., low \(\varepsilon\)) and utility (low perplexity). Our work motivates the need for better benchmarks for evaluating attacks and mitigations against PII leakage in trained models, in addition to evaluating text anonymization techniques at a dataset level. In performing our evaluation, we rely on off-the-shelf text anonymization tools powered by NER models to tag PII in generated sentences and leverage them in computing our leakage metrics. As a consequence of this approach, our metrics can capture leakage that is dependent on the quality of the tools we use. Assuming that NER models and other techniques used in these tools are prone to error, our results provide a lower bound on the leakage. **High-Precision/Low-Recall Attacks.** Our attacks evaluate PII leakage using average-case metrics and provide an overview of the threat of PII leakage in LMs. We did not consider high-precision/low-recall attacks [10] and further research is needed to explore their potential impact and effectiveness. **Limitations**.
Due to the lack of extensive, annotated benchmark datasets for studying PII leakage in LMs, we employ the same NER model for both scrubbing and measuring PII leakage. Pilan et al. [51] demonstrate that a fine-tuned NER model with ground-truth PII annotations achieves recall rates between 84-93%, decreasing to 77% without annotations. Since scrubbing cannot remove all PII and we show PII leakage empirically, we conclude that scrubbing cannot fully prevent PII leakage. Further research is required to expand our findings to other LMs and datasets. ## VI Related Work We discuss prior work on attacks inferring private data as well as defenses to mitigate the leakage. **Extraction of Training Data.** There is extensive work studying how large language models memorize training data and attacks inferring information under various threat models. Research has shown the feasibility of extracting different types of information including individual sentences [9, 28], inserted canaries [8, 50, 68] as well as \(n\)-grams [42]. Prior work studied the leakage of PII in masked language models [36, 61], large language models [26, 54] and Smart Reply classification models [32]. In addition to demonstrating that language models leak training data, other efforts focus on understanding the causes for such leakage. Jagielski et al. [31] explore the causes of memorization such as training data ordering, i.e., samples can have different privacy risks independent of their content. Tirumala et al. [60] study the effect of memorization across variables such as dataset size, learning rate, and model size. Related work focuses mainly on understanding the leakage in the absence of mitigations. In contrast, we are the first to evaluate the interplay of defenses such as PII scrubbing and differential privacy in an end-to-end training pipeline. Existing work on training data extraction focuses on _any_ type of memorization [9] in public pre-trained LMs or models trained from scratch [33], whereas we focus on leakage of PII on fine-tuned LMs given the context where it appears (prefix and suffix), no context, or a list of PII candidates. **Mitigations.** Several works have proposed solutions to mitigate leakage of private information mainly based on differential privacy (DP) guarantees in the training pipeline. Yu et al. [67] and Li et al. [38] propose efficient methods for differentially private fine-tuning of LMs on private data. Shi et al. [57] propose selective DP--where DP is only applied to samples containing sensitive information to limit utility degradation. Stock et al. [59] study canary extraction attacks against models fine-tuned from GPT-2 using DP-SGD. Closer to our work, Zhao et al. [70] propose combining de-duplication, redaction, and DP-SGD to mitigate PII leakage. It would be most interesting to study how this proposal fares with respect to the risks and metrics we present. ## VII Conclusion Our work explores privacy/utility trade-offs of using defenses such as PII scrubbing and Differentially Private training when fine-tuning language models. We focus on measuring PII leakage from the training data with respect to three different adversary goals: PII extraction, reconstruction, and inference, and provide game-based definitions and leakage metrics for them. Our findings show that differential privacy is useful in mitigating PII leakage to a large extent but cannot completely eliminate it on its own.
Traditional data curation approaches such as PII scrubbing are a crucial part of the training pipeline and are still necessary to achieve an appropriate level of protection. We advocate for the design of less aggressive PII scrubbing techniques that take into account the protection afforded by DP and achieve a better privacy/utility trade-off. ## Acknowledgements We are grateful to Victor Rühle, Saurabh Naik, Boris Köpf and the anonymous reviewers for their suggestions and feedback that significantly improved this paper.
2310.00728
Physics-Informed Graph Neural Network for Dynamic Reconfiguration of Power Systems
To maintain a reliable grid we need fast decision-making algorithms for complex problems like Dynamic Reconfiguration (DyR). DyR optimizes distribution grid switch settings in real-time to minimize grid losses and dispatches resources to supply loads with available generation. DyR is a mixed-integer problem and can be computationally intractable to solve for large grids and at fast timescales. We propose GraPhyR, a Physics-Informed Graph Neural Network (GNNs) framework tailored for DyR. We incorporate essential operational and connectivity constraints directly within the GNN framework and train it end-to-end. Our results show that GraPhyR is able to learn to optimize the DyR task.
Jules Authier, Rabab Haider, Anuradha Annaswamy, Florian Dorfler
2023-10-01T17:02:29Z
http://arxiv.org/abs/2310.00728v2
# Physics-Informed Graph Neural Network for Dynamic Reconfiguration of Power Systems ###### Abstract To maintain a reliable grid we need fast decision-making algorithms for complex problems like Dynamic Reconfiguration (DyR). DyR optimizes distribution grid switch settings in real-time to minimize grid losses and dispatches resources to supply loads with available generation. DyR is a mixed-integer problem and can be computationally intractable to solve for large grids and at fast timescales. We propose GraPhyR, a Physics-Informed Graph Neural Network (GNNs) framework tailored for DyR. We incorporate essential operational and connectivity constraints directly within the GNN framework and train it end-to-end. Our results show that GraPhyR is able to learn to optimize the DyR task. Graph Neural Network, Dynamic Reconfiguration, Physics Informed Learning. ## I Introduction The global energy landscape is rapidly evolving with the transition towards renewable energy generation. This transition brings numerous benefits for the climate, but also presents challenges in effectively controlling and optimizing power systems with high penetration of intermittent renewable generation such as solar and wind. New operating schemes are needed to ensure efficient and reliable grid operations in the presence of intermittent generation. Significant research efforts focus on optimizing resource dispatch and load flexibility towards reducing costs and increasing grid efficiency; however there remain efficiency gains to be had when co-optimizing grid topology. To this end, we propose _Dynamic Reconfiguration_ (DyR) in a distribution grid to increase operating efficiency by co-optimizing grid topology and resource dispatch. The distribution grid reconfiguration problem involves the selection of switch states (open/closed) to meet demand with available generation, while satisfying voltage and operating constraints. Grid reconfiguration can re-route power flows to reduce power losses [1], increase utilization of renewable generation [2, 3], and re-energize grids after contingencies. Presently, DyR is deployed for loss reduction in the EU [2], and for fault conditions in the US using rule-based control schemes. The widespread growth of distributed generation, storage, and electric vehicles creates the opportunity for DyR for loss reduction, whereby topology and dispatch decisions are made _fast and frequently_ in response to faster resource timescales; as solar generation varies, the topology is adapted to supply loads in close proximity to generation, thus reducing losses and improving voltage profiles across the grid. The DyR problem is a mixed integer program (MIP) due to the discrete nature of switch decisions. It is well known that MIPs are NP-hard (i.e. cannot be solved in polynomial time) and thus may be computationally intractable for large-scale problems. A distribution substation may have 10 feeders each with 5 switches, resulting in over \(10^{15}\) possible topologies. Even if operating constraints and load conditions admit only \(1\%\) of these topologies as feasible, the search space remains prohibitively large for traditional approaches. One option is to restrict the optimization to a single feeder, however optimizing topology over all feeders permits load transfer and generation exports. Machine learning (ML) offers an alternative by shifting the computational burden to offline training, thereby making dynamic decision making via the online application of ML algorithms computationally feasible.
Recent works propose ML for solving MIPs and combinatorial optimization (CO) [4], either in an end-to-end fashion or to accelerate traditional solvers. Graphs play a central role in formulating many CO problems [5], representing paths between entities in routing problems, or interactions between variables and constraints in a general CO [6, 7]. The use of Graph Neural Networks (GNNs) is also being explored to leverage the underlying graph structure during training and identify common patterns in problem instances. The traveling salesman problem (TSP) is a fundamental problem in CO and a standard benchmark which has been extensively studied with traditional optimization techniques. Recently, GNNs have been used to solve the TSP with good performance and generalizability [8, 9, 10]. In this work we leverage GNNs to learn the power flow representation for reconfiguration. Grid reconfiguration for distribution grids has been studied with varying solution methodologies including knowledge-based algorithms and single loop optimization [1, 11], heuristic methods [12, 13], and reformulation as a convex optimization problem using big-\(\mathcal{M}\) constraints [14, 15, 16]. However, these methods are not computationally tractable for large-scale optimization in close to real-time applications, and may be limited to passive grids (i.e. no local generation). Machine learning approaches for DyR have also been proposed [17, 3]. In [17] the DyR problem is formulated as a Markov decision process and solved using reinforcement learning. In [3] a light-weight physics-informed neural network is proposed as an end-to-end learning-to-optimize framework with certified satisfiability of the power physics. A physics-informed rounding layer explicitly embeds the discrete decisions within the neural framework. These approaches show potential, but both are limited to a given grid topology and switch locations. Our approach is similar to that of [3] wherein we embed discrete decisions directly within an ML framework. The main contribution of this paper is **GraPhyR**, a graph neural network (**Gra**) framework employing physics-informed rounding [3] (**PhyR**) for DyR in distribution grids. GraPhyR is an end-to-end framework that learns how to optimize, and is enabled by four key architectural components: **(1) A message passing layer that models switches as gates:** The gates are implemented as a value between zero and one to model switches over a continuous operating range. Gates control the flow of information through switches in the GNN, modeling the control of physical power flow between nodes. **(2) A scalable local prediction method:** We make power flow predictions locally at every node in the grid using local features. The predictors are scale-free and so can generalize to grids of any topology and size. **(3) A physics-informed rounding layer:** We embed the discrete open/closed decisions of switches directly within the neural framework. PhyR selects a grid topology for each training instance upon which GraPhyR predicts a feasible power flow and learns to optimize a given objective function. **(4) A GNN that takes the electrical grid topology as input:** We treat the grid topology and switch locations as an input which permits GraPhyR to learn the power flow representation across multiple possible distribution grid topologies within and across grids.
Thus GraPhyR can optimize topology and generator dispatch: (a) on multiple grid topologies seen during training, and (b) under varying grid conditions such as (un)planned maintenance of the grid. We demonstrate the performance of GraPhyR in predicting near-optimal and feasible solutions. We also show the effectiveness of GraPhyR in adapting to unforeseen grid conditions. The remainder of this paper is organized as follows. Section II presents DyR as an optimization problem. Section III presents the GraPhyR method and details the four key architectural components. Section IV presents the simulation results, and conclusions are drawn in Section V. ## II Reconfiguration as an Optimization Problem We consider DyR of distribution grids with high penetration of distributed generation. We model the power physics using Linearized DistFlow [1] as below: \[\min_{\mathbf{\psi}}\,f(\mathbf{x},\mathbf{\psi})=\sum_{(i,j)\in\mathcal{A}}(p_{ij}^{2}+q_{ij}^{2})R_{ij} \tag{1}\] s.t. \[p_{j}^{G}-p_{j}^{L}=\sum_{k:(j,k)\in\mathcal{A}\cup\mathcal{A}_{sw}}p_{jk}-\sum_{i:(i,j)\in\mathcal{A}\cup\mathcal{A}_{sw}}p_{ij},\quad\forall j\in\mathcal{N} \tag{2}\] \[q_{j}^{G}-q_{j}^{L}=\sum_{k:(j,k)\in\mathcal{A}\cup\mathcal{A}_{sw}}q_{jk}-\sum_{i:(i,j)\in\mathcal{A}\cup\mathcal{A}_{sw}}q_{ij},\quad\forall j\in\mathcal{N} \tag{3}\] \[v_{j}=v_{i}-2(R_{ij}p_{ij}+X_{ij}q_{ij}),\quad\forall(i,j)\in\mathcal{A} \tag{4}\] \[|v_{j}-v_{i}+2(R_{ij}p_{ij}+X_{ij}q_{ij})|\leq\mathcal{M}(1-y_{ij}),\quad\forall(i,j)\in\mathcal{A}_{sw} \tag{5}\] \[\sum_{(i,j)\in\mathcal{A}_{sw}}y_{ij}=N-1-M \tag{6}\] \[|p_{ij}|\leq\mathcal{M}y_{ij},\quad\forall(i,j)\in\mathcal{A}_{sw} \tag{7}\] \[|q_{ij}|\leq\mathcal{M}y_{ij},\quad\forall(i,j)\in\mathcal{A}_{sw} \tag{8}\] \[\underline{p}_{j}^{G}\leq p_{j}^{G}\leq\overline{p}_{j}^{G},\quad\underline{q}_{j}^{G}\leq q_{j}^{G}\leq\overline{q}_{j}^{G},\quad\forall j\in\mathcal{N} \tag{9}\] \[\underline{v}\leq v_{j}\leq\overline{v},\quad\forall j\in\mathcal{N} \tag{10}\] \[y_{ij}\in\{0,1\},\quad\forall(i,j)\in\mathcal{A}_{sw} \tag{11}\] \[\mathcal{G}\big(\mathcal{N},\,\mathcal{A}\cup\{(i,j)\in\mathcal{A}_{sw}:y_{ij}=1\}\big)\text{ is connected} \tag{12}\] Here \(\mathbf{\psi}\) collects the power flows \(p_{ij},q_{ij}\), voltages \(v_{i}\), generation \(p_{i}^{G},q_{i}^{G}\), and switch statuses \(y_{ij}\); \(R_{ij}\) and \(X_{ij}\) are the line resistance and reactance, and \(\mathcal{M}\) is the big-\(\mathcal{M}\) relaxation constant. Constraints (2)-(3) enforce nodal power balance, (4)-(5) the linearized Ohm's law on lines and (closed) switches, (6) radiality, (7)-(8) zero flow through open switches, (9) generation limits, (10) voltage limits, (11) the binary switch decisions, and (12) grid connectivity. ## III GraPhyR We propose GraPhyR, shown in Fig. 1, which embeds the DyR problem (1)-(12) within a physics-informed GNN trained end-to-end. ### _Message Passing_ The GNN models the distribution grid topology as an undirected graph, with switch embeddings modeling the switches in the electrical grid. The GNN's message passing layers incorporate these embeddings as gates, which enables GraPhyR to learn the representation of linearized Ohm's law of (5) across multiple topologies in a physics-informed way. The inputs to the GNN are the grid topology and nodal loads, and the output is a set of node and switch embeddings which will be used to make reconfiguration, power flow, and voltage predictions. #### Iii-A1 Grid Topology as Input Data for Graph Structure An input to the GNN is the grid topology described by \(\mathcal{G}(\mathcal{N},\mathcal{A},\mathcal{A}_{sw})\), using which the GNN models the physical grid topology as an undirected graph \(\mathcal{G}(\mathcal{N},\mathcal{E},\mathcal{E}_{sw})\) with \(N\) nodes, \(M\) lines, and \(M_{sw}\) switches. Trivially, \(\mathcal{E}\) (\(\mathcal{E}_{sw}\)) represents the undirected communication links along the directed edges \(\mathcal{A}\) (\(\mathcal{A}_{sw}\)) to support message passing and extracting the problem representation in the embeddings. By including \(\mathcal{G}(\mathcal{N},\mathcal{A},\mathcal{A}_{sw})\) as an input to the GNN, our GraPhyR framework is able to adapt to changing grid conditions, rather than requiring a large training dataset with multiple scenarios. #### Iii-A2 Initial Node, Line, and Switch Embeddings The second input data to the GNN is the load data \(\mathbf{x}^{0}\) which defines the node embeddings.
The load data contains the active and reactive power load \(p_{i}^{L}\) and \(q_{i}^{L}\) for each node \(i\) in the grid and thus determines the initial node embeddings \(x_{i}^{0}\) of every node \(i\) in the corresponding graph where \(\mathbf{x}^{0}=\left[x_{1}^{0},\ldots,x_{N}^{0}\right]^{T}=\left[(p_{1}^{L},q_{1}^{L}),\ldots,(p_{N}^{L},q_{N}^{L})\right]^{T}\). The line embeddings are set to one and are not updated by the message passing layers. The switch embeddings determine the value of the gate and are randomly initialized, similar to randomly initializing weights in a neural network. The switch embeddings are updated through the message passing layers. Initial line and switch embeddings are given by \(z_{ij}^{0}\), \(\forall\{i,j\}\in\mathcal{E}\cup\mathcal{E}_{sw}\). #### Iii-A3 Message Passing Layers In each hidden layer of the GNN the nodes in the graph iteratively aggregate information from their local neighbors. Deeper GNNs have more hidden layers and thus have node embeddings which contain information from further reaches of the graph. For each node embedding \(x_{i}^{0}\) in the graph, the first message passing layer is defined in (13) where \(\mathcal{N}_{i}\) denotes the set of neighboring nodes to node \(i\). For each switch embedding \(z_{ij}^{0}\) in the graph, the first message passing layer is defined in (15). \[x_{i}^{1}=ReLU(W_{1}^{0}x_{i}^{0}+\sum_{j\in\mathcal{N}_{i}}\{W_{2}^{0}\cdot f(z_{ij}^{0})\cdot x_{j}^{0}\}) \tag{13}\] \[f(z_{ij}^{0})=\begin{cases}sig(z_{ij}^{0})&\text{if }\{i,j\}\in\mathcal{E}_{sw}\\ 1&\text{otherwise}\end{cases} \tag{14}\] \[z_{ij}^{1}=ReLU(W_{3}^{0}(x_{i}^{0}+x_{j}^{0})+W_{4}^{0}z_{ij}^{0}),\forall\{i,j\}\in\mathcal{E}_{sw} \tag{15}\] For the remaining message passing layers, denoted by \(l\in\{1,2,\ldots,\mathcal{L}-1\}\), a residual connection is added to improve prediction performance and training efficiency [18]. The resulting node and switch embeddings are: \[x_{i}^{l+1}=x_{i}^{l}+ReLU(W_{1}^{l}x_{i}^{l}+\sum_{j\in\mathcal{N}_{i}}\{W_{2}^{l}\cdot f(z_{ij}^{l})\cdot x_{j}^{l}\}) \tag{16}\] \[f(z_{ij}^{l})=\begin{cases}sig(z_{ij}^{l})&\text{if }\{i,j\}\in\mathcal{E}_{sw}\\ 1&\text{otherwise}\end{cases} \tag{17}\] \[z_{ij}^{l+1}=z_{ij}^{l}+ReLU(W_{3}^{l}(x_{i}^{l}+x_{j}^{l})+W_{4}^{l}z_{ij}^{l}),\forall\{i,j\}\in\mathcal{E}_{sw} \tag{18}\]
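For concreteness, a minimal PyTorch sketch of one gated message-passing layer ((16)-(18)) follows. The tensor layout, the elementwise placement of the gate relative to \(W_{2}\), and all module names are our own reading of the equations, not the authors' released code.

```
import torch
import torch.nn as nn

class GatedMPLayer(nn.Module):
    """One message-passing layer per Eqs. (16)-(18): switch embeddings act as
    sigmoid gates on messages across switch edges; line edges pass f(.) = 1."""

    def __init__(self, h: int):
        super().__init__()
        self.W1 = nn.Linear(h, h, bias=False)  # node self term
        self.W2 = nn.Linear(h, h, bias=False)  # neighbor term
        self.W3 = nn.Linear(h, h, bias=False)  # node -> switch term
        self.W4 = nn.Linear(h, h, bias=False)  # switch self term

    def forward(self, x, edges, z, is_switch):
        # x: (N, h) node embeddings; z: (E, h) edge embeddings
        # edges: (E, 2) long tensor of undirected edges; is_switch: (E,) bool
        src, dst = edges[:, 0], edges[:, 1]
        gate = torch.where(is_switch.unsqueeze(-1),
                           torch.sigmoid(z), torch.ones_like(z))  # Eq. (17)
        # accumulate gated messages in both directions of each undirected edge
        msg = torch.zeros_like(x)
        msg.index_add_(0, src, gate * self.W2(x[dst]))
        msg.index_add_(0, dst, gate * self.W2(x[src]))
        x_new = x + torch.relu(self.W1(x) + msg)                      # Eq. (16)
        z_sw = z + torch.relu(self.W3(x[src] + x[dst]) + self.W4(z))  # Eq. (18)
        z_new = torch.where(is_switch.unsqueeze(-1), z_sw, z)  # lines stay at 1
        return x_new, z_new
```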
This improves the computational efficiency. Fig. 1: GraPhyR: proposed framework to solve the DyR problem. ### _Prediction_ After the \(\mathcal{L}\) message passing layers, the embeddings extracted from the input data are used to predict the switch open/close status and a subset of the power flow variables, denoted as independent variables. #### Iii-B1 Variable Space Partition We partition the variable space into independent and dependent variables. The independent variables constitute the active power flows \(p_{ij}\), nodal voltages \(v_{i}\), and switch open/close status \(y_{ij}\). The dependent variables constitute the reactive power flows \(q_{ij}\), and nodal generation \(\{p_{i}^{G},q_{i}^{G}\}\). We leverage techniques for variable space reduction to calculate the dependent variables from the independent variables, using constraints (2)-(12). This step ensures that the power physics constraints have certified satisfiability, as further discussed in Section III-E. This partition is non-unique. It critically depends on the structure of the given problem which determines the relationship between the sets of variables, and the neural architecture which determines the relationship between inputs, predictions, and consecutive neural layers. **We further advocate that the neural architecture itself must be physics-informed, to embed domain knowledge and physical constraints directly into the neural network, as we have done in GraPhyR.** #### Iii-B2 Local Prediction Method Our prediction method leverages two key observations: (i) the relationship between power flows and voltages are the same for any node-edge pair and are modelled by the physics equations (2)-(5); (ii) the binary nature of switches makes it inherently different from a distribution line. Using these, we define two local prediction methods which use multi-layer perceptrons: a line predictor (L-predictor) and a switch predictor (S-predictor), shown in Fig. 3. The L-predictor in (19) predicts power flow and the voltages of the two nodes connected by the line using the node and global embeddings. The S-predictor also predicts the probability for the switch to be closed, using the switch embeddings \(z_{ij}^{\mathcal{L}}\) in addition to the node and global embeddings, as in (20). All predictions are denoted with a hat (i.e. \(\hat{v}_{i}\)) and will be processed in subsequent layers to render the final topology and dispatch decisions. \[[\hat{p}_{ij},\hat{v}_{i}^{j},\hat{v}_{j}^{i}]=\text{L-predictor} [x_{i}^{\mathcal{L}},x_{j}^{\mathcal{L}},x_{G}^{\mathcal{L}}],\qquad\forall(i, j)\in\mathcal{A} \tag{19}\] \[[\hat{p}_{ij},\hat{v}_{i}^{j},\hat{v}_{j}^{i},\hat{y}_{j}]=\text{ S-predictor}[x_{i}^{\mathcal{L}},x_{j}^{\mathcal{L}},z_{ij}^{\mathcal{L}},x_{G}^{ \mathcal{L}}],\;\forall(i,j)\in\mathcal{A}_{sw} \tag{20}\] Our local predictors exploit the full flexibility of GNNs. They are permutation invariant to the input graph data; are independent of the size of the graph (scale-free); and are smaller than the corresponding global predictor for the same grid. The first feature means our framework is robust to changes in input data. The last two features means our framework is lightweight and scalable. This would not be possible with a global predictor which predicts all independent variables from node and switch embeddings across the graph. The size of the input and output layers of a global predictor would depend on the size of the graph and the number of switches, and is the limitation in [3]. 
Table I summarizes the size of local and global predictors for the reconfiguration problem, where \(h\) is the dimension of the hidden graph embeddings. ### _Voltage Aggregation and Certified Satisfiability of Limits_ The local predictions obtained from the L-predictor and S-predictor generate multiple instances of voltage predictions for each node as indicated by a superscript. Specifically, the number of instances corresponds to the degree of the node \(i\), \(|\delta_{\mathcal{E}\cup\mathcal{E}_{sw}}(i)|\). We aggregate the voltage predictions to a unique value for each node in the grid as \(\hat{v}_{i}=\frac{1}{|\delta_{\mathcal{E}\cup\mathcal{E}_{sw}}(i)|}\sum_{j:\{i,j\}\in\mathcal{E}\cup\mathcal{E}_{sw}}\hat{v}_{i}^{j}\). The voltage predictions are then scaled onto the box constraints (10) with \(v_{i}=\underline{v}\cdot(1-\hat{v}_{i})+\overline{v}\cdot\hat{v}_{i}\). Notably, by selecting voltages as an independent variable in our variable space partition, we certify that voltage limits across the grid will always be satisfied, a critical aspect of power systems operation. ### _Topology Selection using Physics-Informed Rounding_ The S-predictor provides probabilistic predictions for open/close decisions of each switch. We recover binary decisions using a physics-informed rounding (PhyR) algorithm [3]. We exploit the radiality of distribution grids, which requires \(\mathcal{S}=N-1-M\) switches to be closed so there are always \(N-1\) conducting lines. The PhyR method selects the \(\mathcal{S}\) switches with the largest probabilities \(\hat{y}_{ij}\) and closes them by setting the corresponding \(y_{ij}=1\); the remaining switches are opened, \(y_{ij}=0\). This enforces (6) and (11). Note that as distribution grid technologies advance, bidirectional and loop flows may be easily incorporated in new protection schemes. Fig. 3: Local predictions made by the switch and line predictors use the node and switch embeddings extracted after \(\mathcal{L}\) message passing layers. Fig. 2: Message passing layers where switches are denoted by red-dashed lines. The node and switch embeddings are represented by blue and red colored blocks respectively, where the number of squares per-block indicates the dimension of the embeddings \(h\).
### _Loss Function_ The neural network learns to optimize by using an unsupervised framework. It has two objectives: to minimize line losses in (1), and to minimize inequality constraint violations of generation constraint (9) and connectivity constraints (12). Denoting these constraints as \(h(\mathbf{x},\boldsymbol{\psi})\leq 0\), we regularize the loss function using a soft-loss penalty with hyperparameter \(\lambda\). The loss function is \(l=f(\mathbf{x},\boldsymbol{\psi})+\lambda||\text{max}\{0,h(\mathbf{x}, \boldsymbol{\psi})\}||_{2}\). **Remark 1**: _The inequality constraints (7), (8), and (10) have certified satisfiability by design of GraPhyR._ **Remark 2**: _Our loss function is unsupervised, and does not need the optimal solutions, which may be unknown or computationally prohibitive to compute._ ## IV Experimental Results ### _Dataset and Experiment Setup_ We evaluate GraPhyR on a canonical distribution grid BW-33 [1] with 33 nodes, 29 lines, and 8 switches. We generate a variant of BW-33, called \(\mathcal{G}_{1}\) with 33 nodes, 27 lines, and 10 switches. We use the dataset from [3] which introduces distributed solar generation in the grid with a penetration of \(25\%\) generation-to-peak load. Loads are perturbed about their nominal value as typically done in literature. The two networks are shown in Fig. 4. The dataset has 8600 data points per grid which are divided as \(80/10/10\) for training/validation/testing. We implement GraPhyR using PyTorch and train on the MIT supercloud [19]. GraPhyR has \(4\) message passing layers each with dimension \(8\) (\(\mathcal{L}=4,h=8\)). The L-predictor and S-predictor have a single hidden layer with dimension 24 and 32 respectively. We use \(10\%\) dropout, batch normalization, and ReLU activation in both predictors. The soft loss hyperparameter is \(\lambda=100\), big-\(\mathcal{M}\) relaxation parameter is \(0.5\) per unit (p.u.), and voltage bounds are \(\underline{v}=0.83,\overline{v}=1.05\) p.u. which adapts to the lossy behavior of BW-33 [1, 3]. We use ADAM optimizer with a learning rate of \(\gamma=5e^{-4}\), a batch size of \(200\), and train for 1500 epochs. We evaluate the performance of the neural framework using a committee of networks approach. We train 10 models with independent weight initialization and average the predictions across all models. ### _Performance Metrics_ We adopt the performance metrics defined in [3] to assess prediction performance. The asterisks notation (i.e. \(v^{*}\)) denotes the optimal solution obtain from a MIP solver. **Dispatch error:** optimality metric of mean-squared error (MSE) in optimal generator dispatch: \(\frac{1}{N}\sum_{j\in\mathcal{N}}{(p_{j}^{G}-p_{j}^{G*})^{2}}+(q_{j}^{G}-q_{ j}^{G*})^{2}\). **Voltage error (VoltErr):** optimality metric of MSE in nodal voltage prediction: \(\frac{1}{N}\sum_{j\in\mathcal{N}}{(v_{j}-v_{j}^{*})^{2}}\). **Topology error:** optimality metric of the Hamming distance [20] between two topologies, calculated as the ratio of switch decisions not in the optimal position: \(\frac{1}{M_{sw}}\sum_{(i,j)\in\mathcal{A}_{sw}}{(y_{ij}-y_{ij}^{*})^{2}}\). **Inequality violation:** feasibility metric of the magnitude of violations in constraint set, measuring the mean and maximum as \(\frac{1}{|h|}\sum_{k}\max{\{0,h^{k}(\mathbf{x},\boldsymbol{\psi})\}}\) and \(\max_{k}{\{\max\{0,h^{k}(\mathbf{x},\boldsymbol{\psi})\}}}\). 
**Number of violations exceeding a threshold:** feasibility metric of the number of inequality constraints violated by more than an \(\epsilon\) threshold: \(\sum_{k}\mathbb{1}_{\max\{0,h^{k}(\mathbf{x},\boldsymbol{\psi})\}>\epsilon}\).

### _Case (a). GraPhyR with Local vs. Global Predictors_

We first compare GraPhyR with local predictors to a variant with a global predictor, termed Global-GraPhyR. The global predictor determines all independent variables (real power flows, voltages, switch probabilities) using all node and line embeddings. We implement the global predictor with a single hidden layer of the same size as the input dimension. The global predictor has input/output dimensions of 328/78, as compared to the L-predictor and S-predictor with dimensions of 24/3 and 32/4, respectively. Note that the global predictor predicts one voltage per node, so voltage aggregation is not needed. We also compare the performance of GraPhyR with that of prior work which uses a simple neural network with two hidden layers [3]: _SiPhyR_, which employs PhyR; and _InSi_, which approximates a step function.

Fig. 4: Grid topology of BW-33 (left) and the synthetic \(\mathcal{G}_{1}\) (right). Switches are indicated with green dashed lines; solar generator locations are indicated with yellow nodes.

Table II-(a) shows the prediction performance of these methods. We first observe that **the GNN frameworks achieve lower dispatch error**, with Global-GraPhyR outperforming SiPhyR by two orders of magnitude. The GNN uses topological information to optimize the dispatch and satisfy loads. Second, **the PhyR-based frameworks achieve lower topology errors**, by up to 10%, by embedding the discrete decisions directly within the ML framework. However, the topology error remains high (\(>30\%\)), demonstrating the challenge of learning to optimize this combinatorial task. Finally, **SiPhyR and Global-GraPhyR achieve the best performance across feasibility metrics**, with lower magnitude and number of inequality violations. Notably, the maximum inequality violation is an order of magnitude higher for InSi, which does not benefit from PhyR, and for GraPhyR, which makes local predictions. This is expected. First, PhyR explicitly accounts for binary variables within the training loop to enable end-to-end learning: PhyR selects a feasible topology upon which the neural framework predicts a near-feasible power flow solution. Second, GraPhyR sacrifices some prediction performance for the flexibility to train and predict on multiple graphs.

Figure 5 plots the mean inequality violations for GraPhyR. The constraints are always respected for voltage (by design) and connectivity (by constraint penalty). Nodal generation constraints are frequently violated, as the lowest-cost (lowest line losses) solution is to supply all loads locally.

We next test the limits of topology prediction within our ML framework by comparing with a semi-supervised approach. The loss function includes a penalty on the switch status:

\[l_{sm}=f(\mathbf{x},\boldsymbol{\psi})+\lambda||\max\{0,h(\mathbf{x},\boldsymbol{\psi})\}||_{2}+\mu||\mathbf{y}-\mathbf{y}^{*}||_{2} \tag{21}\]

Table II-(a) shows the performance of the semi-supervised GraPhyR.
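A minimal PyTorch sketch of the two training losses, the unsupervised soft loss and the semi-supervised variant of Eq. (21), follows; `lam` and `mu` stand for \(\lambda\) and \(\mu\), and the default value of `mu` is purely illustrative since \(\mu\) is not reported here.

```python
import torch

def soft_loss(f_obj, h_ineq, lam=100.0):
    """Unsupervised loss: objective f(x, psi) plus a soft penalty on the
    inequality constraints h(x, psi) <= 0 (generation and connectivity)."""
    viol = torch.clamp(h_ineq, min=0.0)               # max{0, h}
    return f_obj + lam * torch.linalg.vector_norm(viol)

def semi_supervised_loss(f_obj, h_ineq, y_pred, y_opt, lam=100.0, mu=1.0):
    """Eq. (21): adds a penalty steering switch statuses towards the
    MIP-optimal y*; mu = 1.0 is a placeholder, not the paper's setting."""
    return soft_loss(f_obj, h_ineq, lam) + mu * torch.linalg.vector_norm(y_pred - y_opt)
```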
We also include results for Supervised-SiPhyR from [3], which uses a regression loss on voltages, generation, and switch status, plus an inequality-constraint violation penalty:

\[l_{sup}(\mathbf{x},\boldsymbol{\psi})=\|(\mathbf{v}-\mathbf{v}^{*})^{2}+(\mathbf{p}^{\mathbf{G}}-\mathbf{p}^{\mathbf{G}*})^{2}+(\mathbf{q}^{\mathbf{G}}-\mathbf{q}^{\mathbf{G}*})^{2}\|_{2}^{2}+\|(\mathbf{y}-\mathbf{y}^{*})^{2}\|_{2}^{2}+\lambda||\max\{0,h(\mathbf{x},\boldsymbol{\psi})\}||_{2} \tag{22}\]

The results show that semi-supervised GraPhyR outperforms Supervised-SiPhyR on topology error, achieving near-zero error. This substantial difference can be attributed to the GNN, which embeds topological data directly within the framework. Although these (semi-)supervised approaches achieve good performance, they are not practicable: they require access to the optimal solutions, which may be computationally prohibitive to generate across thousands of training data points.

A note must be made on computational time. Solving the DyR problem using Gurobi (a commercial MIP solver) takes on average 201 milliseconds per instance for BW-33, and 18 seconds per instance for a 205-node grid. Actual computational times vary significantly with load conditions which stress grid voltages (e.g., a 17-fold increase for the 205-node grid during high-load periods [3]). In contrast, the inference time of GraPhyR is only 84 milliseconds for a batch of 200 instances.

### _Case (b). Prediction Performance on Multiple Grids_

A key feature of GraPhyR is its ability to solve the DyR problem across multiple grid topologies. We trained and tested GraPhyR on two grids (BW-33 and \(\mathcal{G}_{1}\)) that have the same number of nodes but different numbers of lines and switches. Table II-(b) shows these results. The performance of GraPhyR on the two grids is similar to that of GraPhyR on a single grid, showing that GraPhyR can learn the power flow representation across multiple topologies and across multiple grids.

Fig. 5: Magnitude of the inequality violations for GraPhyR. The constraint sets on nodal generation, voltage limits, and connectivity constraints are separated by black vertical lines.

### _Case (c). Adapting to Changing Grid Conditions_

We next test GraPhyR on changing grid conditions, such as (un)planned maintenance by the grid operator or switch failure. Since power flows are highly correlated with the grid topology, changes in the set of feasible topologies due to maintenance or equipment failure can significantly change the prediction accuracy. Rather than training on multiple scenarios, we train only on the BW-33 grid under normal operating conditions and test on cases where a switch is required to be open or closed. Results are shown in Table II-(c). Generally, the dispatch error, voltage error, and average inequality-violation magnitudes remain similar to those under normal operation. However, there is a notable increase in the number of inequality violations and, when forcing a switch open, an order-of-magnitude increase in the maximum inequality violation. Forcing a switch open removes an edge from the GNN graph. The resulting graph is sparser, reducing access to information during message passing and changing the information contained in the node and switch embeddings.

The topology error is more nuanced. When switch 36 is closed, there is an increase in voltage and topology error. This is because, without any operator requirements on switch statuses, switch 36 remains optimally open for all load conditions.
Thus, when switch 36 is required to be open, there is a significant decrease in topology error, by almost 10%. Since we did not train on other scenarios, GraPhyR struggles to optimize the topology and predict voltages when the grid conditions deviate significantly from the training data, such as when switch 36 is closed. Similar performance degradation happens when switch 35 is required to be open; this switch is optimally closed for all load conditions. Interestingly, the status of switch 10 (open or closed) does not affect the topology error, although this switch is typically closed in the training data. There may be multiple (near-)optimal topologies with similar objective values; regularizing the dataset or performance metrics against these multiple solutions may be necessary to improve prediction performance.

## V Conclusion

We developed GraPhyR, an end-to-end physics-informed graph neural network framework to solve the dynamic reconfiguration problem. We model switches as gates in the GNN message passing, embed discrete decisions directly within the framework, and use local predictors to provide scalable predictions. Our simulation results show that GraPhyR outperforms methods without GNNs in learning to predict optimal solutions, and offers significant speed-up compared to traditional MIP solvers. Further, our approach adapts to unseen grid conditions, enabling real-world deployment. Future work will investigate the scalability of GraPhyR to larger grids (200+ nodes), approaches to reduce inequality-constraint violations, and regularization strategies to improve topology prediction. Finally, further efforts are needed in developing good datasets with representative time-series data in distribution grids.
2303.16072
Oscillon spectroscopy
The sine-Gordon model in 3+1 dimensions is known to admit two oscillons of different energy and frequency but comparable lifetime. We show that the oscillon spectrum includes more spherically symmetric ``states''. We identify new high-amplitude oscillons by allowing the field profile to have a number of nodes. For each number of nodes, we find 2 states with a comparable lifetime to the nodeless ones. Oscillons with nodes are, however, unstable to non-spherical perturbations and so their lifetime is significantly reduced. Interestingly, these states are seen to fragment into a collection of nodeless oscillons. The heavy nodeless oscillon is quite remarkable: despite its energy it is stable against fragmentation. Moreover, it has a considerably small oscillation frequency, meaning that it can be interpreted as a rather relativistic bound state.
Fabio van Dissel, Oriol Pujolas, Evangelos Sfakianakis
2023-03-28T15:48:43Z
http://arxiv.org/abs/2303.16072v1
# Oscillon spectroscopy

###### Abstract

The sine-Gordon model in 3+1 dimensions is known to admit two oscillons of different energy and frequency but comparable lifetime. We show that the oscillon spectrum includes more spherically symmetric "states". We identify new high-amplitude oscillons by allowing the field profile to have a number of nodes. For each number of nodes, we find 2 states with a comparable lifetime to the nodeless ones. Oscillons with nodes are, however, unstable to non-spherical perturbations and so their lifetime is significantly reduced. Interestingly, these states are seen to fragment into a collection of nodeless oscillons. The heavy nodeless oscillon is quite remarkable: despite its energy it is stable against fragmentation. Moreover, it has a considerably small oscillation frequency, meaning that it can be interpreted as a rather relativistic bound state.

###### Contents

* I Introduction
* II Quasi-breather approximation
* II.1 Fundamental Oscillons
* II.2 Excited Oscillons
* III Numerical Evolution in Spherical Symmetry
* III.1 Experimental Setup
* III.1.1 Initial Conditions and radiated power
* III.1.2 Definition of Oscillon States and their Lifetime
* III.2 Results
* III.2.1 Fundamental Oscillon States
* III.2.2 Excited Oscillon States
* IV Transitions
* IV.1 Adiabatic Transitions
* IV.2 Transitions in 3D
* V Conclusions
* A Multi-frequency oscillon solution

## I Introduction

Oscillons are fascinating objects: they are non-perturbative bound states of bosonic fields held together by self-interactions but unprotected against decay. The tension between these two factors results in oscillons being evaporating quasi-bound states with a finite (but often considerably large) lifetime. Mathematically, this translates into the existence of quasi-attractor, localized and radiative solutions of the field equations, which are long-lived. Usually, these do not admit closed-form solutions. Indeed, oscillons arise quite generically in simple scalar theories, including the "sine-Gordon" (SG) model in \(3+1\) dimensions, which is of relevance for axions and which will be the main focus of this work. The lifetime of oscillons is model dependent, but even in common theories like SG it reaches \(\sim 10^{3}m^{-1}\). Since even in classical field theory oscillons must be constructed numerically (except in special cases, where analytic approximations are available), their "discovery" [1; 2; 3; 4; 5] was somewhat overlooked. More recently, they have gained substantial interest and several aspects have been further investigated, including their formation, longevity, their classical and quantum radiation, as well as the model dependence of their behaviour, see e.g. [6; 7; 8; 9; 10; 11; 12; 13; 14; 15] and [16] for a recent review. Oscillons have been shown to emerge naturally in scenarios ranging from preheating to bubble collisions (see e.g. [17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28]), giving a gravitational wave signature [21; 23; 29; 30]. They also play a major role in dark matter [31; 32; 33; 34; 35; 36] and can be produced in topological defect networks [37; 38; 4; 39; 40; 41; 42; 43].

The aim of this work is to study one aspect that has received relatively little attention: models that lead to oscillons typically admit not one but several oscillons. In fact, there is a rather well defined discrete _spectrum of oscillons_ of increasing energy. One might expect that these higher energy oscillons must have a short lifetime.
Surprisingly enough, several of the "excited" oscillons have a lifetime comparable to the lowest-lying oscillon. For concreteness, we shall present and study these new excited oscillons in the SG model. The potential is

\[V(\phi)=f^{2}m^{2}\left(1-\cos\phi\right)\]

where \(\phi\) is the dimensionless field rescaled by the decay constant \(f\), \(m\) is the mass of \(\phi\) quanta and we assume \(m\ll f\). Ignoring \(4\pi\) factors, \(f\) is the UV cutoff of the theory, so \(m\ll f\) translates into weak coupling. Our statements should extend qualitatively to other potentials, though quantitatively important differences may arise.

Figure 1: Left: Amplitude of the field at the origin as a function of time, \(\phi(t,0)\), for the two 'fundamental' oscillons of the sine-Gordon model in \(3+1\) dimensions. Right: Field amplitude as a function of the radius to the center.

Oscillons can also be understood in Quantum Field Theory as states with a high occupation number \(N\) of the same single-particle state, see [44; 45; 46] and [12]. For potentials with negative self-interaction, this state must be somehow preferred. However, at weak coupling, \(\lambda=m^{2}/f^{2}\ll 1\), the energy gain is only effective for \(N\sim 1/\lambda\) [44; 45; 46]. This results in a large collective coupling, whence oscillons become non-perturbative. In this large occupation number limit, the mean-field description should be applicable, meaning that these states can be captured by solving the classical field equations of motion. States with \(N\sim f^{2}/m^{2}\) translate into classical field configurations with sizeable excursions \(\phi\sim\mathcal{O}(1)\). The total mass of these configurations scales as \(f^{2}/m\), with oscillon-dependent prefactors1. Obtaining the oscillon spectrum thus becomes a non-perturbative problem: there is no small expansion parameter.

Footnote 1: Since they oscillate with frequency close to \(m\) and are localized in a radius of order \(m^{-1}\), the highest gradients are typically at most \(\sim fm\), well below the cutoff scale of the effective field theory, \(\sim f^{2}\).

Most of the literature on oscillons in the SG theory in \(3+1\) dimensions refers to the lowest energy attractor oscillon. In this configuration the field at the origin \(\phi(t,0)\) oscillates within the first period of the potential, that is \(|\phi|\lesssim 2\pi\), as shown in Fig. 1. Interestingly, the SG model is known to admit another oscillon [47; 48], where \(\phi(t,0)\) oscillates in a larger range, reaching values of \(\pm 4\pi\), also shown in Fig. 1. Let us emphasize a few (very) remarkable properties of the \(\pm 4\pi\) attractor:

* As mentioned, its lifetime is comparable to that of the standard \(\pm 2\pi\) oscillon.
* As is obvious from Fig. 1, it contains a significant harmonic composition. The third harmonic (oscillating at \(3\omega\)) represents about 20% of the profile \(\phi(t,0)\). This harmonic has high enough frequency to be a radiation mode; however, it is somehow "trapped" in the oscillon core.
* This solution is quite attractive to initial conditions with high enough (and localized enough) energy.
* Depending on the initial conditions, the numerical evolution of the field equations starting near this attractor transitions adiabatically from the \(\pm 4\pi\) attractor to the \(\pm 2\pi\) one (thus collecting a total lifetime of more than \(2000m^{-1}\)).
* Most remarkably, the oscillation frequency is quite low, \(\omega\simeq 0.58m\), to be compared with \(\omega\simeq 0.92m\) of the "usual" oscillon. This is quite extraordinary, as the standard interpretation is that the difference between the mass of the field and the frequency of the oscillon (\(m-\omega\)) represents the binding energy per quantum in the solution. This is, then, a significantly relativistic bound state. Since, generally, the relativistic regime tends to make bound states significantly harder to understand, this system can provide useful insights.

These observations open up many questions: Are there more oscillons (of similar lifetimes)? Is there any solution that explores even higher amplitudes? Is the oscillon spectrum discrete? Are they stable? Are there transitions among different oscillon states, and how do they proceed? In this work we address these questions. We construct a sequence of new excited spherically symmetric oscillons with the defining property that they contain a given number of nodes. By "node" we mean a point in the radial coordinate where the field amplitude vanishes at all times.2 Oscillons with this kind of nodes are of course not expected to be attractors to generic initial conditions.3

Footnote 2: Our notion of "node" and of "excited oscillon" thus differs from previous works, in particular from Ref. [49]. Oscillons with nodes were also discussed in Ref. [22], albeit in the small amplitude limit.

Footnote 3: It is possible that oscillons with nodes are attractors to initial conditions with nodes. Mapping the basin of attraction of these solutions is beyond the scope of the present work.

However, the point of view of this work is to reveal the bound-state spectrum of the theory, or at least part of it. In some sense this is analogous to looking for the spectrum of resonances in QCD. Even if most of them are unstable and/or difficult to produce in practice, the spectrum of states with given properties (e.g. spherical symmetry) is certainly well posed. The spectrum of oscillon solutions that we find is shown in Fig. 2, where we see that they span a large range in central amplitude and total energy (rest mass). We do not go beyond states with 3 nodes, but in principle the sequence continues, perhaps indefinitely. Each point corresponds to a different oscillon that keeps the number of nodes for a significant time, which we demand to be comparable to the lifetime of the lowest-lying oscillon. Table 1 summarizes the basic properties of these states. The error bars indicate the range in amplitude \(\phi_{0}\) and frequency \(\omega\) that is explored by each oscillon during its lifetime.

Quite interestingly, we find that: i) the spectrum of oscillons is discrete; ii) the central field value \(\phi_{0}\) takes on specific values, which are close to multiples of \(2\pi\); iii) oscillons with large amplitude \(\phi_{0}\gtrsim 4\pi\) exist, at the expense of introducing nodes; iv) there are more examples of very relativistic oscillons (with \(\omega\) significantly below \(m\)).

The spectrum shown in Fig. 2 is obtained using a combination of analytic and numerical methods. Introducing a single-frequency ansatz, we reduce the equation of motion to an effective one-dimensional problem. This provides a guess which is used as the initial condition for simulating an oscillon with nodes, but which does not correspond to the actual oscillon profile.
The field then quickly relaxes to an excited oscillon with nodes and high harmonic content, meaning that these "guess" initial conditions fall inside the basin of attraction of these oscillons.

Two remarks are in order. First, it is clear that the sine-Gordon model has a larger class of spherically symmetric solutions, in the form of spherical domain walls (kinks) with an initial radius \(R_{0}\gg 1/m\). With perfect spherical symmetry, after a time of order \(R_{0}\) they reach small sizes, bounce back emitting radiation, and re-bounce a number of times [50]. In a way, then, these solutions are similar to oscillons. (In fact, some of them end up trapped in oscillons.) Of course, there is a continuum of solutions of this form and their lifetime is large, but we discard them as we are interested in states with well defined properties (number of nodes, radius, etc.) staying constant or at most changing adiabatically throughout the lifetime. Second, the spectrum in Fig. 2 is selected by imposing both a long lifetime and spherical symmetry. Relaxing the symmetry can of course impact the actual lifetime of these states, as it clearly also affects the spectrum. (With less symmetry, multi-oscillon solutions could exist.) In Section IV we study how allowing for aspherical modes affects these states. As it turns out, the lifetimes of oscillons are affected, but the heavy nodeless one is not. Quite remarkably, we see the breakup of an "excited" oscillon into a (large) collection of nodeless ones.

The current work is organized as follows. In Section II we describe the model and provide analytical constructions for oscillons, using both a single-frequency ansatz as well as considering contributions from higher harmonics. We provide oscillon solutions with up to 3 nodes and examine their properties. Section III contains an extensive set of numerical simulations, where the evolution of oscillons is computed in the spherical ansatz. In Section IV we discuss possible transitions between oscillon states and show three-dimensional simulations. We conclude and provide suggestions for future work in Section V.

## II Quasi-Breather approximation

The action we use is

\[S=f^{2}\,\int d^{3}x\,dt\,\left[\frac{1}{2}\partial_{\mu}\phi\partial^{\mu}\phi-m^{2}(1-\cos\phi)\right]\,, \tag{1}\]

where we have pulled out an overall factor given by the decay constant \(f\) so that the field \(\phi\) is dimensionless. Finding the spherically symmetric oscillons in this theory then reduces to looking for the configurations \(\phi(r,t)\) that solve the equation of motion

\[\frac{\partial^{2}\phi}{\partial t^{2}}-\frac{\partial^{2}\phi}{\partial r^{2}}-\frac{2}{r}\frac{\partial\phi}{\partial r}+m^{2}\,\sin\phi=0\, \tag{2}\]

with outgoing radiation boundary conditions. The construction of oscillons starts by treating them as a "quasi-breather", a periodic localized configuration, and optimizing the radial profile and the frequency to identify slowly radiating configurations. One starts with a single-frequency ansatz

\[\phi(r,t)=\Phi(r)\sin(\omega t) \tag{3}\]

with \(\omega<m\) for the oscillon solution to exist.
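For the reader's convenience, the reduction in the next step uses two standard period averages (the second is the integral representation of the Bessel function \(J_{0}\)):

\[\frac{\omega}{2\pi}\int_{0}^{2\pi/\omega}\sin^{2}(\omega t)\,dt=\frac{1}{2}\,,\qquad\frac{\omega}{2\pi}\int_{0}^{2\pi/\omega}\cos\big{(}\Phi\sin(\omega t)\big{)}\,dt=J_{0}(\Phi)\,,\]

so that, under the ansatz (3), \(\langle 1-\cos\phi\rangle=1-J_{0}(\Phi)\) over one period.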
By inserting this ansatz into the action and integrating over time over a period \(2\pi/\omega\), we arrive at the effective action

\[S_{\rm eff}=f^{2}\int d^{3}x\left\{\frac{1}{4}\omega^{2}\Phi^{2}-\frac{1}{4}[\nabla\Phi]^{2}-[1-J_{0}(\Phi)]\right\} \tag{4}\]

which in turn leads to the equation of motion for the amplitude \(\Phi(r)\)

\[\frac{d^{2}\Phi}{dr^{2}}+\frac{2}{r}\frac{d\Phi}{dr}+\omega^{2}\Phi-2J_{1}(\Phi)=0\, \tag{5}\]

where \(J_{n}\) are Bessel functions of the first kind. (Here and in what follows we work in units \(m=1\).) The solution \(\Phi(r)\) needs to be regular (zero derivative) at the origin and vanish at spatial infinity. This equation of motion has an intuitive mechanical analogue: that of a point particle moving in a one-dimensional potential well with potential \(U_{\rm eff}=\frac{1}{4}\omega^{2}\Phi^{2}-[1-J_{0}(\Phi)]\). In this analogy \(r\) is the time coordinate and the term \(-\frac{2}{r}\dot{\Phi}\) describes friction. The boundary conditions for finding oscillon solutions are equivalent to a ball starting at some large value \(\Phi(r=0)\) and rolling towards the origin, reaching it in infinite "time" \(r\to\infty\).

Figure 2: The spectrum of long-lived spherically symmetric oscillon configurations. The horizontal axis corresponds to the number of nodes and the vertical axis shows the maximum field excursion at the center of the oscillon. We stopped our exploration at oscillons with three nodes. The number in each box in the above plot shows the mass of each oscillon, with respect to the mass of the lowest-mass oscillon state, which is \(\simeq 400f^{2}/m\).

Fig. 3 shows the effective potential for different values of the oscillon frequency \(\omega\). The simple numerical procedure for finding oscillon solutions consists of choosing a frequency \(\omega<1\) and searching for the value of \(\Phi(r=0)\) that leads to \(\Phi(r\to\infty)=0\), corresponding to a solution where the point particle is released at rest from some amplitude and asymptotically reaches the origin. This of course neglects the case of the ball overshooting the point at the origin and probing negative values of \(\Phi\), later to return and reach the origin in infinite time from the left. Each time the ball overshoots the local maximum at the origin, the corresponding oscillon solution acquires a node. This construction leads to a two-parameter family of solutions, which is continuous in \(\omega<1\) and discrete in the number of nodes, ranging from \(0\) to infinity (in principle).

Figure 3: The effective potential for the single-frequency ansatz for \(\omega=0.5,0.8\) (blue and red respectively). The dashed lines correspond to \(\frac{1}{4}\Phi^{2}\) (black) and \(1-J_{0}(\Phi(r))\) (brown).
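The shooting procedure just described is simple to implement; the following is a minimal sketch of ours (in units \(m=1\), assuming NumPy/SciPy; the grid size, brackets, and tolerances are illustrative). For an \(n\)-node solution, one brackets the release amplitude between values that overshoot the origin \(n\) and \(n+1\) times.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import j1

def profile_eq(r, y, omega):
    """Eq. (5): Phi'' = -(2/r) Phi' - omega^2 Phi + 2 J1(Phi)."""
    phi, dphi = y
    fric = 0.0 if r == 0.0 else 2.0 * dphi / r   # regularize friction at r = 0
    return [dphi, -fric - omega**2 * phi + 2.0 * j1(phi)]

def shoot(phi0, omega, r_max=40.0):
    """Release the 'ball' at rest from Phi(0) = phi0 and return Phi(r_max)."""
    sol = solve_ivp(profile_eq, (0.0, r_max), [phi0, 0.0],
                    args=(omega,), rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

def find_amplitude(omega, lo, hi, tol=1e-12):
    """Bisect on Phi(0) so that Phi(r) -> 0 as r -> infinity (nodeless case):
    undershooting leaves Phi trapped at positive values, overshooting crosses
    the origin, so the sign of Phi(r_max) tells which side we are on."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if shoot(mid, omega) > 0.0:
            lo = mid        # undershoot: release from a larger amplitude
        else:
            hi = mid        # overshoot: release from a smaller amplitude
    return 0.5 * (lo + hi)
```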
Since the above construction (and any oscillon) does not lead to exact solutions of the equation of motion of the full system, oscillons in general will also have a radiating tail at higher harmonics. By introducing only the third harmonic4, the ansatz becomes

Footnote 4: We refer to the mode oscillating at the fundamental frequency \(\omega\) as the first or fundamental harmonic. For notational clarity, we must stress the difference between the terminology "fundamental oscillon", having no nodes as shown in Fig. 2, and the fundamental frequency / harmonic, which we define here.

\[\phi(r,t)=\Phi(r)\sin(\omega t)+\Phi_{3}(r)\sin(3\omega t). \tag{6}\]

Assuming \(\Phi_{3}\ll\Phi\), it is straightforward to see whether there are special points in solution space where the third harmonic is non-radiating. All we need to do is solve the equation

\[\frac{d^{2}\Phi_{3}(r)}{dr^{2}}+\frac{2}{r}\frac{d\Phi_{3}(r)}{dr}+(3\omega)^{2}\Phi_{3}(r)-J_{0}(\Phi(r))\Phi_{3}(r)=2J_{3}(\Phi(r)) \tag{7}\]

where the emergence of Bessel functions of different order is described in Appendix A. The inhomogeneous solution to the above equation can be found through the Green's function method

\[\Phi_{3}(r)=\int_{0}^{r}\mathcal{G}(r,r^{\prime})2J_{3}(\Phi(r^{\prime}))dr^{\prime} \tag{8}\]

The condition for a localized third-harmonic solution is simply the vanishing of the above integral for \(r\to\infty\). This is also derived in Ref. [51] using arguments based on destructive interference.

When the amplitude of the main harmonic \(\Phi(r)\) grows larger, so do in general the corresponding higher harmonics. This means that after some point the linear approximation of Eq. (7) fails to capture the dynamics of the system, since the third harmonic will be large enough to back-react onto the first harmonic. Ref. [51] proposed a quasi-breather construction that allows one to include the contribution of higher harmonics, taking their mutual interactions into account. In this formalism, the oscillon is described (taking only the first and third harmonics into account) as \(\phi\simeq\Phi_{1}\sin(\omega t)+\Phi_{3}\sin(3\omega t)+c_{3}\cos(3\omega t)\), where the term \(c_{3}\) is added in order to allow for outgoing radiation at \(r\to\infty\), which wouldn't be possible with an expansion solely in terms of sines or cosines. The equations that one needs to solve are

\[\frac{d^{2}\Phi_{1}}{dr^{2}}+\frac{2}{r}\frac{d\Phi_{1}}{dr}+\omega^{2}\Phi_{1}+f\left(\Phi_{1},\Phi_{3}\right)=0 \tag{9}\]

\[\frac{d^{2}\Phi_{3}}{dr^{2}}+\frac{2}{r}\frac{d\Phi_{3}}{dr}+9\omega^{2}\Phi_{3}+g\left(\Phi_{1},\Phi_{3}\right)=0 \tag{10}\]

\[\frac{d^{2}c_{3}}{dr^{2}}+\frac{2}{r}\frac{dc_{3}}{dr}+9\omega^{2}c_{3}+h\left(\Phi_{1},\Phi_{3}\right)c_{3}=0 \tag{11}\]

where \(f,g,h\) are functions of \(\Phi_{1}\) and \(\Phi_{3}\) and we assumed that \(c_{3}\) can still be treated in the linearized approximation. The derivation and exact form of these equations is given in Appendix A.

### Fundamental Oscillons

We start by constructing and studying the "fundamental" oscillon solutions, which in this context means oscillons without nodes in their spatial profile. Fig. 4 shows the two long-lived oscillon states that emerge for \(\omega/m\simeq 0.56,0.92\) (left and right panels), along with a "random" solution at \(\omega=0.7\). It is clear that the solution shown in the middle panel has a much larger radiating tail, leading to a faster loss of energy. This drives the oscillon towards the long-lived solution at \(\omega\simeq 0.92\).

Fig. 4 contains interesting information about the structure of the fundamental oscillons in the three-dimensional sine-Gordon equation. We first notice the long-distance behavior: the first harmonic is exponentially decaying, while the third and fifth ones are oscillating, both in the sine and in the cosine. However, the two terms in each harmonic have a constant shift of \(\pi/2\), leading to a traveling wave towards infinity. Thus this regime represents the radiation emanating from the oscillon. We see that for the longer-lived oscillons (\(\omega/m=0.56,0.92\)) the radiation tail is about one order of magnitude smaller than for a "random" solution with \(\omega/m=0.7\).
Furthermore, for the oscillon with \(\omega/m=0.56\), the radiation of the third harmonic is several orders of magnitude suppressed and the decay is controlled by the fifth harmonic. For the case of \(\omega/m=0.92\) the opposite occurs and the fifth harmonic is vastly subdominant.

From the sequence of panels in Fig. 4 we see how the third harmonic evolves as a function of the frequency and the oscillon height. The height of the first harmonic becomes larger for smaller frequencies, and this leads to a more significant excitation of the third harmonic. For \(\omega=0.56\) the third harmonic is non-perturbatively large close to the oscillon core. In this range of frequencies, the third harmonic is largely confined, as is evident from the significant difference between the sine and cosine terms near the core, as well as from the difference between the value of the third harmonic near and far from the core of the oscillon. Contrary to that, the fifth harmonic is always perturbative (much smaller than the first harmonic). Furthermore, the structure of the three first harmonics elucidates the difference between the near and far regions of the oscillon. In the far region, the two terms in each harmonic constituting the radiating tail have a phase difference of \(\pi/2\), whereas near the core they are in phase. For \(\omega=0.7\) we see that both higher harmonics change behavior near \(r=3\). Interestingly, for the long-lived oscillon of \(\omega/m=0.56\) the third harmonic becomes radiating for \(r\gtrsim 6\) and the fifth harmonic for \(r\gtrsim 4\), meaning that the core and tail of the oscillon can be perceived slightly differently for different harmonics.

Figure 4: Oscillon solutions for \(\omega=0.56,0.7,0.92\) (left to right). The first, third, and fifth harmonics are shown in blue, red and green respectively (solid for sines and dashed for cosines).

Since oscillons are not exact solutions of the equation of motion and they contain small radiating tails (even suppressed ones), one can define the rate at which energy is expelled from the oscillon towards infinity. The energy loss is directly related to the change in oscillon energy per unit time, \(\Gamma_{E}(\omega)=-\dot{E}(\omega)\), and is shown in Fig. 5. We see that the third harmonic has a dip at \(\omega\simeq 0.56\). By "zooming" in close to this dip, we can see the third harmonic vanish at large distances, within the numerical accuracy limits of our calculation. At this point, the energy loss is dominated by the fifth harmonic. Thus \(\omega\simeq 0.56\) defines a local minimum in energy loss, where a long-lived oscillon state is expected to exist.

The behavior of the energy loss at high frequencies, close to \(\omega=1\), is somewhat different. The energy loss there decreases monotonically, without providing a clear local minimum. As the oscillon loses energy, its frequency increases. However, this cannot continue arbitrarily, since the energy function has a minimum for \(\omega\simeq 0.93\), meaning that after this point the oscillon cannot lose energy and evolve adiabatically, leading to its sudden decay. This has been coined "energetic death" [51].

We can look for oscillon configurations in the space of instantaneous solutions which are able to retain their global properties over a substantial amount of time. Simply put, we can look for solutions whose frequency does not change over a long period of time.
From the quantities derived in the semi-analytic oscillon solution, we can compute the evolution of the instantaneous frequency of the system by solving the following equation numerically

\[\int_{\omega_{0}}^{\omega}\frac{dE/d\omega^{\prime}}{\Gamma_{E}(\omega^{\prime})}d\omega^{\prime}=-\int_{t_{0}}^{t}dt^{\prime} \tag{12}\]

Figure 5: _Left:_ The radiated power of the node-less oscillons in the third and fifth harmonic (red and black respectively), along with the total radiated power (blue-dashed). _Right:_ The oscillon energy as a function of frequency (blue). The two red dots correspond to the two long-lived states. The solid (dashed) part corresponds to states which are stable (unstable) with respect to long-wavelength perturbations. The orange curve shows the particle number \(N\) (defined as \(E_{osc}/\omega\)), around the point where the stability behavior changes.

Fig. 6 shows two clear minima of \(\dot{\omega}/\omega\), the relative evolution of the instantaneous frequency. One is at \(\omega\simeq 0.56\), as expected, and the other can be found at \(\omega\simeq 0.92\), leading to a concrete definition of long-lived oscillon states.

### Excited Oscillons

We now move to oscillon solutions with \(1,2\) and \(3\) nodes. By performing the same shooting process in order to find oscillon solutions, but starting with a slightly larger amplitude at \(r\simeq 0\), we discover a family of oscillon solutions where the shape of the first (fundamental) harmonic exhibits a node, a point in space where the amplitude vanishes. We should note that, within the validity of our construction method, the total amplitude does not vanish at all times, due to the fact that the higher harmonics do not vanish at the same point. However, they are vastly subdominant (one order of magnitude or more). We will thus use the characterization of nodes based on the first harmonic and keep in mind that in reality they are "approximate nodes".

Fig. 7 shows the structure of oscillons with \(1\), \(2\) and \(3\) nodes for two different frequencies each. One corresponds to a solution where the decay rate exhibits a local minimum, leading to a long-lived state, whereas the other frequency does not possess such features. We see a similar behavior as the one shown in Fig. 4, with the outgoing radiation being suppressed for certain values of the frequency. Interestingly, the higher harmonics are in general suppressed at the nodes. In some cases, (some) higher harmonics show an outgoing radiation-like behavior, meaning that the sine and cosine terms are exactly out of phase. In order to capture this, we simply add the squares of the two contributions to the harmonic in question (black lines in Fig. 7). This sum should not have oscillatory features for outgoing radiation, or whenever the sine and cosine terms oscillate out of phase.

Figure 6: _Left:_ The evolution of the instantaneous frequency \(\omega(t)\) as a function of time. We can clearly see two plateaus, corresponding to well-defined oscillons. _Right:_ The relative change in frequency \(\dot{\omega}/\omega\), clearly pointing to the existence of two clear oscillon states, at \(\omega\simeq 0.56\) and \(\omega\simeq 0.92\).

## III Numerical evolution in spherical symmetry

In this section we take an "experimental" point of view and use numerical methods to study the oscillon states of the sine-Gordon model.
We employ a simple central-difference algorithm to study the time evolution of these states, imposing spherical symmetry and implementing absorbing boundary conditions (we will discuss the nontrivial question of non-spherical breakup of oscillon states in Section IV). We checked that our results are robust to a variety of integration schemes and resolutions. The main points of this section can be summarized as follows:

* We take a different viewpoint here than the one provided in Section II. There, the quasi-breather picture gives a continuously connected spectrum of oscillon-like configurations. We'll show that, although some aspects of the dynamics of oscillon states can be captured in this way, generically other effects dominate the evolution of the system. We thus take a more restrictive approach to what we call a "state": _an oscillating field configuration that is able to retain most of its properties during its lifetime._ Although these properties can be understood in the quasi-breather framework, our current definition naturally leads to the emergence of a discrete spectrum of states.
* The single-frequency ansatz defines the initial conditions of our "experiment", labeled by frequency (continuous) and number of nodes in the spatial profile (discrete). We observe that not all of these initial conditions relax to our definition of a state. Using the quasi-breather picture we can semi-analytically predict where these states exist, finding good agreement with experiment. However, this framework doesn't seem appropriate to predict the overall lifetime of the states, which is dominated by additional instabilities. This effect is especially prevalent for states that carry nodes. Lifetimes discussed in this section are thus directly extracted from numerical simulations.
* As instabilities play an important role for the dynamics of these oscillons, the overall lifetime of the states is tied to the exact initial conditions that we choose (starting with a large perturbation around the exact state leads to an earlier onset of decay). When assigning lifetimes, we include uncertainties to account for this.
* We find a discrete spectrum of long-lived oscillon states. We characterize states as "long lived" when they have a lifetime comparable (\(\gtrsim 50\%\)) to the fundamental (lowest energy) state. By adding more and more nodes to the initial conditions a seemingly boundless spectrum emerges, with ever increasing central amplitude and total mass \(M\) (rest energy). We explicitly checked that states can be found with up to 3 nodes in the spatial profile, and there is no a priori reason to suspect that this process can't continue up to an arbitrary number of nodes. It is at this point not clear if this is a property of the sine-Gordon model with its infinite degenerate minima, and this question will be addressed in future work.

Figure 7: Oscillon solutions for \(n=1,2,3\) nodes (left to right). The upper panels correspond to long-lived solutions, \(\omega=0.86,0.92,0.82\) respectively. The lower panels correspond to "random" solutions, \(\omega=0.7,0.6,0.7\) respectively. The first, third, and fifth harmonics are shown in blue, red and green respectively (solid for sines and dashed for cosines). The black curve corresponds to the sum of squares of the corresponding sine and cosine terms of one higher harmonic.

Our results are summarized in Table 1.
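Before turning to the setup, here is a minimal sketch of ours (in units \(m=1\)) of the kind of central-difference evolution mentioned at the start of this section. Substituting \(\psi=r\phi\) turns Eq. (2) into the 1D wave equation \(\psi_{tt}=\psi_{rr}-r\sin(\psi/r)\); regularity is enforced by \(\psi(0)=0\), a first-order outgoing-wave condition acts as the absorbing boundary, and the step sizes below are illustrative (they must satisfy \(dt<dr\)).

```python
import numpy as np

def evolve_radial(phi_init, dr=0.05, dt=0.025, r_max=100.0, n_steps=40000):
    """Evolve the spherically symmetric sine-Gordon equation via psi = r*phi."""
    n = int(r_max / dr) + 1
    r = np.arange(n) * dr
    psi = r * phi_init(r)                    # t = 0
    psi_old = psi.copy()                     # start at rest: psi(-dt) = psi(0)
    for _ in range(n_steps):
        psi_new = np.empty_like(psi)
        lap = (psi[2:] - 2.0 * psi[1:-1] + psi[:-2]) / dr**2
        src = -r[1:-1] * np.sin(psi[1:-1] / r[1:-1])
        psi_new[1:-1] = 2.0 * psi[1:-1] - psi_old[1:-1] + dt**2 * (lap + src)
        psi_new[0] = 0.0                                       # regularity at r = 0
        psi_new[-1] = psi[-1] - dt * (psi[-1] - psi[-2]) / dr  # absorbing BC
        psi_old, psi = psi, psi_new
    phi = np.empty_like(psi)
    phi[1:] = psi[1:] / r[1:]
    phi[0] = phi[1]                          # phi(0) from the r -> 0 limit
    return r, phi
```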
### Experimental Setup

#### III.1.1 Initial Conditions and radiated power

The results of our numerical experiment are entirely determined by the evolution of the system through the equation of motion, Eq. (2), as well as by the initial and boundary conditions. We impose spherical symmetry on the system, \(\phi(\vec{x},t)\rightarrow\phi(r,t)\), and implement an absorbing boundary condition at \(r\rightarrow\infty\) and a Neumann boundary condition at \(r=0\): \(\partial_{r}\phi(0,t)=0\). This uniquely defines the problem once the initial conditions are specified.

To probe the space of possible oscillon states we use the single-frequency ansatz to guide our choice of initial conditions. This was described in Section II and numerically calls for the solution of Eq. (5) with boundary conditions \(\partial_{r}\Phi|_{r=0}=0\) and \(\Phi(r\rightarrow\infty)=0\). In practice, the boundary condition at infinity is replaced by a boundary condition at a finite but large radius, where the equation of motion can be approximated as

\[\frac{d^{2}\Phi}{dr^{2}}+\frac{2}{r}\frac{d\Phi}{dr}+(\omega^{2}-m^{2})\Phi\simeq 0 \tag{13}\]

which leads to the trivial solution \(\Phi\sim\frac{1}{r}e^{-\sqrt{m^{2}-\omega^{2}}\,r}\). At fixed frequency, this procedure defines a discrete spectrum of initial conditions for our setup, where the solutions are labeled by the number of nodes in the spatial profile \(\Phi(r)\). We then choose initial conditions to satisfy the single-frequency ansatz, with \(\phi(r,0)=\Phi(r)\) and \(\partial_{t}\phi(r,t)|_{t=0}=0\).

In practice, true oscillons have a non-negligible contribution from higher harmonics. The presence of higher harmonics in the full solution has three effects, which can be studied numerically as well as analytically within the quasi-breather picture outlined in Sec. II:

* There is a significant deviation of the spatial and temporal profile of the oscillon states with respect to the single-frequency ansatz. This effect is especially noticeable in large-amplitude oscillons.

\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline
\(n\) & \(\omega\) (\(\cdot m\)) & \(\Phi_{0}\) (\(\div 2\pi\)) & \(\tau_{sp}\) (\(\cdot 10^{2}m^{-1}\)) & \(M\) (\(\cdot 10^{2}f^{2}/m\)) & \(M\tau_{sp}\) (\(\cdot 10^{6}\,f^{2}/m^{2}\)) \\ \hline \hline
0 & 0.58(5) & 1.92(9) & 9.5(5) & 17.0(8) & 1.6(2) \\ \hline
0 & 0.92(1) & 0.68(5) & 7.0(5) & 3.8(2) & 0.27(3) \\ \hline
1 & 0.56(4) & 3.6(4) & 6.0(5) & 210(40) & 13(3) \\ \hline
1 & 0.86(2) & 1.8(2) & 6(1) & 56(1) & 3.4(6) \\ \hline
2 & 0.70(5) & 4.1(8) & 3.5(1.0) & 392(7) & 13(4) \\ \hline
2 & 0.92(1) & 2.1(1) & 5(1) & 160(10) & 8(2) \\ \hline
3 & 0.82(2) & 3.8(2) & 7(1) & 557(8) & 39(6) \\ \hline
3 & 0.91(2) & 2.8(1) & 5.5(5) & 410(20) & 23(3) \\ \hline
\end{tabular}
\end{table}

Table 1: Properties of long-lived, spherically symmetric oscillon solutions: node number (\(n\)), frequency (\(\omega\)), field amplitude at the center (\(\Phi_{0}\)), lifetime (\(\tau_{\rm sp}\)), mass (\(M\)) and narrowness (\(M\tau_{\rm sp}\)). We list only oscillon states which retain well defined properties (e.g., the number of nodes) for a lifetime comparable to the lowest-mass oscillon. The variance is indicated in parenthesis notation, e.g. \(\omega=0.58(5)\Rightarrow 0.53\leq\omega\leq 0.63\). The lifetime is written as \(\tau_{sp}\), emphasizing the fact that these are lifetimes obtained in spherical symmetry. It seems that the lifetime is significantly shortened if aspherical perturbations are included.
* The oscillon radiates and thus loses energy through unbounded higher harmonics.
* As the oscillon loses energy it is forced to adiabatically change its frequency and find solutions that live "close-by" in the attractor sense, but have smaller energy.

In Fig. 8 we compare the energy of the single-frequency ansatz with the energy obtained from the quasi-breather formalism, which contains contributions from higher harmonics. We see that the difference is small.

Figure 8: Comparison of the energy (or rest mass) obtained analytically with the full harmonic content in the quasi-breather formalism (color) and with the single-frequency ansatz (black dotted) for the oscillon-like configurations of the sine-Gordon model, for different numbers of nodes in the spatial profile: 0 (red), 1 (blue), 2 (orange) and 3 (green). The difference is negligible.

In our numerical experiment we observe that an initial condition of the form of the single-frequency ansatz (3) generically latches onto an oscillon configuration with the corresponding fundamental frequency set by Eq. (5), but with non-negligible contributions from higher harmonics. Notice the nomenclature here: an oscillon configuration is not equivalent to our definition of a state; either the configuration decays quickly or it changes its features too much throughout its lifetime. These configurations do, however, define a continuously connected space of oscillon solutions, whose properties we can measure (soon after initializing)5. These solutions have the full harmonic content sourced by the single-frequency ansatz and can thus be compared to the analytically constructed quasi-breathers of Sec. II. Of particular interest is the energy that the oscillon solutions radiate, which we can obtain through the expression

\[|\dot{E}_{osc}|=4\pi R^{2}\langle T_{0r}\rangle|_{r=R}=4\pi R^{2}\left\langle\dot{\phi}\partial_{r}\phi\right\rangle|_{r=R}\,, \tag{14}\]

where brackets indicate time averages and the measurement should be taken at a radius \(R\) that is away from the oscillon bulk. We obtain numerical data by initializing with the single-frequency ansatz at different values of \(\omega\), letting the field relax to an oscillon solution for a fixed amount of time, and then measuring \(T_{0r}\) away from the oscillon bulk for three oscillation periods. \(|\dot{E}_{osc}|\) for each \(\omega\) is obtained by averaging the measured values of \(T_{0r}\). This procedure defines another way to estimate the oscillon properties at fixed \(\omega\), independent from the quasi-breather formalism and rooted entirely in numerics.

In Fig. 9 we compare the energy radiated by the quasi-breathers constructed in Section II and by the oscillon solutions in our experiment. We see that the oscillon solutions agree qualitatively well with the quasi-breather formalism, although the latter often underpredicts the amount of energy that the oscillon solution radiates. This is on one hand due to the fact that we are forced to include a finite number of harmonics in the quasi-breather calculation (a limitation that the numerical measurement doesn't have), and on the other hand because of the uncertainty in our prescription for numerically measuring the properties of the oscillon solutions at fixed \(\omega\). As seen before in Section II, the radiated energy is highly suppressed for specific values of the frequency \(\omega\).
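A minimal sketch of ours of this flux measurement, Eq. (14), on simulation output (the array layout and finite-difference choices are illustrative):

```python
import numpy as np

def radiated_power(phi_series, r, dt, R):
    """Estimate |E_dot| = 4 pi R^2 <phi_t phi_r> at radius R, Eq. (14).

    phi_series: snapshots phi(t_k, r_i) of shape (n_times, n_r), sampled
    every dt over a few oscillation periods, away from the oscillon bulk."""
    i = int(np.argmin(np.abs(r - R)))
    dr = r[1] - r[0]
    # central differences in time and radius at the measurement radius
    phi_t = (phi_series[2:, i] - phi_series[:-2, i]) / (2.0 * dt)
    phi_r = (phi_series[1:-1, i + 1] - phi_series[1:-1, i - 1]) / (2.0 * dr)
    return 4.0 * np.pi * R**2 * abs(np.mean(phi_t * phi_r))
```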
Assuming that the configurations are able to adiabatically follow the "instantaneous" space of oscillon solutions, we can predict where proper oscillon states should exist: these are those configurations in the space of solutions that are able to retain their global properties over a substantial amount of time. For example, there are clear plateaus when investigating the evolution of the system through \(\omega(t)\) using Eq. (12) (see Fig. 6), during which the configuration is able to retain its global properties. While the oscillon configuration lives on the plateau it has a well defined mass and binding energy. Note that these plateaus do not necessarily exist during the evolution of the system, as this reasoning is based on the assumption that the oscillon configurations are adiabatically connected. However, it gives a hint as to where we might find oscillon states: oscillon configurations that retain their global properties over a long period of time. In Section III.1.2 we make these notions more concrete by providing a definition of an oscillon state.

#### III.1.2 Definition of Oscillon States and their Lifetime

We define an oscillon state as an oscillon configuration that during its lifetime satisfies the following criteria:

1. The fundamental frequency (or binding energy per particle) of the solution doesn't change by more than 10%.
2. The state has a well defined mass, meaning that it doesn't vary by more than 20%.
3. If the spatial profile contains a node, it must retain it.
4. The overall lifetime has to be at least 50% of that of the ground state (lowest energy oscillon state).

With these definitions we look for oscillon states "experimentally", meaning through a series of simulations. If the dynamical evolution of the system were fully adiabatic we could find all the states immediately by numerically solving Eq. (12), essentially combining the data in Figs. 8 and 9. This process gives clear predictions as to where the oscillon should be able to obey our definition of a state. While our simulations confirm that this happens for the nodeless oscillon configurations, it does not occur for oscillons that contain a node in their spatial profile. These solutions eventually shed their node and leave the adiabatic curve. We thus consider the adiabatic predictions as the most probable locations in parameter space where well-defined oscillon states exist, and look for them through a series of simulations, initialized with different profiles derived from Eq. (5).

Figure 9: Radiated power obtained analytically using the quasi-breather formalism (solid) and numerically using the prescription described in the main text (dashed) for the oscillon configurations of the sine-Gordon model, for different numbers of nodes in the spatial profile: zero-node (top left), one-node (top right), two-node (bottom left) and three-node (bottom right) solutions. The two methods provide independent estimates at fixed \(\omega\). Although they agree qualitatively well, there are quantitative differences. The actual time evolution of an oscillon follows the overall estimate quite closely as it evolves adiabatically, increasing \(\omega\).

### Results

Here we present the main results of this paper. Using the definition of a state given in Section III.1.2 we are able to extract a discrete spectrum of oscillon states with different masses. By increasing the number of nodes in the spatial profile a seemingly boundless spectrum of states emerges.
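Concretely, the four criteria of Section III.1.2 can be checked on measured time series; a minimal sketch of ours, with the thresholds quoted in the definition:

```python
import numpy as np

def is_state(omega_t, mass_t, nodes_t, lifetime, tau_ground):
    """Check criteria 1-4 on time series measured over a candidate's lifetime."""
    c1 = (omega_t.max() - omega_t.min()) / omega_t.mean() <= 0.10  # frequency drift
    c2 = (mass_t.max() - mass_t.min()) / mass_t.mean() <= 0.20     # mass drift
    c3 = bool(np.all(nodes_t == nodes_t[0]))                       # node number kept
    c4 = lifetime >= 0.5 * tau_ground                              # long enough
    return c1 and c2 and c3 and c4
```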
We explicitly checked that states can be found with up to 3 spatial nodes, and have no a priori reason to think that this process should stop (although stability issues might require the initial conditions to be highly fine-tuned). As mentioned previously, there is a clear difference in the evolution of the solutions with and without nodes. Starting from the single-frequency ansatz, the nodeless solution is able to adiabatically probe all the oscillon configurations6, up until its "energetic death"7. Solutions with nodes typically excite instabilities that drive them away from this type of adiabatic evolution. We will comment on the nature of these instabilities later. The distinction in behavior is made clear in Fig. 10, where we plot (colored scatter) the radiation that the oscillon emits during its entire lifetime, until either no localized energy remains or the field loses its node. Different colors represent different simulations initialized with the single-frequency ansatz. It is clear that the nodeless oscillon is able to evolve adiabatically from \(\omega\sim 0.5\) until \(\omega\sim 1\), where it decays. An oscillon with nodes (in Fig. 10 we only show the case with one node, but similar conclusions hold for solutions with more nodes) generally loses its node(s) before it can transition to \(\omega\sim 1\). We need more than one initialization to map the entire space of oscillon solutions. Since there is no way of knowing in what region of parameter space oscillons with nodes evolve adiabatically, we are forced to look for states through a process of trial and error. However, the information gathered so far allows us to do so in a systematic way:

Footnote 6: This is strictly speaking only true when starting from the single-frequency ansatz. More general initial conditions can excite instabilities and leave the adiabatic curve.

Footnote 7: This term was first coined in [51] to indicate the moment where the oscillon has to decay, as adiabatic evolution requires an increase in energy of the solution, which is not available due to energy conservation.

1. Using the predictions for the adiabatic evolution of the system we identify potential oscillon states.
2. Once identified, we check through numerical simulation whether the oscillon is able to survive long enough to qualify as a state (starting from the single-frequency ansatz).
3. We explicitly check if the evolution of the oscillon follows our adiabatic prediction.

If the oscillon survives these three steps of scrutiny we identify it as a proper state and take measurements of its properties by averaging over its lifetime. The process is less complicated for the nodeless oscillons as they evolve adiabatically. This predicts the existence of two states, which were previously found in Ref. [48].

In Fig. 11 it becomes evident why we classify certain oscillons as states. The different clusterings in parameter space indicate that there are certain preferred oscillon configurations, which we identify as states. Each clustering corresponds to a different initialization of the field with the single-frequency ansatz. We gave clusterings belonging to profiles with the same number of nodes the same color. The data points in Table 1 are computed by averaging over the different clusters. Note that this is true in particular for the fundamental frequency \(\omega\), which is computed by measuring the zero-crossings of the field at the origin.
Since \(\omega\) evolves somewhat during the evolution of the state, the values given in Table 1 do not necessarily correspond to those of the single-frequency initialization. In fact, to observe the state it is better to initialize with an \(\omega\) that is somewhat below the value given in Table 1. Finally, masses are defined as the energy that remains localized within a radius that initially contains 90% of the total energy. We will now discuss some characteristics of the different states in more detail.

Figure 10: The radiation emitted by the oscillon versus its frequency \(\omega\), extracted from the full numerical evolutions starting with different initial conditions, represented by different colors. The black curves are the analytic (solid) and numerical (dashed) estimates shown in Fig. 9. _Left:_ The nodeless case. A single initialization traces out the whole range of \(\omega\). _Right:_ The case with one node. In this case, the node is lost past some \(\omega\) and multiple initial conditions (different colours) are needed to cover the entire range.

#### III.2.1 Fundamental Oscillon States

There exist two nodeless oscillon states in the sine-Gordon model in three spatial dimensions. We explicitly confirmed their existence and characteristics. Their space- and time-profiles are shown in Fig. 1. The heavier state has a fundamental frequency of \(\omega\sim 0.58\) and has about four times the mass of the lighter state with \(\omega\sim 0.92\). The lighter state turns out to be the lightest state we find in our oscillon spectroscopy of the sine-Gordon model (also including oscillons with nodes). We therefore refer to it as the ground state. Interestingly, the lifetime of the lower frequency (heavier) state is actually larger than that of the ground state.

What seems typical for the nodeless oscillons is that once the field settles into an oscillon configuration, it can evolve adiabatically very efficiently, increasing its frequency \(\omega\). In this way, the scalar field can transition through the two nodeless states if initialized with the right initial conditions, effectively having a coherent lifetime that is the sum of that of the two individual states. In particular, this is what happens when initializing with the single-frequency ansatz (see Fig. 10) near \(\omega\sim 0.58\), but does not necessarily happen when starting with more generic initial conditions. We will comment on these types of transitions in Section IV. Eventually, when the oscillon reaches a frequency of \(\omega\simeq 0.93\), it decays. At this point the oscillon needs to increase its energy in order to continue to increase \(\omega\). As this is not possible, the state quickly decays.

Figure 11: The clustering of oscillon-like configurations in parameter space. On the left panel we plot the maximal central field amplitude (measured over one period), while on the right panel we show the mass of the states, defined as the energy that remains localized within a radius that initially contains 90% of the total energy. It is clear that the field clusters around points which we define as states in this paper. Different clusterings correspond to different initializations, while clusterings with the same color correspond to states with the same number of nodes: 0 (red), 1 (blue), 2 (orange) and 3 (green). The black dashed lines correspond to the predictions of the oscillon configurations obtained from numerics at fixed \(\omega\).
was coined "energetic death" in [51] and seems to be generic for oscillons in potentials that can be written as a power series. As mentioned previously, the decay of the states with nodes is of a different nature. The two nodeless states act as very strong attractors of the sine-Gordon model. We observed that, starting from a generic Gaussian field profile, the field quickly settles into one of the nodeless states (which of the two is somewhat dependent on initial conditions). In this case the adiabatic transition through the two states is somewhat less efficient, since the oscillon configurations are perturbed to a larger degree by the initial conditions. In other words, adiabatic evolution can stop even for nodeless states before "energetic death". #### iv.1.2 Excited Oscillon States Here we present the oscillon states we found for oscillons that contain nodes in their spatial profile. Interestingly, we are able to identify two states for each number of nodes, replicating "accidentally" the pattern of the nodeless states. In Figs. 12, 13 and 14 we present their space- and time- profiles. Interestingly, as the field oscillates around 0 the nodes stay approximately in the same location. Furthermore, even though we start from a single-frequency ansatz, the states get large contributions from higher harmonics. In this sense these states are also attractors of the model, although we didn't observe them starting from a Gaussian profile, and instead needed to initialize with profiles that contained the correct number of nodes. The less attractive nature of these oscillons ultimately ties in with the fact that states with nodes contain instabilities that force them to decay before energetic death. Initializing with the single-frequency ansatz we see that the oscillon temporarily is able to evolve following the prediction of Eq. (12). Naively, we would expect the oscillon to then transition through the two states and eventually decay due to energetic death, similar to the nodeless case. However, this type of transition was never observed, and a different type of instability, which forces the oscillon to lose its node(s) takes over. We suspect the origin to be the following: as the scalar waves from the bulk of the oscillon move through the nodes towards spatial infinity, the location of the node has to oscillate. This eventually forces the oscillon to shed its node as it strays too far from adiabatic evolution. We observe that the shedding of the node(s) is a violent event in which a lot of scalar radiation is emitted. Depending on initial conditions, the field sometimes decays into one of the nodeless states. One might expect that, starting with initial conditions that are "closer" to the true oscillon solution (so a solution that contains content from higher harmonics instead of the single-frequency ansatz), one could have a delayed decay of the states as the initial perturbation of the system is smaller. To test this hypothesis we constructed oscillon configurations containing contributions from higher harmonics using the quasi-breather formalism, and used this as initial conditions instead of the single-frequency ansatz. Doing this, we observed slightly altered lifetimes, which we have taken into account by adding uncertainties to our inferred lifetimes. Table 1 contains the most important characteristics of the long-lived oscillon states with up to three nodes. 
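All of the lifetimes and masses quoted here and in Table 1 come from evolutions restricted to spherical symmetry. As a rough illustration of the kind of solver involved, the sketch below advances \(\ddot{\phi}=\phi''+\frac{2}{r}\phi'-\sin\phi\) (in units \(m=f=1\)) with a simple symplectic kick-drift update on a staggered radial grid; the outer-edge treatment is a crude stand-in for the absorbing boundary a production code would use, and all names are our own assumptions rather than the actual code used in this work.

```python
import numpy as np

def evolve_radial_sine_gordon(phi, pi, dr, dt, n_steps):
    """Evolve phi_tt = phi_rr + (2/r) phi_r - sin(phi) on the staggered
    radial grid r_i = (i + 1/2) dr, which avoids the coordinate singularity."""
    r = (np.arange(phi.size) + 0.5) * dr
    rp, rm = r + 0.5 * dr, r - 0.5 * dr            # cell faces

    def laplacian(f):
        # conservative form: (1/r^2) d/dr ( r^2 df/dr )
        flux = np.zeros(f.size + 1)
        flux[1:-1] = (f[1:] - f[:-1]) / dr         # interior faces
        flux[0] = 0.0                              # regularity at the origin
        flux[-1] = -f[-1] / r[-1]                  # crude outgoing-wave edge
        return (rp**2 * flux[1:] - rm**2 * flux[:-1]) / (r**2 * dr)

    for _ in range(n_steps):                       # symplectic kick-drift update
        pi += dt * (laplacian(phi) - np.sin(phi))
        phi += dt * pi
    return phi, pi

# e.g. start at maximum amplitude with the single-frequency ansatz:
# phi = profile_on_grid(r); pi = np.zeros_like(phi)   # profile_on_grid: hypothetical
```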
We see that, while the energy increases monotonically with the number of nodes, the same is not true for the lifetime. The lifetime is of the same order for most long-lived states that we found, ranging from \(300m^{-1}\) to \(900m^{-1}\). We see that the two-node oscillons are less long-lived than both single-node and three-node ones, indicating that it is at least plausible that long-lived states with more nodes can be found. Figure 12: The spatial and temporal profiles for the oscillon states found with a single node in their spatial profiles. In the spatial profile (left) we show the field profiles of the oscillon at maximum amplitude, while in the temporal profile (right) we show the amplitude of the field at the origin as the state evolves. Figure 13: The spatial and temporal profiles for the oscillon states found with two nodes in their spatial profiles. On the other hand, the rest mass increases between the most stable 0-node and the 3-node oscillons, ranging from \(M=3.8\times 10^{2}\) to \(M=5.6\times 10^{4}\) in units of \(f^{2}/m\). The product \(M\,\tau\) is also quite intriguing. It is natural to identify \(1/\tau\) as the decay width and view these states as resonances, as usual in particle physics. The outcome is that they correspond to very narrow resonances. Then \(M\,\tau\) measures the narrowness of the state (the larger the narrower). It is quite large, and similar for all long-lived oscillon states, \(M\tau\simeq\big{(}\mathcal{O}(10^{6})-\mathcal{O}(10^{7})\big{)}(f/m)^{2}\). It even increases mildly with the number of nodes. (As we discuss in Sec. IV, away from the spherical ansatz, the lifetime is reduced by about 1 order of magnitude for oscillons with nodes.) Finally, it is worth noting that the maximum central amplitude of the field profile usually seems to be equal to a multiple of \(2\pi\), which corresponds to the minima of the sine-Gordon potential. ## IV Transitions Transitions between oscillon states can be classified in two ways. First, a state can transition when it excites instabilities, perturbing the oscillon solution and forcing it to decay to one or more decay products. Since these instabilities can be aspherical, the problem needs to be studied in three dimensions. The second type of transition has already been alluded to in the text: an adiabatic transition between oscillon states. We reemphasize this point before moving on to the three-dimensional question. Figure 14: The spatial and temporal profiles for the oscillon states found with three nodes in their spatial profiles. #### Adiabatic Transitions As noted before, the nodeless oscillon solutions typically evolve adiabatically until they decay due to energetic death. The nodeless oscillon state with \(\omega\sim 0.58\) is therefore able to transition to the groundstate at \(\omega\sim 0.92\) as energy radiates away from the oscillon bulk. The transition can be seen clearly in Fig. 15, where we plot the mass of the oscillon solution over time (defined as the energy within the radius that initially contains 90% of the total energy in the field) for a field configuration initialized with the single-frequency ansatz at \(\omega=0.55\). The field transitions through the two states where its mass is approximately constant (white regions), separated by a fast adiabatic transition (middle gray shaded region). Finally, it decays through energetic death (right gray shaded region). A transition as in Fig.
15 always happens when initializing with the single-frequency ansatz, but for completeness we report that, starting from more generic initial conditions (e.g. a Gaussian), we observed a slightly different evolution with an earlier onset of decay. The transition through the second state was less complete in these cases due to the larger initial perturbation to the true oscillon solution. Figure 15: The adiabatic transition between the two nodeless oscillon states of the sine-Gordon model. Starting from the single-frequency ansatz, the field first settles into the low \(\omega\) state (left white region) before transitioning (middle gray region) to the groundstate. Eventually, the groundstate decays quickly due to energetic death. #### Transitions in 3D In the previous sections we computed the lifetime of excited sine-Gordon oscillons by restricting the evolution to spherical symmetry. However, localized configurations in three spatial dimensions can decay aspherically, in some cases leading to the formation of groups of localized objects, see e.g. Ref. [52]. Even starting from spherical initial conditions, it is still possible that quantum mechanical perturbations get parametrically amplified due to coupling with the spherical background. There is no good reason to expect that this amplification is necessarily smaller for aspherical modes than for spherical ones, and the lifetimes given in Table 1 could be altered significantly in 3+1 dimensions. To investigate this we performed three-dimensional lattice simulations of the states we found in spherical symmetry. We again took as initial conditions the spherical single-frequency ansatz of the various states found in spherical symmetry, which were then perturbed with small fluctuations. For the nodeless states, we see no significant difference in lifetime. However, it seems that oscillon states with nodes are susceptible to aspherical transitions to one or more of the nodeless states. These transitions persist even when no fluctuations are added to the initial conditions. This indicates that numerical perturbations alone can be amplified and trigger the transitions. In Figs. 16 and 17 we show some snapshots of these simulations. One can note that the perturbation is numerical, as the breakup follows the symmetry of the underlying cubic lattice. Beyond the limitations of inherent numerical noise, there is reason to believe that the decay itself is a physical effect. We can obtain some heuristic understanding of the fragmentation process shown in Figs. 16 and 17 as follows. Considering the limit where the amplitude at the center is large, \(\Phi_{0}\gg 1\), the equation of motion of a small perturbation living atop the oscillon state in Fourier space is \[\delta\ddot{\phi}_{k}+\left(k^{2}+\cos(\phi_{osc}(t,r))\right)\delta\phi_{k}=0 \tag{15}\] with \(\phi_{osc}(t,r)\) being the background oscillon. For \(\Phi_{0}\gg 1\), the time-dependent effective mass has a large harmonic composition. At the center we can approximate \(\cos(\phi_{osc}(t,0))\simeq\cos(\Phi_{0}\cos(\omega t))\), which contains frequencies up to \(\sim\Phi_{0}\omega\). It seems feasible that high-\(k\) modes can be resonantly excited. Let us assume that in the large \(\Phi_{0}\) limit the oscillon width \(R_{osc}\) is fixed, as Figs. 12, 13, 14 suggest.
Then, a separation of scales appears, \(\Phi_{0}\omega\gg 1/R_{osc}\), and one can check whether modes with \(k\gg 1/R_{osc}\) are resonant by switching to the homogeneous problem \[\delta\ddot{\phi}_{k}+\left(k^{2}+\cos(\Phi_{0}\cos(\omega t))\right)\delta \phi_{k}=0 \tag{16}\] whose instability bands can be found using Floquet theory. Indeed, Eq. (16) leads to unstable bands at large values of \(k\), up to a maximal value that scales approximately as \(k_{max}\propto\omega\Phi_{0}\). Figure 16: Snapshots of the lattice simulations starting with the oscillon state with one spatial node and frequency \(\omega\sim 0.56\). Due to the fact that numerical perturbations get amplified, the state breaks up into 9 nodeless oscillons (one with \(\omega\sim 0.58\) at the center and eight groundstates with \(\omega\sim 0.92\), all oscillating out-of-phase with the central oscillon, moving outwards along the diagonals of the simulation box). The timescale of decay is about one order of magnitude shorter than those predicted by the spherical simulations. Contours are drawn around volumes that have energy density 20 times the average density in the box (\(\rho=20\bar{\rho}\)). The conclusion of this is that we can expect instabilities for modes in the range \(R_{osc}^{-1}\lesssim k\lesssim\Phi_{0}m\). On the other hand, the discrete lattice inevitably introduces "noise" with the lattice symmetry that sources these unstable modes with the same symmetry. This gives a qualitative picture of the observed fragmentation effect. States with the amplitude \(\Phi_{0}\sim 8\pi\) can be expected to have a number of unstable modes. Indeed, the states shown in Fig. 16 and Fig. 17 both have an initial amplitude \(\sim 8\pi\) and develop several lobes which then evolve into oscillons (consistently with energy conservation, of course). Figure 17: Snapshots of the lattice simulations starting with the oscillon state with three spatial nodes and frequency \(\omega\sim 0.84\). Similar to Fig. 16, we see a breakup into nodeless oscillons, but due to the large amount of energy in the initial conditions we end up with many more final oscillons: a spectroscopic signature of oscillon states with several nodes. Contours are drawn around volumes with \(\rho=50\bar{\rho}\). For states with lower amplitude the argument loses validity. Indeed, we find that of the two states with \(\Phi_{0}\sim 4\pi\) (the heavy \(n=0\) state and the light \(n=1\) state), only the \(n=1\) state (the heavier of the two) fragments. Following this argument, and using the hints obtained from the lattice simulations we performed, it seems that the true lifetimes of many of the states found in previous sections are in reality significantly shorter, by about 1 order of magnitude. These considerations also highlight the remarkable properties of the nodeless state living around \(\omega\sim 0.58\). It is the longest lived state that we found, reaches a field amplitude comparable to some of the states with nodes, is highly relativistic as a bound state with \(\omega\approx 0.58m\), and finally, is stable against aspherical decay even though it has about four times the mass of the groundstate. Our lattice simulations reveal new interesting features that emerge in three dimensions. Namely, the different states should decay through a rich spectrum of transitions to the more stable states. This is a different type of "spectroscopic" feature which can be used to characterize the states. This is a topic we plan to address in future work.
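The instability bands of Eq. (16) can be mapped numerically by integrating this Hill-type equation over one period and diagonalizing the resulting monodromy matrix. Below is a minimal Floquet scan (our own illustration, not the code used for the figures); a positive return value signals an exponentially growing mode at that \(k\).

```python
import numpy as np
from scipy.integrate import solve_ivp

def floquet_rate(k, Phi0, omega):
    """Largest Floquet growth rate of
       x'' + ( k^2 + cos(Phi0 * cos(omega t)) ) x = 0
    from the monodromy matrix over one period T = 2*pi/omega."""
    T = 2.0 * np.pi / omega

    def rhs(t, y):
        x, v = y
        return [v, -(k**2 + np.cos(Phi0 * np.cos(omega * t))) * x]

    cols = []
    for y0 in ([1.0, 0.0], [0.0, 1.0]):            # two independent solutions
        sol = solve_ivp(rhs, (0.0, T), y0, rtol=1e-10, atol=1e-12)
        cols.append(sol.y[:, -1])
    M = np.array(cols).T                           # monodromy matrix
    mu = np.max(np.abs(np.linalg.eigvals(M)))      # largest Floquet multiplier
    return np.log(mu) / T                          # > 0 means unstable

# scan, e.g.: rates = [floquet_rate(k, 8 * np.pi, 0.56) for k in np.linspace(0.1, 30, 300)]
```

Scanning \(k\) in this way is how one would check the claimed scaling \(k_{max}\propto\omega\Phi_{0}\).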
Furthermore, although we observe aspherical decay of the oscillon states with nodes in the sine-Gordon equation, following our reasoning in the previous paragraph there is no reason to think this is bound to happen in other models. There should in principle exist systems where states with nodes are just as stable as nodeless states; a question we will tackle in the future. ## V Conclusions We found radically new spherically symmetric oscillon solutions in the three-dimensional sine-Gordon equation. Their defining characteristic is the existence of nodes in their spatial profile, which they keep throughout their lifetime. They have significantly larger energy than their fundamental (nodeless) counterparts; in particular, solutions with just one node exhibit an order of magnitude larger energy. We provided a semi-analytic construction by using the quasi-breather formalism, taking into account the non-perturbative presence of higher harmonics. This construction leads to a two-parameter family of solutions, discrete in the number of nodes and continuous in the frequency. However, the radiating tails of the quasi-breathers lead to the existence of long-lived oscillons only around certain frequencies, with lifetimes reaching \(\mathcal{O}(10^{3})\,m^{-1}\). Despite the vast difference in the mass (rest energy), the lifetime of excited oscillons is similar to their single field counterparts, when computed under spherical symmetry. It is interesting that the lifetime is not monotonic; oscillons with three nodes exhibit longer lifetimes than their two-node or even fundamental counterparts. By considering oscillons as resonances in the particle spectrum of the theory, we can define their narrowness as the product of the mass and lifetime. Surprisingly, this grows mildly with the number of nodes, being \(\mathcal{O}(1)\) for nodeless oscillons and \(\mathcal{O}(10)\) for oscillons with three nodes. When perturbed outside the spherical ansatz, using a full three-dimensional simulation, the excited oscillons exhibit a significantly smaller lifetime of \(\mathcal{O}(30)\,m^{-1}\). Interestingly, the decay does not lead to an incoherent bath of radiation, but instead leads to the formation of a number of fundamental oscillons. The possible existence of "selection rules" for the decay of excited multi-node oscillons is beyond the scope of the present work, but does present an intriguing challenge to which we will return in the future. Our results add more remarkable properties to the heavy nodeless oscillon of the theory. Its low frequency indicates that it can be regarded as a relativistic bound state. Moreover, in our 3D lattice simulations we find it to be stable against fragmentation or anisotropic decay. This happens despite having about 4 times the mass of the lowest oscillon, and despite the field excursion being quite large, \(\pm 4\pi f\), at the center. Finally, we must note that the sine-Gordon model is certainly special. The potential has maxima, with sizeable \(V_{\text{max}}^{\prime\prime}=-V_{\text{min}}^{\prime\prime}\). It is natural to ask how many features (if any) of the current analysis carry over to more general potentials. We will explore the change in the oscillon spectrum under small or large deviations from the sine-Gordon model in a future publication. ## Acknowledgements We thank Mark Hertzberg, Alex Pomarol and Sergey Sibiryakov for useful discussions.
The research leading to these results has received funding from the Spanish Ministry of Science and Innovation (PID2020-115845GB-I00/AEI/10.13039/501100011033). IFAE is partially funded by the CERCA program of the Generalitat de Catalunya. EIS acknowledges support of a fellowship from "la Caixa" Foundation (ID 100010434) and from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 847648. The fellowship code is LCF/BQ/PI20/11760021. The research leading to these results has received funding from the ESF under the program Ayudas predoctorales of the Ministerio de Ciencia e Innovacion PRE2020-094420. ## Appendix A Multi-frequency oscillon solution While the single-frequency ansatz is sufficient to describe the low-energy oscillon state(s), we need to go beyond it in order to better capture the structure of excited oscillons, meaning oscillons with nodes that have higher energy. We follow the method put forth in Ref. [51], which is tailored to the study of potentials that can be written in the form \(V(\phi)\sim\sum_{n}\mathcal{V}_{n}\left[1-\cos(n\phi)\right]\) (note that we have dropped the axion decay constant \(f\) for simplicity). The sine-Gordon model represents the simplest case of this family of potentials. We do not attempt to repeat the method of Ref. [51] in its full generality, but rather to present the basic steps, in order to make the current work self-contained. The first step in constructing a viable oscillon solution is extending the single-frequency ansatz to contain multiple frequencies. Due to the symmetry of the potential, the oscillon only contains odd multiples of the fundamental frequency (odd harmonics): \[\phi_{sin}(r,t)=\sum_{n}\Phi_{n}(r,\omega)\sin(n\omega t)\,. \tag{A1}\] However, we know that the oscillon is not a completely stable configuration and as such it contains small radiating tails, which explain both its longevity and its slow evolution and ultimate decay. In order to capture the radiating tails, we need to add a cosine series to Eq. (A1), in particular \[\phi_{cos}(r,t)=\sum_{n\omega>m}c_{n}(r,\omega)\cos(n\omega t)\,, \tag{A2}\] where the cosine series only contains radiative modes with \(n\omega>m\). Finally, our localized, slowly radiating solution can be written as \[\phi_{\text{full}}\simeq\phi_{sin}(r,t)+\phi_{cos}(r,t)\,. \tag{A3}\] We see (e.g. in Fig. 4) that, even for radiative modes with \(n\omega>m\), the behavior close to the core of the oscillon and in the radiative tail can be very different. We insert the expansion of Eq. (A3) into the equation of motion (2) and use the Jacobi-Anger expansion \[e^{i\alpha\,\sin b}=\sum_{k=-\infty}^{\infty}J_{k}(\alpha)e^{ikb} \tag{A4}\] where \(J_{k}\) is the Bessel function of the first kind. If we assume that only the first and third harmonics are important, \(\phi_{\sin}\simeq\Phi_{1}\sin(\omega t)+\Phi_{3}\sin(3\omega t)\), the equation of motion is written as \[\left[\frac{d^{2}\Phi_{1}}{dr^{2}}+\frac{2}{r}\frac{d\Phi_{1}}{dr}+\omega^{2}\Phi_{1}\right]\sin(\omega t)+\left[\frac{d^{2}\Phi_{3}}{dr^{2}}+\frac{2}{r}\frac{d\Phi_{3}}{dr}+9\omega^{2}\Phi_{3}\right]\sin(3\omega t)-\sin\left[\Phi_{1}\sin(\omega t)+\Phi_{3}\sin(3\omega t)\right]=0\,. \tag{A5}\] The last term can be expanded using Eq.
(A4), yielding \[\begin{split}\sin(\phi_{sin})&\simeq 2\left[J_{0}( \Phi_{3})J_{1}(\Phi_{1})+J_{1}(\Phi_{3})J_{2}(\Phi_{1})+...\right]\sin(\omega t)\\ &+2\left[J_{0}(\Phi_{1})J_{1}(\Phi_{3})+(J_{0}(\Phi_{3})-J_{2}( \Phi_{3}))J_{3}(\Phi_{1})+...\right]\sin(3\omega t)\,,\end{split} \tag{A6}\] where we kept only a few terms in the Jacobi-Anger expansion for each of the two harmonics. We now see that the equation of motion naturally divides into separate parts, each oscillating with \(\sin(\omega t)\) and \(\sin(3\omega t)\): \[\frac{d^{2}\Phi_{1}}{dr^{2}}+\frac{2}{r}\frac{d\Phi_{1}}{dr}+\omega^{2}\Phi _{1}-2\left[J_{0}(\Phi_{3})J_{1}(\Phi_{1})+J_{1}(\Phi_{3})J_{2}( \Phi_{1})+...\right]=0\,, \tag{A7}\] \[\frac{d^{2}\Phi_{3}}{dr^{2}}+\frac{2}{r}\frac{d\Phi_{3}}{dr}+9 \omega^{2}\Phi_{3}-2\left[J_{0}(\Phi_{1})J_{1}(\Phi_{3})+(J_{0}(\Phi_{3})-J_{2 }(\Phi_{3}))J_{3}(\Phi_{1})+...\right]=0\,. \tag{A8}\] For practical purposes, we kept a finite number of Jacobi-Anger terms, making sure that the truncation is sufficient to provide the desired accuracy. It is evident that the above equations do not include the cosine terms and thus cannot provide the proper radiative boundary conditions (outgoing waves) at spatial infinity. We assume that the cosine series of Eq. (A2) can be treated as a small quantity which does not back-react on the sinusoidal terms. Thus we insert it into the equation of motion, use the Jacobi-Anger expansion, and linearize the resulting potential term, leading to \[\frac{d^{2}c_{3}}{dr^{2}}+\frac{2}{r}\frac{dc_{3}}{dr}+9\omega^{2}c_{3}-\left[ J_{0}(\Phi_{1})J_{0}(\Phi_{3})-J_{1}(\Phi_{3})J_{3}(\Phi_{1})+...\right]c_{3}=0\,. \tag{A9}\] Since we require a solution with a confined first harmonic, we set \(\Phi_{1}(r\rightarrow\infty)=0\), and we require the other two functions to describe outgoing spherical waves at infinity: \(\Phi_{3}(r)-\sqrt{9\omega^{2}-1}\,rc_{3}(r)+r\Phi_{3}^{\prime}(r)=0\) and \(c_{3}(r)+\sqrt{9\omega^{2}-1}\,r\Phi_{3}(r)+rc_{3}^{\prime}(r)=0\). It is evident how this method can be generalized to include higher harmonics.
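For illustration, the truncated system of Eqs. (A7)–(A8) can be integrated outward from the origin with standard tools, with the central amplitudes acting as shooting parameters to be tuned against the boundary conditions above. The following sketch (written under these assumptions; it is not the solver used in this work) evaluates the Bessel-function sources with scipy:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import jv

def profile_rhs(r, y, omega):
    """Radial system for the two-harmonic quasi-breather,
    y = (Phi1, Phi1', Phi3, Phi3'), with the sources of Eqs. (A7)-(A8)
    truncated at the terms written in the text."""
    P1, dP1, P3, dP3 = y
    src1 = 2.0 * (jv(0, P3) * jv(1, P1) + jv(1, P3) * jv(2, P1))
    src3 = 2.0 * (jv(0, P1) * jv(1, P3) + (jv(0, P3) - jv(2, P3)) * jv(3, P1))
    r = max(r, 1e-8)                               # regularize the origin
    ddP1 = -2.0 / r * dP1 - omega**2 * P1 + src1
    ddP3 = -2.0 / r * dP3 - 9.0 * omega**2 * P3 + src3
    return [dP1, ddP1, dP3, ddP3]

# shoot from the origin; Phi1_0, Phi3_0 are the tuning parameters (hypothetical names):
# sol = solve_ivp(profile_rhs, (1e-6, 40.0), [Phi1_0, 0.0, Phi3_0, 0.0],
#                 args=(omega,), rtol=1e-9)
```

In practice one iterates over \(\Phi_{1}(0)\) and \(\Phi_{3}(0)\) (and solves the analogous linear equation for \(c_{3}\)) until \(\Phi_{1}\) decays and the outgoing-wave conditions are satisfied.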
2307.14392
Human-centric Scene Understanding for 3D Large-scale Scenarios
Human-centric scene understanding is significant for real-world applications, but it is extremely challenging due to the existence of diverse human poses and actions, complex human-environment interactions, severe occlusions in crowds, etc. In this paper, we present a large-scale multi-modal dataset for human-centric scene understanding, dubbed HuCenLife, which is collected in diverse daily-life scenarios with rich and fine-grained annotations. Our HuCenLife can benefit many 3D perception tasks, such as segmentation, detection, action recognition, etc., and we also provide benchmarks for these tasks to facilitate related research. In addition, we design novel modules for LiDAR-based segmentation and action recognition, which are more applicable for large-scale human-centric scenarios and achieve state-of-the-art performance.
Yiteng Xu, Peishan Cong, Yichen Yao, Runnan Chen, Yuenan Hou, Xinge Zhu, Xuming He, Jingyi Yu, Yuexin Ma
2023-07-26T08:40:46Z
http://arxiv.org/abs/2307.14392v1
# Human-centric Scene Understanding for 3D Large-scale Scenarios ###### Abstract Human-centric scene understanding is significant for real-world applications, but it is extremely challenging due to the existence of diverse human poses and actions, complex human-environment interactions, severe occlusions in crowds, etc. In this paper, we present a large-scale multi-modal dataset for human-centric scene understanding, dubbed HuCenLife, which is collected in diverse daily-life scenarios with rich and fine-grained annotations. Our HuCenLife can benefit many 3D perception tasks, such as segmentation, detection, action recognition, etc., and we also provide benchmarks for these tasks to facilitate related research. In addition, we design novel modules for LiDAR-based segmentation and action recognition, which are more applicable for large-scale human-centric scenarios and achieve state-of-the-art performance. The dataset and code can be found at [https://github.com/4DVLab/HuCenLife.git](https://github.com/4DVLab/HuCenLife.git). ## 1 Introduction Human-centric scene understanding in 3D large-scale scenarios is attracting increasing attention [13, 11, 42, 31], as it plays an indispensable role in human-centric applications, including assistive robotics, autonomous driving, surveillance, human-robot cooperation, _etc._ It is often confronted with substantial difficulties, since human-centric scenarios usually feature various subjects with different poses, fine-grained human-object interactions, and challenging localization and recognition under occlusions. Moreover, current state-of-the-art perception methods heavily rely on large-scale datasets to achieve good performance. Therefore, to promote the research of human-centric scene understanding, the collection of large-scale datasets with rich and fine-grained annotations is urgently required, which is difficult but of great significance. In previous work, many studies target scene understanding based on image or video input [2, 37, 17, 59], which is not applicable to real-world applications due to the limited 2D visual representations. Afterward, some works pay attention to static indoor-scene understanding [12, 1, 5] based on pre-scanned RGB-D data, which is not suitable for research on real-time perception. Recently, more and more outdoor multi-modal datasets [6, 49] equipped with LiDAR point clouds have been released. They provide detailed annotations of complex outdoor scenes, but they often focus on the vehicle-dominated traffic environment and neglect the more challenging human-centric daily-life scenarios. Although the STCrowd dataset [11] appeared recently, it focuses on the detection task in dense pedestrian scenes, lacking varied human activities and diversified annotations. Consequently, datasets with rich and fine-grained annotations for human-centric understanding in long-range 3D space are crucial yet scarce. In this paper, to facilitate the research of human-centric 3D scene understanding, we collect a large-scale multi-modal dataset, namely HuCenLife, using a calibrated and synchronized camera and LiDAR. Specifically, the dataset captures \(32\) daily-life scenes involving multiple persons, with rich human activities and human-object interactions. Both indoor and outdoor scenarios are included.
For the annotation, we provide fine-grained labels including instance segmentation, 3D bounding box, action categories, and continuous instance IDs, which can benefit various 3D perception tasks, such as point cloud segmentation, detection, action recognition, Human-Object Interaction (HOI) detection, tracking, motion prediction, _etc_. In this paper, we provide benchmarks for the former three tasks by executing current state-of-the-art methods on HuCenLife, and we discuss other downstream tasks. In particular, considering the specific characteristics of human-centric scenarios, we propose effective modules to improve the performance of point cloud-based segmentation and action recognition in complex human-centric environments. First, we model human-human interactions and human-object interactions and leverage their mutual relationships to benefit the classification of points and instances. Second, to handle the large scale span of objects in daily-life scenarios, we exploit a multi-resolution feature extraction strategy to aggregate global features and local features hierarchically, so that small objects can be better attended to. We evaluate our methods and conduct extensive experiments on HuCenLife. Several ablation studies are also conducted to demonstrate the effectiveness of each module and good generalization capability. Our contributions are summarized as follows: 1. We introduce HuCenLife, the first large-scale multi-modal dataset for human-centric 3D scene understanding with rich human-environment interactions and fine-grained annotations. 2. HuCenLife can benefit various human-centric 3D perception tasks, including segmentation, detection, action recognition, HOI, tracking, motion prediction, etc. We provide baselines for three main tasks to facilitate future research. 3. Several novel modules are designed by incorporating fine-grained interactions and capturing features at various resolutions to promote more accurate perception in human-centric scenes. ## 2 Related Work ### Datasets for 3D Scene Understanding RGB-D datasets of indoor scenes dominated early scene understanding research. ScanNet [12, 1] focuses on object surface reconstruction and semantic segmentation, providing dense and rich annotations for various indoor objects. NTU RGB+D [44] is a human action recognition dataset with corresponding skeleton and action labels. Behave [5] concentrates on human-object interaction with human SMPL models and interactive object annotations. It can be seen that outdoor scenarios are not well explored. Recently, the community has paid attention to traffic scenes for autonomous driving and collected several outdoor multi-modal datasets. KITTI [21], nuScenes [6] and Waymo [50] provide 3D bounding boxes for traffic participants, and [6, 3] also offer point-wise semantic segmentation labels. However, these datasets are all vehicle-dominated and neglect human-centric scenarios. STCrowd [11] mainly concentrates on crowds on campus but lacks fine-grained segmentation labels and complex human-environment interactions. In order to facilitate the research of human-centric 3D scene understanding, we collect HuCenLife, a multi-modal dataset with various scenarios in human daily life. ### Point Cloud-based Segmentation Most outdoor point cloud segmentation methods mainly focus on point cloud representations. Point-based methods [39, 40, 67, 52] operate directly on unordered point clouds.
Voxel-based methods [10, 22] utilize efficient sparse convolution to reduce the time complexity. PolarNet [71] and Cylinder3D [76] further consider the non-uniform characteristics and point distribution of LiDAR point clouds, and divide the points under the polar coordinate system. [26] adopts cylinder convolution and proposes a dynamic shifting network for instance prediction. These methods mainly focus on autonomous driving scenes, neglecting human-centric scenarios with complex human-object interactions and challenging occlusions. Another line of segmentation, namely point cloud instance segmentation, has also seen great progress; methods can be mainly divided into proposal-based and grouping-based ones. Previous proposal-based methods [64, 18, 61] regard instance segmentation as a top-down pipeline, first generating proposals and then segmenting the objects within them. Grouping-based methods [27, 55, 23, 8, 24, 58] adopt a bottom-up strategy. PointGroup [27] aggregates points from original and offset-shifted point sets. DyCo3D [8] and DKNet [58] encode instances into kernels, propose dynamic convolution kernels, and then merge the candidates. Considering the imprecise bounding box prediction used for refinement in proposal-based methods and the time-consuming aggregation in grouping methods, [43, 48] take each object instance as an instance query and design a query decoder with transformers. However, these methods are applied to structured indoor instances without human involvement and human-environment interactions. Our dataset and proposed method focus more on human-human and human-object interactions in large-scale human-centric scenes. ### LiDAR-based 3D Detection As a mainstream 3D perception task, 3D detection has been extensively explored, and existing methods can be grouped by their point encoding strategies. First, point-based methods [69, 7, 38, 46, 62] extract geometry information from raw points with sampling and grouping. [53, 56, 4, 51, 30, 19] transform point clouds into range images for detection. Second, voxel-based methods [74, 57, 29, 15, 14, 65, 60] convert raw point clouds to regular volumetric or pillar representations and adopt voxel-based feature encoding. Third, hyper-fusion methods [36, 28, 45, 63, 9, 75] take advantage of both voxels and points and fuse them together to model the hyper encoding. In this paper, we test them on the proposed HuCenLife dataset to provide benchmarks and offer comprehensive analyses and comparisons. ### Action Recognition Recently, transformer-based methods have dominated the field of action recognition [32, 20]. Many variants based on ViT [17] have been proposed to explore the potential of transformers in video classification: ViViT [2] extends two-dimensional patches to three-dimensional tubes to model temporal relations, MTV [59] divides the tubes at different time scales to extract action features with different amplitudes of change over time, and TubeViT [37] further samples variously sized 3D space-time tubes from the video to generate learnable tokens. However, common action recognition datasets [47, 44] are annotated at the image level and lack instance-level labels, making these methods hard to apply in complex 3D scenarios.
In this paper, we introduce the point cloud-based instance action recognition task in large-scale scenes and collect the HuCenLife dataset, equipped with various instances with different poses and motions, to lay a foundation for the research community. ## 3 HuCenLife Dataset HuCenLife is the first dataset that emphasizes human-centric 3D scene understanding, containing indoor and outdoor daily-life scenes with rich annotations of human activities, human-human interactions, and human-object interactions, which facilitates the development of intelligent security, assistive robots, human-machine cooperation, _etc_. In this section, we first introduce the data acquisition in Sec. 3.1, then provide important annotation statistics in Sec. 3.2, and finally highlight the novelties of HuCenLife by comparing it with existing influential datasets in Sec. 3.3. ### Data Acquisition To collect the dataset, we built a Visual-LiDAR Capture System, which mainly consists of one 128-beam Ouster-OS1 LiDAR and six industrial cameras arranged in a circle, as Fig. 2 shows. All sensors are fixed in position on the bracket with mechanical synchronization. The LiDAR has a \(360^{\circ}\) horizontal field of view (FOV) \(\times 45^{\circ}\) vertical FOV, and each camera has a \(75^{\circ}\times 51.6^{\circ}\) FOV with \(1920\times 1200\) image resolution. For our equipment, the LiDAR captures raw point clouds at \(10\) Hz and each camera takes pictures at \(32\) Hz. ### Annotation We manually annotated all humans, as well as the objects they interact with, in the LiDAR point cloud by referring to the synchronized image. We select one frame per second for labeling and finally obtain \(6,185\) frames (\(103\) minutes) of annotated LiDAR point cloud. For each target, we provide four kinds of annotations, _i.e_., point cloud-based instance segmentation, 3D bounding box, human action classification, and tracking ID across consecutive frames, as Fig. 1 shows. In HuCenLife, there are \(65,265\) human instances in total, including \(58,354\) adults and \(6,911\) children, and \(31,303\) human-interacted objects. There are \(20\) categories of objects and \(12\) kinds of human actions. Specifically, the HuCenLife dataset is collected in \(15\) distinct locations with \(32\) human-centric daily-life scenes, including playground, shopping mall, campus, park, gym, meeting room, express station, _etc_. For each scene, there are \(11\) persons on average with multiple interacted objects, and for some complex scenes, there are about \(70\) persons. The diverse density distributions in HuCenLife bring challenges for related research. More detailed annotation introductions are in the supplementary material. Figure 2: Sensor setup for data collection. ### Characteristics We introduce the basic information of HuCenLife and compare it with related popular datasets in Table 1. In particular, we summarize four highlights of our dataset below. **Large-scale Dynamic Scenarios.** Benefiting from the long-range-sensing and light-independent properties of LiDAR, HuCenLife contains data of diverse large-scale scenes day and night. Unlike indoor datasets [12] where the scene is pre-scanned and has only static objects, HuCenLife provides online-captured multi-modal visual data of dynamically changing scenes with dynamic people, objects, and background. Furthermore, the density of humans and objects varies from a few to dozens across scenes. The visual data in such diverse dynamic scenarios has huge significance for developing mobile robots.
**Abundant Human Poses.** Different from current traffic or crowd datasets [50, 6, 11], where people only act as pedestrians walking or standing on the road, HuCenLife pays attention to daily-life scenarios, where people have rich actions, such as doing exercise, crouching down, dancing, running, riding, _etc_. In particular, HuCenLife contains thousands of child samples, which were never considered in previous datasets. Such complex scenarios with high degrees of freedom in human poses bring challenges for accurate perception and recognition. **Diverse Human-centric Interactions.** Apart from abundant self-actions of humans, HuCenLife also includes rich human-human interactions (hugging, holding hands, holding a baby, _etc_.) and human-object interactions (riding a bike, opening the door, carrying a box, _etc_.). What's more, there are some extremely complex human-human-object interactions, such as playing basketball, having a meeting in a room, _etc_., which require the participation of multiple persons and objects. HuCenLife is unique in containing diversified interaction data in a variety of scenes, which is significant for the research of human-machine cooperation and boosts the development of service robots. **Rich Annotations.** HuCenLife provides rich fine-grained annotations, which can benefit many perception tasks, such as point cloud segmentation, 3D detection, 3D tracking, action recognition, HOI, motion prediction, _etc_. In particular, due to complex scene contents, the annotation process of HuCenLife is much more difficult than for other datasets. A well-trained annotator usually spends \(25\) minutes on average labeling one frame of LiDAR point cloud in our dataset. ### Privacy Preservation We strictly obey privacy-preserving rules. We mask all sensitive information, such as the faces of humans and locations, in RGB images. LiDAR point clouds, which contain no texture or facial information, naturally protect privacy. ## 4 Various Downstream Tasks As mentioned above, our dataset can benefit numerous human-centric 3D perception tasks. We conduct three main tasks on HuCenLife based on the LiDAR point cloud, including human-centric instance segmentation, human-centric 3D detection, and human-centric action recognition, and provide baseline methods. In particular, novel methods are proposed for instance segmentation and action recognition, respectively, to tackle the difficulties of large-scale human-centric scenarios. In what follows, we present details of these tasks with extensive experiments in order. ## 5 Human-centric Instance Segmentation For LiDAR point cloud-based semantic instance segmentation, the input is expressed as \(P\in\mathcal{R}^{N\times 4}\), which involves \(N\) points with the 3D location and reflection intensity \((x,y,z,r)\). The task is to assign each point to a category and then output a set of object instances with their corresponding semantic labels. \begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c} \hline Dataset & Data & LiDAR & Point Cloud & Person & Person & Scenes & Annotation Content & Annotation Targets \\ & Modality & Beam & Frame & Number & Per Frame & indoor & outdoor & ins. seg. & 3D bbx & action & multi-person & inter. obj.
\\ \hline ScanNet[12] & RGBD & - & - & - & - & ✓ & ✗ & ✓ & ✗ & ✗ & ✗ \\ \hline S3DIS[1] & RGBD & - & - & - & - & ✓ & ✗ & ✓ & ✗ & ✗ & ✗ \\ \hline SUN RGB-D[47] & RGBD & - & - & - & - & ✓ & ✗ & ✗ & ✗ & ✗ & ✗ \\ \hline NTU RGB+D[44] & RGBD & - & - & - & ✓ & ✗ & ✗ & ✗ & ✓ & ✗ & ✗ \\ \hline BEHAVE[5] & RGBD & - & 15.8k & 1.58k & 1 & ✓ & ✗ & ✓ & ✗ & ✗ & ✓ \\ \hline SemanticKITTI[31] & pc & 64 & 43k & 9.7k & 0.2 & ✗ & ✓ & ✓ & ✗ & ✗ & ✓ & ✗ \\ \hline KITTI[21] & image\&pc & 64 & 15k & 4.5k & 0.3 & ✗ & ✓ & ✓ & ✓ & ✗ & ✓ & ✗ \\ \hline Waymo[50] & image\&pc & 64 & 230k & 2.8M & 12 & ✗ & ✓ & ✓ & ✓ & ✗ & ✓ & ✗ \\ \hline nuScenes[6] & image\&pc & 32 & 40k & 208k & 5 & ✗ & ✓ & ✓ & ✓ & ✗ & ✓ & ✗ \\ \hline STCrowd[11] & image\&pc & 128 & 11k & 219k & 20 & ✗ & ✓ & ✗ & ✓ & ✗ & ✓ & ✗ \\ \hline **HuCenLife** & image\&pc & 128 & 6.1k & 65k & 11 & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline \end{tabular} \end{table} Table 1: Comparison with related datasets for 3D scene understanding. There are some abbreviations, where “pc” denotes LiDAR point cloud, “ins. seg.” means instance segmentation, “bbx” is bounding box, and “inter. obj.” denotes objects having interactions with humans. ### Method In human-centric scenes, people have diverse pose types and may stay close together, with occlusions. Moreover, some objects are relatively small and closely located to the person, causing their points to overlap or merge with those of humans and making them difficult to distinguish from the person. To tackle these problems, we propose a Human-Human-Object Interaction (HHOI) module, shown in Figure 3. The model first extracts human-human interaction features with an attention strategy, so that humans can be more accurately recognized even with partial point clouds in occluded scenes. Then, it uses human-centric features to guide the network to automatically learn weighted features that attend to interactive objects, which benefits capturing fine-grained semantic information. #### 5.1.1 Human-Human-Object-Interaction Module As shown in Figure 3, we utilize a sparse 3D UNet to get a \(D\)-dimensional point feature \(F_{p}\in\mathcal{R}^{N\times D}\). Then, human-human interaction features are extracted through a transformer mechanism. We get the semantic score \(Y=softmax(MLP(F_{p}))=\{y_{i,c}\}^{N\times C}\) for each point, where \(C\) is the class number. We then select \(M\) points whose confidence of belonging to the person class is higher than the threshold \(\tau\). We further apply the triplet Q, K, V attention layer to extract correlations among different sampled person features \(F_{s}\) and obtain the final human-guided feature: \[f_{attention}=softmax(\frac{QK^{T}}{\sqrt{D}})V,\] \[F_{g}=LN(f_{attention}+FFN(f_{attention})),\] where \(LN\) is layer normalization and \(FFN\) is the feed-forward neural network [54]. Then, we use the human-guided feature to extract human-object interactions for fine-grained object segmentation. The similarity weight matrix \(W=softmax(F_{p}F_{g}^{T})\) is computed to enhance the features of objects that people interact with. We multiply the weight matrix with the point features to obtain the final weighted features. In this way, the model adaptively learns human-related representations and enhances the object feature with the guidance of high-confidence human features.
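A compact sketch of the HHOI computation described above is given below (PyTorch; our own illustration). The attention, the LayerNorm-plus-FFN step, and the similarity weighting follow the equations in the text, while the final residual fusion of the weighted human-guided features with \(F_{p}\) is our assumption, since the exact aggregation is not spelled out here.

```python
import torch
import torch.nn as nn

class HHOI(nn.Module):
    """Sketch of the Human-Human-Object-Interaction weighting (Sec. 5.1.1)."""
    def __init__(self, dim, person_class=0, tau=0.5):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)                 # triplet Q, K, V projection
        self.ffn = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.ln = nn.LayerNorm(dim)
        self.person_class, self.tau = person_class, tau    # person_class: assumed index

    def forward(self, F_p, sem_logits):
        # F_p: (N, D) point features; sem_logits: (N, C) pre-softmax semantic scores
        conf = sem_logits.softmax(-1)[:, self.person_class]
        F_s = F_p[conf > self.tau]                         # high-confidence person points
        q, k, v = self.qkv(F_s).chunk(3, -1)
        att = (q @ k.T / k.shape[-1] ** 0.5).softmax(-1) @ v
        F_g = self.ln(att + self.ffn(att))                 # human-guided features
        W = (F_p @ F_g.T).softmax(-1)                      # point-to-person similarity
        return F_p + W @ F_g                               # residual fusion (assumption)
```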
#### 5.1.2 Point-wise Prediction and Refinement Taking the weighted features as input, the semantic branch and offset branch apply a two-layer MLP and output the semantic scores \(S\in\mathcal{R}^{N\times K}\) and offset vectors \(O\in\mathcal{R}^{N\times 3}\) from each point to its instance center, respectively. The weighted cross-entropy loss \(\mathcal{L}_{\text{semantic}}\) and \(L_{1}\) regression loss \(\mathcal{L}_{\text{offset}}\) are used to train the semantic and offset branches. After that, we follow the refinement stage in SoftGroup [55], where point-level proposals are fed into a tiny-unet to predict classification scores, instance masks, and mask scores to generate the final instance results. Specifically, the classification branch predicts the category scores \(c_{k}\) for each instance. The segmentation branch utilizes a point-wise MLP to predict an instance mask \(m_{k}\) for each instance proposal. The mask scoring branch estimates the IoU between the predicted mask and the ground truth for each instance. We train each branch with cross-entropy loss \(\mathcal{L}_{\text{class}}\), binary cross-entropy loss \(\mathcal{L}_{\text{mask}}\), and \(l_{2}\) regression loss \(\mathcal{L}_{\text{mask score}}\). The total loss is the sum of all the above losses. ### Experiments #### 5.2.1 Baselines and Evaluation Metrics Previous 3D instance segmentation works can be divided into LiDAR-based methods and RGB-D-based methods. For the former, we compare with the current SOTA method DSNet [26] in both its voxel-division and cylinder-division versions. For the latter, we select the current SOTA approaches DKNet [58] and SoftGroup [55] for comparison. We utilize mean IoU (mIoU) to evaluate the quality of the semantic segmentation. For instance segmentation, we report AP50 and AP25, which denote the scores with IoU thresholds of 50% and 25%, respectively. #### 5.2.2 Results **Comparison on HuCenLife dataset.** We compare the results of our proposed method with baseline methods in Table 2. DSNet does not obtain satisfactory results, mainly because it focuses on traffic scenarios, while the span of object scales is much larger in human-centric scenarios. SoftGroup is better than the outdoor methods because it has a refinement stage for recognizing small objects. Our method performs best due to the use of interaction information. **Comparison on BEHAVE dataset.** To further evaluate the generalization capability on human-object interaction scenes, we also conduct experiments for semantic segmentation on the BEHAVE [5] dataset in Table 3. The BEHAVE dataset is a human-object interaction dataset, which is collected in indoor scenarios and provides RGB-D frames and 3D SMPL models. To adapt it to our task, we generate point clouds and segmentation labels from the RGB-D images and segmented masks. There is only a single person with a single object per frame, and the total number of object categories is 20. We follow the official protocol of dataset splitting. Our method still outperforms the best baseline method SoftGroup by 2.8% in mIoU. **Sensor-fusion-based 3D segmentation.** Since our dataset also contains image data, we provide LiDAR-camera sensor-fusion baselines based on our method in Table 2 to facilitate further research. PointPainting appends the corresponding RGB color to each raw LiDAR point according to the calibration matrix. LocalFusion concatenates high-dimensional image features to the corresponding high-dimensional point semantic features.
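Both fusion baselines rely on projecting LiDAR points into the calibrated images. A minimal pinhole-projection sketch follows (the matrix names are hypothetical, and the dataset's actual calibration format may differ):

```python
import numpy as np

def project_lidar_to_image(points, T_cam_lidar, K):
    """Project LiDAR points into one camera image.

    points      : (N, 3) xyz coordinates in the LiDAR frame
    T_cam_lidar : (4, 4) extrinsic matrix (LiDAR -> camera) from calibration
    K           : (3, 3) camera intrinsic matrix
    Returns pixel coordinates and the mask of points in front of the camera.
    """
    pts_h = np.hstack([points, np.ones((points.shape[0], 1))])
    cam = (T_cam_lidar @ pts_h.T)[:3]        # points in the camera frame
    in_front = cam[2] > 0.1                  # discard points behind the camera
    uvw = K @ cam[:, in_front]
    uv = (uvw[:2] / uvw[2]).T                # perspective division -> (u, v)
    return uv, in_front
```

PointPainting would then sample the RGB value at each (u, v) and append it to the corresponding point, while LocalFusion would sample a feature map instead.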
Our HHOI module consistently improves the performance of various fusion strategies, validating its generalization ability. ## 6 Human-centric 3D Detection LiDAR point cloud-based 3D detection has been well studied in recent years, driven by autonomous driving. It provides critical information about obstacles for the motion planning of robots to guarantee safety. Specifically, the input for 3D detection is the point cloud \(P\) and the output is predicted bounding boxes with 7 dimensions (_x,y,z,w,l,h,r_), consisting of the 3D position in the LiDAR coordinate system, the size of the bounding box, and the rotation. In this section, we provide benchmarks for the 3D detection task on HuCenLife by evaluating current state-of-the-art methods, and we discuss the research of human-centric 3D detection. ### Baselines and Evaluation Metrics We choose four representative works and test their performance on our dataset. CenterPoint [65] is a popular anchor-free detector, and based on it, STCrowd [11] aims at solving dense crowd scenarios. By means of the transformer mechanism, TED [57] and CenterFormer [75] have achieved impressive performance recently. Following [11, 6], we use Average Precision (AP) with 3D center distance thresholds D = {0.25, 0.5, 1} meters as the evaluation metric. Then mean Average Precision (mAP) is obtained by averaging AP. ### Results and Discussion We conduct experiments on two settings, including **person-only 3D detection** in Table 4 and **full-category 3D detection** in Table 5. \begin{table} \begin{tabular}{l|c c c|c} \hline Methods & AP(0.25) & AP(0.5) & AP(1.0) & mAP \\ \hline CenterPoint[65] & 61.8 & 68.7 & 70.3 & 66.9 \\ STCrowd[11] & 61.8 & 71.6 & 73.4 & 68.9 \\ TED[57] & 51.0 & 53.3 & 54.1 & 52.8 \\ CenterFormer[75] & 73.0 & 80.1 & 81.4 & 78.2 \\ \hline \end{tabular} \end{table} Table 4: Person-only 3D detection results on HuCenLife. Figure 4: The visualization of semantic (first row) and instance (second row) segmentation results of our method on HuCenLife. \begin{table} \begin{tabular}{l|c|c|c|c|c|c} \hline Methods & motorbike & box & cart & scooter & backpack & object in hand \\ \hline CenterPoint[65] & 13.4 & 17.1 & 20.9 & 43.4 & 4.2 & 8.4 \\ \hline STCrowd[11] & 5.4 & 14.4 & 25.3 & 48.7 & 4.5 & 13.5 \\ \hline CenterFormer[75] & 3.8 & 16.2 & 24.2 & 44.4 & 2.6 & 12.5 \\ \hline \end{tabular} \end{table} Table 5: Full-category 3D detection results (AP) on HuCenLife. We only select six types of objects for demonstration. These baseline methods are designed for large-scale traffic scenarios and perform poorly on human-centric scenarios, especially when detecting small objects. We summarize three main challenges for conducting 3D detection in human-centric scenarios. First, people usually have different poses in different actions, such as crouching, sitting, waving, etc., and such diverse body poses cause widely varying bounding box sizes. Second, there are many relatively small objects in scenes, making it difficult to balance the accuracy of fine-grained detection and the efficiency of large-scale scene data processing. Third, multiple objects may be located at different heights in the same place, such as in complex scenarios with escalators and slides, requiring feature recognition over a larger vertical extent. Previous methods using BEV feature maps miss such details, and transformer-based methods incur prohibitive cost. Therefore, there is a lot of room for 3D detection research in human-centric scenes, and our dataset offers a good platform for it.
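For reference, the center-distance AP used above can be sketched as follows for a single frame and a single class (a simplified illustration; the full protocol aggregates over frames, classes, and the three distance thresholds):

```python
import numpy as np

def ap_center_distance(scores, centers, gt_centers, d):
    """Simplified AP with a 3D center-distance matching threshold d (meters)."""
    order = np.argsort(-np.asarray(scores))            # best-scoring predictions first
    matched = np.zeros(len(gt_centers), dtype=bool)
    tp, fp = np.zeros(len(order)), np.zeros(len(order))
    for rank, i in enumerate(order):
        dist = np.linalg.norm(gt_centers - centers[i], axis=1)
        dist[matched] = np.inf                         # each GT matches at most once
        j = int(dist.argmin()) if dist.size else -1
        if j >= 0 and dist[j] <= d:
            matched[j] = True; tp[rank] = 1
        else:
            fp[rank] = 1
    tp, fp = np.cumsum(tp), np.cumsum(fp)
    recall = tp / max(len(gt_centers), 1)
    precision = tp / np.maximum(tp + fp, 1e-9)
    return np.trapz(precision, recall)                 # area under the PR curve

# mAP: average over d in {0.25, 0.5, 1.0} (and over classes, for Table 5)
```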
## 7 Human-centric Action Recognition Previous works for action recognition are based on 2D images or videos and only need to assign one label per scene. We introduce the 3D action recognition task in large-scale human-centric scenarios, which aims to detect all persons in the scene and provide corresponding action types. The 3D action recognition task is significant for fine-grained scene understanding and can benefit the development of intelligent surveillance and collaborative robots. To our knowledge, we are the first to propose the related dataset and solutions for this new task. ### Method Our 3D action recognition method follows a two-stage pipeline based on the input LiDAR point cloud, as shown in Figure 5. Considering that some human actions are related to adjacent interactive objects, after obtaining individual bounding boxes from a 3D detector, we enlarge each box to crop more points related to the person for the following fine-grained feature extraction. In particular, we leverage a Hierarchical Point Feature Extraction module to pay attention to multi-scale objects and obtain multi-level features. Moreover, we design an Ego-Neighbour Feature Interaction (ENFI) module to make use of the relationships between the ego-person and neighbors to help forecast social actions. #### 7.1.1 Hierarchical Point Feature Extraction To capture both global features and local features with dynamically changing receptive fields, we use \(R\) parallel branches to extract multi-resolution features. Serial Set Abstractions [39] are applied to process the features at different scales, where each branch applies the abstraction \(L\) times with a fixed number of sampling centers and a branch-specific sampling range. Finally, these features are up-sampled to the same dimension and fused together with pooling to generate the hierarchical fusion feature \(F_{HF}\). #### 7.1.2 Ego-Neighbour Feature Interaction As Figure 5 shows, we first enhance the ego-person feature by self-attention and get \(F_{ego}\). Then, we select features of \(k\) neighbours around the target as \(K_{neigh}\) and \(V_{neigh}\) and take the ego-feature as queries \(Q_{ego}\). The distances from neighbours to the target are used for position encoding. We apply cross-attention to extract the ego-neighbour interaction information and obtain the final interaction-enhanced ego feature by \(F_{IE}=F_{ego}\bigoplus\text{CrossAttention}(Q_{ego},K_{neigh},V_{neigh})\), where \(\bigoplus\) denotes concatenation. In this way, we model the relationships of a group to benefit social action recognition. ### Experiments #### 7.2.1 Baselines and Evaluation Metrics For fair comparison, we take pre-trained CenterPoint as the 3D detector for all the experiments in this section. Since no existing methods can be directly applied to the new 3D action recognition task, we provide benchmarks and comparisons from four aspects, as Table 6 shows. The first is to directly adapt the 3D detector to predict multi-class persons with different action labels, which is the "Baseline" in Table 6. Figure 5: Pipeline of our method for human-centric action recognition. We first utilize a 3D detector to obtain a set of bounding boxes of persons. Then, for each person, we extract multi-resolution features and get a hierarchical fusion feature \(F_{HF}\). Next, we leverage the relationship with neighbors to enhance the ego-feature and obtain a comprehensive feature \(F_{IE}\) for the final action classification.
The second is to add a feature extractor for the cropped individual point clouds for the second-stage action classification, and we tried several popular point-feature extractors, including PVT, PointNet, PointNet++, PointMLP, and PointNeXt. In particular, to verify the effect of input modalities, we also use ViT to extract image features for image-based action recognition by projecting the 3D bounding box to the calibrated images. Finally, we provide the results of our solution with an ablation of the ENFI module. We use the mean Average Precision (mAP), obtained by averaging AP over the thresholds \(D=\{0.25,0.5,1\}\) and classes, to evaluate the performance: \[\mathrm{mAP}=\frac{1}{|\mathbb{C}||\mathbb{D}|}\sum_{c\in\mathbb{C}}\sum_{d \in\mathbb{D}}\mathrm{AP}_{c,d}\] where \(|\mathbb{C}|\) is the number of action categories and \(|\mathbb{D}|\) is the number of distance thresholds. In addition, we also utilize Mean Recall (mRecall) and Mean Precision (mPrecision) by averaging recall and precision over thresholds and classes. #### 7.2.2 Results and Discussion We show the overall performance in Table 6; detailed evaluation values for all categories of actions and visualization results are in the supplementary material. It can be seen from the results that our method outperforms the others by an obvious margin, mainly due to the multi-level feature extraction and multi-person interaction modeling, which are more suitable for understanding complex human-centric scenarios. However, our method has its own limitations, and there are several potential improvement directions. First, the current two-stage framework strongly relies on the detector performance, so one-stage methods for action recognition in large-scale scenes are worth exploring. Moreover, human actions are time-dependent, and extracting valuable temporal information from consecutive data to eliminate the ambiguity of actions is also a promising direction. ## 8 More Tasks on HuCenLife In this paper, we provide benchmarks on HuCenLife for three main tasks, including 3D segmentation, 3D detection, and action recognition in human-centric scenarios. However, benefiting from the rich annotations in the HuCenLife dataset, many other tasks deserve exploration. ### Human-Object Interaction Detection Recently, the task of Human-Object Interaction (HOI) detection [70, 66] has attracted more and more attention; it aims to detect the person and the interacted object while classifying the interaction category. Current studies and datasets are limited to the interaction between a single person and a single object in one scene, and they are all based on the image modality. 3D HOI tasks in large-scale free environments with multiple persons and multiple objects can be formulated and evaluated on HuCenLife. ### Tracking and Trajectory Prediction HuCenLife contains sequential frames of data with tracking ID annotations for all instances, which can facilitate time-related tasks, such as 3D tracking [73, 72] and trajectory prediction [35, 16]. These tasks are challenging due to the occlusions in crowded scenes, but it is significant to study consecutive behaviors and interactions in the real world to provide valuable guidance for robots. ### 3D Scene Generation With the success of diffusion models [25] in image generation, many works try to achieve high-quality 3D data generation for single objects [33] or scenes [77]. HuCenLife provides rich material for daily-life scenarios, and it is interesting to generate more human-centric scene data with semantic information to facilitate learning-based methods.
### Multi-modal Feature Fusion

Apart from point clouds, HuCenLife also provides corresponding images. The complementary information of multi-modal features will benefit all the tasks mentioned above, and this direction deserves in-depth research.

## 9 Conclusion

We discuss the challenges, significance, and potential research directions of 3D human-centric scene understanding in this paper. Specifically, we propose the first related large-scale dataset with rich fine-grained annotations, which can facilitate research on many 3D tasks and has the potential to boost the development of assistive robots, surveillance, etc. Moreover, we provide benchmarks for various tasks and propose novel methods for human-centric 3D segmentation and human-centric action recognition to facilitate further research.

\begin{table} \begin{tabular}{l|c c c} \hline Methods & mAP & mRecall & mPrecision \\ \hline Baseline & 7.3 & 14.6 & 19.9 \\ \hline + ViT[17] & 9.4 & 23.1 & 19.9 \\ \hline + PVT[68] & 13.2 & 30.5 & 19.8 \\ + PointNet[39] & 8.4 & 26.3 & 15.5 \\ + PointNet++[40] & 15.6 & 34.2 & 22.7 \\ + PointMLP[34] & 11.3 & 28.0 & 19.4 \\ + PointNeXt[41] & 15.0 & 33.0 & 21.2 \\ \hline Ours & **21.0** & **40.0** & **26.9** \\ Ours(w/o ENFI) & 15.4 & 37.1 & 24.7 \\ \hline \end{tabular} \end{table} Table 6: Comparison results of action recognition on HuCenLife. All methods are based on the same 3D detector for fair evaluation.
2303.06261
Interpretable Outlier Summarization
Outlier detection is critical in real applications to prevent financial fraud, defend against network intrusions, or detect imminent device failures. To reduce the human effort in evaluating outlier detection results and effectively turn the outliers into actionable insights, users often expect a system to automatically produce interpretable summarizations of subgroups of outlier detection results. Unfortunately, to date no such systems exist. To fill this gap, we propose STAIR, which learns a compact set of human-understandable rules to summarize and explain the anomaly detection results. Rather than using the classical decision tree algorithms to produce these rules, STAIR proposes a new optimization objective to produce a small number of rules with least complexity, hence strong interpretability, to accurately summarize the detection results. The learning algorithm of STAIR produces a rule set by iteratively splitting large rules and is optimal in maximizing this objective in each iteration. Moreover, to effectively handle high dimensional, highly complex data sets which are hard to summarize with simple rules, we propose a localized STAIR approach, called L-STAIR. Taking data locality into consideration, it simultaneously partitions data and learns a set of localized rules for each partition. Our experimental study on many outlier benchmark datasets shows that STAIR significantly reduces the complexity of the rules required to summarize the outlier detection results, thus being more amenable for humans to understand and evaluate, compared to the decision tree methods.
Yu Wang, Lei Cao, Yizhou Yan, Samuel Madden
2023-03-11T00:53:49Z
http://arxiv.org/abs/2303.06261v3
# Interpretable Outlier Summarization

###### Abstract.

Outlier detection is critical in real applications to prevent financial fraud, defend against network intrusions, or detect imminent device failures. To reduce the human effort in evaluating outlier detection results and effectively turn the outliers into actionable insights, users often expect a system to automatically produce interpretable summarizations of subgroups of outlier detection results. Unfortunately, to date no such systems exist. To fill this gap, we propose STAIR, which learns a compact set of human-understandable _rules_ to summarize and explain the anomaly detection results. Rather than using the classical decision tree algorithms to produce these rules, STAIR proposes a new optimization objective to produce a small number of rules with least complexity, hence strong interpretability, to accurately summarize the detection results. The learning algorithm of STAIR produces a rule set by iteratively splitting large rules and is optimal in maximizing this objective in each iteration. Moreover, to effectively handle high dimensional, highly complex data sets which are hard to summarize with simple rules, we propose a _localized_ STAIR approach, called L-STAIR. Taking data locality into consideration, it simultaneously partitions data and learns a set of localized rules for each partition. Our experimental study on many outlier benchmark datasets shows that STAIR significantly reduces the complexity of the rules required to summarize the outlier detection results, thus being more amenable for humans to understand and evaluate, compared to the decision tree methods.

Outlier detection, Decision trees, Data Partitioning
## 1. Introduction

Outliers deviate from the normal phenomena. In most cases the outliers tend to be very different from each other in their features. Therefore, they cannot be simply summarized based on their similarity in the feature space measured by some similarity function.

**Proposed Approach.** To meet the need for an effective outlier summarization and interpretation tool, we have developed STAIR. It produces a set of human-understandable abstractions, each describing the common properties of a group of detection results. This allows the users to efficiently verify a large number of anomaly detection results and diagnose the root causes of the potential outliers by examining only a small set of interpretable abstractions.

**Rule-based Outlier Summarization and Interpretation.** STAIR leverages classical decision tree classification to learn a compact set of human-understandable _rules_ to summarize and explain the anomaly detection results. Using the results produced by an anomaly detection method as training data, STAIR learns a decision tree to accurately separate outliers and inliers in the training set. Each branch of the decision tree is composed of a set of data attributes with associated values that iteratively split the data. Therefore, it can be thought of as a _rule_ that represents a subset of data sharing the same class (outlier or inlier) and that is easy for humans to understand.

**Outlier Summarization and Interpretation-aware Objective.** However, decision tree algorithms target maximizing the classification accuracy. Rules learned in this way do not necessarily have the properties desired for outlier summarization and interpretation. This is because, when handling highly complex data sets, decision trees often have to be _deep_ trees with _many_ branches to minimize the classification errors, and hence produce a lot of complex rules which are hard for humans to understand. Although some methods like CART (CART, 2017) have been proposed to prune a learned decision tree in a post-processing step, they target avoiding overfitting and thus lifting the classification accuracy; they do not guarantee the simplicity of each rule.

To solve the above issues, we propose a new optimization objective customized to outlier summarization and interpretation. It targets producing the minimal number of rules that are as simple as possible, while still assuring the classification accuracy. However, the simplicity requirement of outlier summarization and interpretation conflicts with the accuracy requirement, and it is hard for the users to manually set an appropriate regularization term to balance the two requirements. STAIR thus introduces a learnable regularization parameter into the objective and relies on the learning algorithm to automatically make the trade-off.

**Rule Generation Algorithm.** We then design an optimization algorithm to generate the summarization- and interpretation-aware rules. Similar to the classic decision tree algorithms (Liu et al., 2017), STAIR produces a rule set by iteratively splitting decision nodes. In each iteration, STAIR dynamically adjusts the regularization parameter to ensure that it is always able to produce a valid split which increases the objective.
We prove that the regularization parameter and the rule split that STAIR learns in each iteration are, as a combination, _optimal_ in maximizing the objective.

**Localized Outlier Summarization and Interpretation.** To solve the problem that a single decision tree with a small number of simple rules is not adequate to satisfy the accuracy requirement when handling high dimensional, highly complex data sets, we propose a _localized_ STAIR approach, called L-STAIR. Taking data locality into consideration, L-STAIR divides the whole data set into multiple partitions and learns a localized tree for each partition. Rather than first partitioning the data and then learning the localized trees in two disjoint steps, L-STAIR jointly solves the two sub-problems. In each iteration, it optimizes the data partitioning and rule generation objectives alternately and is guaranteed to converge to a partitioning that can be summarized with simple rules.

**Contributions.** The key contributions of this work include:

* To the best of our knowledge, STAIR is the first approach that summarizes outlier detection results with human-interpretable rules.
* We define an outlier summarization and interpretation-aware optimization objective which targets producing the minimal number of rules with least complexity, while still guaranteeing the classification accuracy.
* We design a rule generation method which is optimal in optimizing the STAIR objective in each iteration.
* We propose a localized STAIR approach which jointly partitions the data and produces rules for each local partition, thus scaling STAIR to high dimensional, highly complex data.
* Our extensive experimental study confirms that, compared to other decision tree methods, STAIR significantly reduces the complexity and the number of rules required to summarize outlier detection results.

## 2. Preliminary: Decision Tree

In this section, we overview the decision tree classification problem and its classical learning algorithms.

**Decision Tree Overview.** Decision tree learning is a classical classification technique where the learned function is represented by a decision tree. It classifies instances by sorting them down the tree from the root to a leaf node, which predicts the label of the instance. Each node in the tree denotes a test of a specific attribute, and an instance is classified by moving down the tree branch from this node according to the value of the attribute in the given example.

**Learning Algorithms.** Most algorithms learn decision trees in a top-down, greedy search manner, such as ID3 (Zhou et al., 2017) and its successor C4.5 (Zhou et al., 2017). The basic algorithm, ID3, runs a _statistical test_ to choose the instance attribute that best classifies the data points. Starting from the root node, the algorithm finds the best attribute to form branches and then puts all the training examples into the corresponding child nodes. It then repeats this entire process on the training data associated with the child nodes to select the appropriate attribute and value for the current node and to form new branches from the child nodes.

**Information Gain-based Statistical Test.** There are several strategies for the statistical test at each step. One of the most popular is _information gain_, which measures how well a given attribute separates the training examples. Before giving the precise definition of information gain, we first define _entropy_. Given a data collection \(S\) containing positive and negative examples, the entropy of \(S\) is:

\[Entropy(S)=-p_{+}\log_{2}p_{+}-p_{-}\log_{2}p_{-} \tag{1}\]

where \(p_{+}\) and \(p_{-}\) are the proportions of positive and negative examples in \(S\), respectively. Next, we give the formulation of the information gain of an attribute \(A\) with split value \(v\), relative to a collection of examples \(S\):

\[Gain(S,A,v)=Entropy(S)-\sum_{b\in Branches}\frac{|S_{b}|}{|S|}Entropy(S_{b}) \tag{2}\]

where _Branches_ contains two branches, holding the training examples with attribute \(A\) smaller or larger than the value \(v\), respectively, and \(S_{b}\) refers to the collection of examples from branch \(b\). The learning algorithm iteratively splits nodes and forms branches by maximizing Eq. 2 at each step. Learning the decision tree in this way is equivalent to maximizing the global objective:

\[\max\sum_{l\in L}n_{l}(1-Entropy(S_{l})) \tag{3}\]

where \(S_{l}\) represents the collection of training examples in leaf node \(l\) and \(n_{l}\) represents the number of examples falling into node \(l\).
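As a quick sanity check of Eqs. 1 and 2, the following NumPy sketch (helper names are ours) computes the entropy of a binary label set and the information gain of a threshold split:

```python
import numpy as np

def entropy(labels):
    """Binary entropy (Eq. 1); the convention 0 * log2(0) = 0 is used."""
    if len(labels) == 0:
        return 0.0
    p_pos = np.mean(labels)  # proportion of positive (outlier) examples
    return -sum(q * np.log2(q) for q in (p_pos, 1.0 - p_pos) if q > 0)

def information_gain(X, y, attr, v):
    """Gain(S, A, v) of Eq. 2 for splitting attribute `attr` at value `v`."""
    left, right = y[X[:, attr] <= v], y[X[:, attr] > v]
    children = sum(len(part) / len(y) * entropy(part) for part in (left, right))
    return entropy(y) - children
```

A greedy learner repeatedly picks the (attribute, value) pair maximizing `information_gain`, which, as noted above, amounts to maximizing the global objective Eq. 3.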
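Def. 3.1 maps directly onto a small data structure. Below is a minimal sketch (class and field names are ours) that represents a rule as a conjunction of interval clauses; one-sided rules such as \(R_{2}\) and \(R_{3}\) are encoded with infinite endpoints:

```python
import math
from dataclasses import dataclass, field

@dataclass
class Rule:
    """A rule per Def. 3.1: clauses mapping attribute -> (a_j, b_j), plus a class label."""
    clauses: dict = field(default_factory=dict)  # attribute index -> (a_j, b_j)
    label: str = "inlier"

    def covers(self, x):
        """True iff every constrained attribute of x lies in its interval."""
        return all(a <= x[j] <= b for j, (a, b) in self.clauses.items())

    @property
    def length(self):
        """L(r): the number of attributes appearing in the rule."""
        return len(self.clauses)

# The three rules of Fig. 1 (attribute x1 has index 0):
r1 = Rule({0: (-2, 2)}, "inlier")
r2 = Rule({0: (2, math.inf)}, "outlier")
r3 = Rule({0: (-math.inf, -2)}, "outlier")
assert r1.covers([0.5]) and r3.covers([-3.0])
```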
## 4. The Optimization Objective of Rule Generation

### The Insufficiency of Classic Decision Trees

Intuitively, to produce rules effectively summarizing and interpreting the outlier detection results, we could directly apply classical decision tree algorithms such as ID3 (Dong et al., 2018). That is, we use the output of the outlier detection method as ground truth labels to train a decision tree model and then extract rules from the learned decision tree. However, decision tree algorithms target producing rules that maximize the classification accuracy. The rules learned in this way do not necessarily have the desired properties when used in outlier summarization and interpretation, for the following reasons:

First, they may produce rules that contain many attributes and thus are too complicated for humans to evaluate. For example, humans can easily understand and reason about a rule with a couple of attributes, such as the rules in Fig. 1, while it is much harder for humans to obtain any meaningful information from a complicated rule with many attributes. For instance, a rule with 20 attributes \(a_{1}\leq x_{1}\leq b_{1},\cdots,a_{20}\leq x_{20}\leq b_{20}\) is almost impossible for a human to understand. Second, to maximize the classification accuracy they may produce many rules. However, to reduce the human evaluation effort, ideally we want to produce as few rules as possible. The above situations can happen when handling highly complex data sets, which often require a _deep_ tree with _many_ branches.

### Summarization and Interpretation-aware Objective

To address the above concerns, we design an optimization objective customized to outlier summarization and interpretation. It targets producing the minimal number of rules that are as simple as possible, while still guaranteeing the classification accuracy. The objective is composed of two sub-objectives, namely the _length objective_ and the _entropy objective_.

**Length Objective.** To minimize the number of the rules as well as bound the complexity of each rule, we first introduce an objective with respect to the lengths of the rules in rule set \(\mathcal{R}\):

\[\min_{\mathcal{R}}\mathcal{L}(\mathcal{R}),\,\text{where}\,\,\mathcal{L}(\mathcal{R})=\sum_{r_{i}\in\mathcal{R}}L(r_{i})\quad\text{s.t.}\,\,L(r_{i})\leq L_{m} \tag{4}\]

In Eq. 4, \(\mathcal{R}\) denotes a rule set, \(L(r_{i})\) denotes the length of a rule \(r_{i}\in\mathcal{R}\), and \(L_{m}\) is the predefined maximal length of each rule that the users allow. Essentially, the total length of all rules represents the complexity of the learned model. Minimizing it will effectively reduce the number of rules, while at the same time simplifying each rule.
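Under the rule representation sketched above, the length objective and its constraint (Eq. 4) are a few lines:

```python
def total_length(rules):
    """L(R) of Eq. 4: the summed lengths of all rules in the set."""
    return sum(r.length for r in rules)

def within_length_cap(rules, L_m):
    """The constraint of Eq. 4: no rule may use more than L_m attributes."""
    return all(r.length <= L_m for r in rules)
```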
**Entropy Objective.** To maximize the classification accuracy of the derived model, we adopt the entropy-based optimization objective from the classical decision tree algorithms (Han et al., 2015), i.e., ID3 and C4.5, as illustrated in Sec. 2:

\[\max_{\mathcal{R}}\mathcal{S}(\mathcal{R}),\,\,\text{where}\,\,\mathcal{S}(\mathcal{R})=\sum_{r_{i}\in\mathcal{R}}n_{r_{i}}E(r_{i}) \tag{5}\]

In Equation 5, \(E(r_{i})\) corresponds to \(1-Entropy(r_{i})\). Maximizing Eq. 5 effectively maximizes the classification accuracy. Combining Eq. 5 and Eq. 4, our summarization and interpretation-aware objective (Eq. 6) maximizes the classification accuracy, while at the same time minimizing the total length of the rules:

\[\max_{\mathcal{R}}\mathcal{S}(\mathcal{R})=\frac{\sum_{r_{i}\in\mathcal{R}}n_{r_{i}}E(r_{i})}{\sum_{r_{i}\in\mathcal{R}}L(r_{i})}\quad\text{s.t.}\,\,L(r_{i})\leq L_{m},\,F1(\mathcal{R})>F1_{m} \tag{6}\]

where \(L_{m}\) corresponds to the _maximal_ length of a rule that the users allow, while \(F1_{m}\) is a predefined requirement on the classification accuracy, which is measured by F1 score in the case of outlier detection.

**Optimization Issue.** However, in practice we observed that this objective causes issues in the optimization process. Maximizing the entropy objective typically leads to more complex rules and in turn an increase of the length objective. However, the length objective often increases faster than the entropy objective. Therefore, the overall objective (Eq. 6) tends to stop increasing after a few iterations.

**Final Objective: Introducing a Stabilizer.** To solve this problem, we introduce a stabilizer \(M\) into the length objective, the denominator of Eq. 6:

\[\max_{\mathcal{R},M}\mathcal{S}(\mathcal{R},M)=\frac{\sum_{r_{i}\in\mathcal{R}}n_{r_{i}}E(r_{i})}{\sum_{r_{i}\in\mathcal{R}}L(r_{i})+M}\quad\text{s.t.}\,\,L(r_{i})\leq L_{m},\,F1(\mathcal{R})>F1_{m} \tag{7}\]

The stabilizer \(M\) mitigates the impact of the quickly increasing length objective. It ensures that the length objective does not dominate our summarization and interpretation-aware objective. Intuitively, in the extreme case of setting \(M\) to an infinitely large value, the increase of the total rule length becomes negligible to the objective; maximizing Eq. 7 is then in fact equivalent to the traditional entropy-based decision tree.

**Auto-learning Stabilizer M.** An appropriate value of \(M\) is critical to the quality of the learned rules. However, relying on the users to manually tune it is difficult. First, \(M\) can be any positive value and thus has an infinite number of options. Second, ideally \(M\) should dynamically change to best fit the evolving rule set produced in the iterative learning process. Therefore, rather than making it a hyperparameter, \(M\) is a learnable parameter in our objective function Eq. 7.

## 5. STAIR: Rule Generation Method

This section introduces our Summarization And Interpretation-aware Rule generation method (STAIR). Similar to the classic decision tree algorithms (Han et al., 2015), STAIR produces a rule set by iteratively splitting decision nodes. We prove that in each iteration STAIR is _optimal_ in maximizing our objective in Eq. 7. Below we first give the overall process of STAIR:

1. Initialize the stabilizer \(M\) in Eq. 7 to zero;
2. Increase the value of \(M\);
3. Find a node to split that increases the objective in Eq. 7; go to step 2.

In short, STAIR iteratively increases the value of \(M\) and splits the nodes.
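Reusing `entropy`, `Rule`, and `total_length` from the sketches above, the objective Eq. 7 can be evaluated directly; here \(n_{r}\) is the number of training points a rule covers, and the coverage loop is kept naive for clarity:

```python
def stair_objective(rules, X, y, M):
    """S(R, M) of Eq. 7: sum_r n_r * (1 - Entropy(r)) over (sum_r L(r) + M)."""
    purity = 0.0
    for r in rules:
        mask = np.array([r.covers(x) for x in X])
        purity += mask.sum() * (1.0 - entropy(y[mask]))  # n_r * E(r)
    denom = total_length(rules) + M
    return purity / denom if denom > 0 else float("inf")
```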
Next, we first show that the value of \(M\) is critical to the performance of STAIR and then introduce a method to calculate the optimal value of \(M\) at each iteration.

### The Value of M Matters

Given a rule set \(\mathcal{R}\), splitting a node \(n\) is equivalent to dividing one rule \(r\) in \(\mathcal{R}\) into two rules \(r_{1}\) and \(r_{2}\), where \(r_{1}\) and \(r_{2}\) end at the two child nodes of node \(n\) correspondingly. Given an \(M\) and a rule set \(\mathcal{R}\), we say a split \(sp(\mathcal{R},M)\) is **valid** if \(\mathcal{S}(\mathcal{R}\setminus\{r\}\cup\{r_{1},r_{2}\},M)>\mathcal{S}(\mathcal{R},M)\). That is, a valid split increases the objective defined in Eq. 7. For ease of presentation, we use \(\mathcal{S}(\mathcal{R}^{\prime},M)\) to denote \(\mathcal{S}(\mathcal{R}\setminus\{r\}\cup\{r_{1},r_{2}\},M)\). Next, we show that the smallest \(M\) that could produce a valid split is optimal in maximizing Eq. 7.

**Monotonicity Theorem**.: _Given a rule set \(\mathcal{R}\), if \(M_{a}>M_{b}\), then \(\mathcal{S}(\mathcal{R}^{\prime}_{a},M_{a})\) is guaranteed to be **smaller** than \(\mathcal{S}(\mathcal{R}^{\prime}_{b},M_{b})\), where \(\mathcal{R}^{\prime}_{a},\mathcal{R}^{\prime}_{b}\) denote the rule sets produced by a valid split on \(\mathcal{R}\) that maximizes the objective given \(M_{a}\) or \(M_{b}\)._

Proof.: Because \(M_{a}>M_{b}\), we have:

\[S(\mathcal{R}^{\prime}_{a},M_{a})=\frac{\sum_{r\in\mathcal{R}^{\prime}_{a}}n_{r}E(r)}{\sum_{r\in\mathcal{R}^{\prime}_{a}}L(r)+M_{a}}<\frac{\sum_{r\in\mathcal{R}^{\prime}_{a}}n_{r}E(r)}{\sum_{r\in\mathcal{R}^{\prime}_{a}}L(r)+M_{b}}=S(\mathcal{R}^{\prime}_{a},M_{b}) \tag{8}\]

Because \(\mathcal{R}^{\prime}_{b}\) corresponds to the best split given \(M_{b}\), we obtain:

\[S(\mathcal{R}^{\prime}_{a},M_{b})\leq S(\mathcal{R}^{\prime}_{b},M_{b}) \tag{9}\]

From Eq. 8 and Eq. 9, we have:

\[\mathcal{S}(\mathcal{R}^{\prime}_{a},M_{a})<\mathcal{S}(\mathcal{R}^{\prime}_{b},M_{b}) \tag{10}\]

This concludes our proof.

### Calculating the Optimal \(M\)

By Theorem 5.1, to maximize the objective at each iteration, it is necessary to search for the smallest value of \(M\) that could produce a valid split. Intuitively we could find the optimal \(M\) by gradually increasing the value of \(M\) at a fixed step size. However, this is neither effective nor efficient, because it is hard to set an appropriate step size. If it is too large, STAIR might miss the optimal \(M\). On the other hand, if the step size is too small, STAIR risks incurring many unnecessary iterations that do not produce any valid splits.

To solve the above problem, we introduce a method which uses the concept of a _boundary stabilizer_ to directly calculate the optimal \(M\). Moreover, the best split is discovered as a by-product of this step. We use \(M_{\text{o}}\) to denote the optimal \(M\). Because \(M_{\text{o}}\) is the smallest \(M\) that could produce a valid split, for every rule \(r_{0}\) and all rules \(r_{1},r_{2}\) produced by splitting \(r_{0}\), Eq. 11 holds:

\[\mathcal{S}(\mathcal{R},M)>\mathcal{S}(\mathcal{R}\setminus\{r_{0}\}\cup\{r_{1},r_{2}\},M),\forall M<M_{\text{o}} \tag{11}\]

**Boundary Stabilizer \(\mathbf{M}\).** To compute \(M_{\text{o}}\), we first define a boundary \(M\), denoted \(M_{\text{b}}\), which makes Equation 12 hold:

\[\mathcal{S}(\mathcal{R},M_{\text{b}})=\mathcal{S}(\mathcal{R}\setminus\{r_{0}\}\cup\{r_{1},r_{2}\},M_{\text{b}}) \tag{12}\]
By Eq. 12, setting \(M\) to \(M_{\text{b}}\) produces a split that does not change the objective. That is, under \(M_{\text{b}}\) no valid split will increase the objective, but there exists a split that does not decrease it; hence \(M_{\text{b}}\) is called the boundary \(M\). We then expand Eq. 12 as follows:

\[\frac{\sum_{r\in\mathcal{R}\setminus\{r_{0}\}}n_{r}E(r)+n_{r_{0}}E(r_{0})}{\sum_{r\in\mathcal{R}\setminus\{r_{0}\}}L(r)+L(r_{0})+M_{\text{b}}}=\frac{\sum_{r\in\mathcal{R}\setminus\{r_{0}\}}n_{r}E(r)+n_{r_{1}}E(r_{1})+n_{r_{2}}E(r_{2})}{\sum_{r\in\mathcal{R}\setminus\{r_{0}\}}L(r)+L(r_{1})+L(r_{2})+M_{\text{b}}} \tag{13}\]

We define \(A=\sum_{r\in\mathcal{R}\setminus\{r_{0}\}}n_{r}E(r)\), \(B=\sum_{r\in\mathcal{R}\setminus\{r_{0}\}}L(r)\), and \(A_{0}=\sum_{r\in\mathcal{R}}n_{r}E(r)\), \(B_{0}=\sum_{r\in\mathcal{R}}L(r)\); then Eq. 13 can be rewritten as:

\[\frac{A+n_{r_{0}}E(r_{0})}{B+L(r_{0})+M_{\text{b}}}=\frac{A+n_{r_{1}}E(r_{1})+n_{r_{2}}E(r_{2})}{B+L(r_{1})+L(r_{2})+M_{\text{b}}} \tag{14}\]

Then after some mathematical transformation, we obtain:

\[M_{\text{b}}(n_{r_{1}}E(r_{1})+n_{r_{2}}E(r_{2})-n_{r_{0}}E(r_{0}))=n_{r_{0}}E(r_{0})(L(r_{1})+L(r_{2}))+A(L(r_{1})+L(r_{2})-L(r_{0}))-B(n_{r_{1}}E(r_{1})+n_{r_{2}}E(r_{2})-n_{r_{0}}E(r_{0}))-(n_{r_{1}}E(r_{1})+n_{r_{2}}E(r_{2}))L(r_{0}) \tag{15}\]

Denoting \(\Delta L=L(r_{1})+L(r_{2})-L(r_{0})\) and \(\Delta E=n_{r_{1}}E(r_{1})+n_{r_{2}}E(r_{2})-n_{r_{0}}E(r_{0})\), we simplify Eq. 15 to:

\[M_{\text{b}}\Delta E=A\Delta L-B\Delta E+n_{r_{0}}E(r_{0})\Delta L-L(r_{0})\Delta E=(A+n_{r_{0}}E(r_{0}))\Delta L-(B+L(r_{0}))\Delta E\]

\[M_{\text{b}}=A_{0}\frac{\Delta L}{\Delta E}-B_{0},\quad\forall r_{0}\in\mathcal{R},\forall r_{1},r_{2} \tag{16}\]

For any \(M>M_{\text{b}}\), with the same \(r_{0}\) and \(r_{1},r_{2}\) as in Eq. 12, Eq. 15 becomes:

\[M(n_{r_{1}}E(r_{1})+n_{r_{2}}E(r_{2})-n_{r_{0}}E(r_{0}))>n_{r_{0}}E(r_{0})(L(r_{1})+L(r_{2}))+A(L(r_{1})+L(r_{2})-L(r_{0}))-B(n_{r_{1}}E(r_{1})+n_{r_{2}}E(r_{2})-n_{r_{0}}E(r_{0}))-(n_{r_{1}}E(r_{1})+n_{r_{2}}E(r_{2}))L(r_{0}) \tag{18}\]

Note that when expanding \(r_{0}\) to \(r_{1},r_{2}\), the entropy of the rules must be lower, which means \(n_{r_{1}}E(r_{1})+n_{r_{2}}E(r_{2})-n_{r_{0}}E(r_{0})>0\). Then, reversing the transformations that led from Eq. 13 to Eq. 15, we obtain Eq. 19 from Eq. 18:

\[\mathcal{S}(\mathcal{R},M)<\mathcal{S}(\mathcal{R}\setminus\{r_{0}\}\cup\{r_{1},r_{2}\},M),\forall M>M_{\text{b}} \tag{19}\]

That is, an \(M\) larger than \(M_{\text{b}}\) is guaranteed to produce a valid split, namely splitting rule \(r_{0}\) into \(r_{1}\) and \(r_{2}\).

**Calculating the Optimal \(\mathbf{M}\).** According to the Monotonicity theorem (Theorem 5.1), the smallest valid \(M\) is the best in maximizing the objective. Therefore, STAIR can directly calculate \(M_{\text{o}}\) using Eq. 20:

\[M_{\text{o}}>\min_{r_{0}\in\mathcal{R},\,r_{1},r_{2}}A_{0}\frac{\Delta L}{\Delta E}-B_{0} \tag{20}\]

That is, STAIR first finds a rule \(r_{0}\) in \(\mathcal{R}\) that, after being split into two rules, produces the smallest \(\frac{\Delta L}{\Delta E}\). STAIR then sets \(M_{\text{o}}\) to a value slightly larger than \(A_{0}\frac{\Delta L}{\Delta E}-B_{0}\). In this way, STAIR calculates the optimal \(M\) and finds the best split in one step, making its learning process effective yet efficient.
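Eq. 16 and Eq. 20 translate into a direct one-pass search: score every candidate split by \(\Delta L/\Delta E\) and set \(M\) just above the smallest boundary value. A sketch follows, assuming a `candidate_splits(rules)` generator (ours, not the paper's) that yields \((r_{0},r_{1},r_{2})\) triples and at least one split with \(\Delta E>0\):

```python
def optimal_M(rules, X, y, candidate_splits, eps=1e-9):
    """Return the optimal stabilizer M_o (Eq. 20) and the corresponding best split."""
    def weighted_purity(r):
        mask = np.array([r.covers(x) for x in X])
        return mask.sum() * (1.0 - entropy(y[mask]))  # n_r * E(r)

    A0 = sum(weighted_purity(r) for r in rules)  # A_0 in Eq. 16
    B0 = total_length(rules)                     # B_0 in Eq. 16
    best = None
    for r0, r1, r2 in candidate_splits(rules):
        dE = weighted_purity(r1) + weighted_purity(r2) - weighted_purity(r0)
        dL = r1.length + r2.length - r0.length
        if dE > 0 and (best is None or dL / dE < best[0]):
            best = (dL / dE, (r0, r1, r2))
    ratio, split = best
    return A0 * ratio - B0 + eps, split  # smallest M that admits a valid split
```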
### STAIR Learning Algorithm

Algorithm 1 shows the learning process of STAIR. It starts by initializing \(M\) as 0 (Line 1) and uses a min-heap structure \(H\) to keep all nodes. Similar to the decision tree algorithms, it initializes \(H\) to contain only the root node (Line 2). It then sets the rule set \(\mathcal{R}\) to contain only one rule \(r_{0}\) corresponding to the root node (Line 3). By default, rule \(r_{0}\) classifies all training samples as inliers. Then, based on Eq. 20, STAIR iteratively extracts a rule \(r_{0}\), calculates \(\frac{\Delta L}{\Delta E}\), updates \(M\), and splits \(r_{0}\) into two rules \(r_{1}\) and \(r_{2}\). After each split, it calculates \(\frac{\Delta L}{\Delta E}\) with respect to \(r_{1}\)/\(r_{2}\), refreshes the rule set \(\mathcal{R}\) and the min-heap \(H\), and updates \(A_{0}\) and \(B_{0}\) accordingly. The learning process terminates when the following conditions hold: (1) the accuracy reaches the requirement specified by the users; and (2) \(\mathcal{S}(\mathcal{R},M)\) does not increase over a few iterations.

**Complexity Analysis.** Compared to the classical decision tree algorithms, the additional overhead that STAIR introduces is negligible. In each iteration, STAIR extracts the rule \(r_{0}\) from the min-heap \(H\) and inserts the new rules into \(H\). Assume there are \(n\) nodes in the tree. Because the complexity of a min-heap's retrieve and insert operations is \(O(\log n)\), the additional complexity is \(O(n\log n)\).
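Putting the pieces together, Algorithm 1 reduces to roughly the following loop; this is a condensed sketch in which the min-heap bookkeeping is replaced by the linear scan inside `optimal_M`, and `rule_set_f1` (a standard F1 computation over the rules' predictions) and `candidate_splits` are assumed helpers:

```python
def stair_fit(X, y, F1_m, max_stall=5):
    """Condensed sketch of Algorithm 1: grow the rule set split by split."""
    rules = [Rule()]  # root rule: no clauses, classifies everything as inlier
    best_obj, stall, M = -float("inf"), 0, 0.0
    while stall < max_stall:
        M, (r0, r1, r2) = optimal_M(rules, X, y, candidate_splits)
        rules = [r for r in rules if r is not r0] + [r1, r2]
        obj = stair_objective(rules, X, y, M)
        stall = 0 if obj > best_obj else stall + 1
        best_obj = max(best_obj, obj)
        if rule_set_f1(rules, X, y) > F1_m:  # accuracy requirement met
            break
    return rules, M
```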
## 6. Localized STAIR: Data Partitioning & Rule Generation

As shown in our experiments (Sec. 7), although in general STAIR performs much better than the classical decision tree algorithms in producing summarization- and interpretation-friendly rules, its performance degrades quickly on high dimensional, highly complex data sets, for example on the _SpamBase_ data set which has 57 attributes. This is because a single decision tree with a small number of simple rules is not powerful enough to model the complex distribution properties underlying these data sets. To solve this problem, we propose a _localized_ STAIR approach, called L-STAIR. L-STAIR divides the whole data set into multiple partitions and learns a tree model for each partition. Taking data locality into consideration, L-STAIR produces data partitions where the data in each partition share similar statistical properties, while different partitions show distinct properties. L-STAIR is thus able to produce localized, simple rules that effectively summarize and explain each data partition. Next, we first introduce the objective of L-STAIR in Sec. 6.1 and then give the learning algorithm in Sec. 6.2.

### Joint Optimization of Data Partitioning and Rule Generation

Intuitively, L-STAIR could produce the localized rules in two disjoint steps: (1) partitioning the data using existing clustering algorithms such as k-means (Kipf and Welling, 2007) or density-based clustering (Kipf and Welling, 2007); (2) directly applying STAIR on each data partition one by one. However, this two-step solution is sub-optimal in satisfying our objective, namely producing a minimal number of interpretable rules that are as simple as possible to summarize the outlier detection results. This is because the problems of data partitioning and rule generation are highly dependent on each other. Clearly, rule generation relies on data partitioning: to generate localized rules, the data has to be partitioned first. However, on the other hand, without taking the objective of rule generation into consideration, a clustering algorithm does not necessarily yield data partitions that are easy to summarize with simple, thus interpretable, rules. Therefore, L-STAIR solves the two sub-problems of data partitioning and rule generation jointly. To achieve this goal, in addition to the summarization and interpretation-aware objective (Eq. 6) defined in Sec. 4.2, L-STAIR introduces a partitioning objective composed of an _error objective_ and a _locality objective_.

**Error Objective.** We denote the partitions of a dataset as \(\mathcal{C}=\{C_{i}\}_{i=1}^{n}\), where \(n\) is the number of partitions and \(C_{i}\) represents the \(i\)th partition. \(DT_{i}\) denotes the decision tree learned for a data partition \(C_{i}\). Decision tree \(DT_{i}\) produces a prediction for each object \(x\) in data partition \(C_{i}\), denoted \(DT_{i}(x)\). Next, in Eq. 21 we define an _error metric_ to measure how well a decision tree \(DT_{i}\) fits the data in \(C_{i}\):

\[\sum_{x\in C_{i}}||DT_{i}(x)-y||_{2}^{2} \tag{21}\]

To ensure the classification accuracy, L-STAIR targets minimizing this error metric with respect to all data partitions, which yields the **error objective**:

\[\min_{\mathcal{C}}\sum_{C_{i}\in\mathcal{C}}\sum_{x\in C_{i}}||DT_{i}(x)-y||_{2}^{2} \tag{22}\]

where \(y\) indicates the ground truth label of object \(x\).

**Locality Objective.** Although using the above error objective to learn the data partitioning and the corresponding decision trees will effectively minimize the overall classification error with respect to the whole dataset, the data partitions produced in this way do not preserve the locality of each data partition. Potentially one rule could cover a set of data objects that are scattered across the whole data space and thus is not amenable for humans to understand. As shown in Figure 3, when locality is preserved, the rules are constrained within each cluster. This means there is no overlapping between the rules, and the generated rules are easier to understand.

Figure 3. The intuition of locality: The black points are inliers and the red ones are outliers. The backgrounds with different colors refer to three clusters, while the straight lines in each cluster represent the rules.

Figure 2. The Localized STAIR

Therefore, to ensure the data locality of each partition, we introduce the **locality objective**:

\[\min_{\mathcal{C}}\sum_{C_{i}\in\mathcal{C}}\sum_{x\in C_{i}}||x-center(C_{i})||_{2}^{2} \tag{23}\]

Optimizing the locality objective enforces the objects within each partition to be close to each other, similar to the objective of clustering algorithms such as k-means.

**The Final L-STAIR Objective.** Combining Eq. 23 and Eq. 22 leads to the final partitioning objective:

\[\min_{\mathcal{C}}\mathcal{L}_{L-STAIR}(\mathcal{C})=\sum_{C_{i}\in\mathcal{C}}\sum_{x\in C_{i}}||DT_{i}(x)-y||_{2}^{2}+\lambda||x-center(C_{i})||_{2}^{2} \tag{24}\]

Eq. 24 uses \(\lambda\) (\(0<\lambda<1\)) to balance the two objectives. Setting \(\lambda\) to a small value gives the error objective higher priority.
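To make Eq. 24 concrete, the following sketch evaluates the joint objective for a given partitioning; `trees[i]` stands in for \(DT_{i}\) with an sklearn-style `predict`, and all names here are ours:

```python
def lstair_objective(partitions, trees, centers, X, y, lam=0.1):
    """Eq. 24: per-point tree error plus lambda-weighted distance to the center."""
    total = 0.0
    for i, idx in enumerate(partitions):  # idx: indices of the points in C_i
        preds = trees[i].predict(X[idx])                   # DT_i(x)
        total += np.sum((preds - y[idx]) ** 2)             # error objective (Eq. 22)
        total += lam * np.sum((X[idx] - centers[i]) ** 2)  # locality objective (Eq. 23)
    return total
```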
Next, it iteratively updates the partitions and then rebuilds the decision trees accordingly. During this process, L-STAIR dynamically modifies the number of partitions based on the classification accuracy of each individual decision tree, making the number of partitions self-adaptive to the data, as further illustrated in Sec. 6.3. Similar to our original STAIR approach, the learning process of L-STAIR terminates once the overall classification accuracy with respect to the whole data set is above the threshold \(F1_{m}\) and the optimization objective has not improved for a few iterations.

### Dynamically Adjusting the Number of Partitions

As shown in Algorithm 2, L-STAIR uses the hyperparameter \(n\) to specify the number of partitions and initializes each data partition accordingly. It is well known that in many clustering algorithms such as k-means the number of clusters is a critical hyper-parameter which determines the quality of the data partitioning, and it is hard to tune in many cases (Kang et al., 2018). L-STAIR does not rely on an appropriate \(n\) to achieve good performance, because it allows users to set a small \(n\) initially and then dynamically adjusts it during the learning process.

```
Input: Training data \(X\), cluster number \(n\) for initialization, \(F_{1}\) score threshold \(F_{1m}\)
Output: Clusters \(\mathcal{C}\) and a decision tree for each cluster \(DT_{i}\), \(i\in\{1,\cdots,|\mathcal{C}|\}\)
1  Initialize \(n\) clusters using k-means;
2  while True do
3      Build \(n\) new decision trees \(DT_{i}\), \(i=\{1,\cdots,n\}\), using the algorithm introduced in Section 5.3, one for each cluster;
4      Update the clusters \(\mathcal{C}\) according to the objective in Eq. (24);
5      Remove empty clusters;
6      Calculate the \(F_{1}\) score of the predictions made by the decision trees (the rules), and denote it as \(f_{1}\);
7      if \(f_{1}>F_{1m}\) then
8          break;
9      Check the \(F_{1}\) score within each cluster, and split each cluster whose \(F_{1}\) score is too small into \(n\) new clusters using the k-means algorithm.
   end while
```

**Algorithm 2** L-STAIR learning algorithm

**Producing New Partitions.** L-STAIR produces new partitions by splitting partitions that are too complicated to summarize and explain with simple rules. A partition is deemed too complicated when its \(F1\) score is not good enough, specifically, lower than \(F1_{m}\). This indicates that simple rules cannot fully explain the partition. After identifying a complicated partition, L-STAIR uses k-means again to split it into two partitions, then builds one decision tree for each new partition.

**Removing Partitions.** L-STAIR identifies redundant partitions as those bearing such large similarity to others that merging them into other partitions does not degrade the partitioning objective. After identifying redundant partitions, L-STAIR discards them and reassigns their data points to other partitions.

Our experiments (Table 3, Sec. 7.5) on 10 datasets show that, by starting with a small \(n\), L-STAIR is always able to produce good results.

### Convergence Analysis

We show that L-STAIR converges by establishing that no step in Algorithm 2 makes the objective larger. In Alg. 2 there are four steps which could potentially change the objective. We analyze each step one by one.
**Step 1** (Line 3): Given the decision tree corresponding to a specific partition, if its \(F_1\) score is below \(F1_{m}\), L-STAIR replaces it with a new tree that has a higher \(F_1\) score. The error objective (Eq. 22) is therefore (approximately) improved as well. Since the locality objective (Eq. 23) is not affected by this step, the whole objective (Eq. 24) also improves.

**Step 2** (Line 4): Assume that, when updating the partitioning, L-STAIR reassigns a data point \(x_{j}\) which used to belong to partition \(P_{j}\) to partition \(P_{j}^{\prime}\) according to the following formula:

\[P_{j}^{\prime}=\arg\min_{k}\mathcal{L}_{j}(DT_{k},C_{k}),\quad\text{where}\quad\mathcal{L}_{j}(DT_{k},C_{k})=||DT_{k}(x_{j})-y_{j}||_{2}^{2}+\lambda||x_{j}-center(C_{k})||_{2}^{2} \tag{25}\]

Figure 4. The correlation between our algorithm and the K-means algorithm

This leads to:

\[\mathcal{L}_{j}(DT_{P^{\prime}_{j}},C_{P^{\prime}_{j}})\leq\mathcal{L}_{j}(DT_{P_{j}},C_{P_{j}}) \tag{26}\]

Denoting the existing partitioning as \(\mathcal{C}\) and the new partitioning as \(\mathcal{C}^{\prime}\), and summing over all data points \(x_j\) (with a slight abuse of notation, \(n\) here denotes the number of data points), Eq. 24 gives:

\[\mathcal{L}_{L\text{-}STAIR}(\mathcal{C}^{\prime})=\sum_{j=1}^{n}\left[||DT_{P^{\prime}_{j}}(x_{j})-y_{j}||_{2}^{2}+\lambda||x_{j}-center(C_{P^{\prime}_{j}})||_{2}^{2}\right]=\sum_{j=1}^{n}\mathcal{L}_{j}(DT_{P^{\prime}_{j}},C_{P^{\prime}_{j}})\leq\sum_{j=1}^{n}\mathcal{L}_{j}(DT_{P_{j}},C_{P_{j}})=\mathcal{L}_{L\text{-}STAIR}(\mathcal{C}) \tag{27}\]

Thus, this step makes the objective \(\mathcal{L}_{L\text{-}STAIR}\) smaller.

**Step 3** (Line 5): By Eq. (24), an empty set contributes nothing to the objective. Therefore, directly removing empty partitions has no impact on the objective.

**Step 4** (Line 9): Assume L-STAIR splits partition \(C_{s}\) into \(n\) new partitions \(C_{si},i\in\{1,\cdots,n\}\) and builds \(n\) decision trees. Denoting the decision trees w.r.t. \(C_{s}\) and \(C_{si}\) as \(DT_{s}\) and \(DT_{si}\), we have:

\[\sum_{x,y\in C_{s}}||DT_{s}(x)-y||_{2}^{2}\geq\sum_{i=1}^{n}\sum_{x,y\in C_{si}}||DT_{si}(x)-y||_{2}^{2}\]
\[\sum_{x\in C_{s}}||x-center(C_{s})||_{2}^{2}\geq\sum_{i=1}^{n}\sum_{x\in C_{si}}||x-center(C_{si})||_{2}^{2} \tag{28}\]

Because no step increases either the error objective or the locality objective, L-STAIR monotonically decreases the final L-STAIR objective (Eq. 24), which is bounded below by 0, and thus converges eventually.

## 7. Experiments

Our experimental study aims to answer the following questions:

* **Q1**: How do STAIR and L-STAIR compare against other methods in total rule length given an \(F_{1}\) threshold?
* **Q2**: How do STAIR and L-STAIR compare against other methods in \(F_{1}\) score when producing rules with similar complexity?
* **Q3**: How do the parameters \(L_{m}\) and \(F_{1m}\) affect the performance of STAIR?
* **Q4**: How does the number of partitions \(n\) affect the performance of L-STAIR?
* **Q5**: How good is L-STAIR at preserving the locality of the data?
* **Q6**: How does STAIR dynamically adjust the value of the stabilizer \(M\) introduced in our summarization and interpretation-aware optimization objective?
* **Q7**: How does STAIR perform in multi-class classification?

### Experimental Settings

**Datasets.** We evaluate the effectiveness of STAIR and L-STAIR on ten benchmark outlier detection datasets. Table 1 summarizes their key statistics.

**Hardware Settings.** We implement our algorithms in Python 3.7.
We use the decision tree algorithms in scikit-learn and implement STAIR with numpy. We train all models on an AMD Ryzen Threadripper 3960X 24-core processor with 136GB of RAM.

**Baselines.** We compare against two decision-tree methods:

* **ID3** [(28)]: The classic decision tree algorithm. To find the simplest decision tree that satisfies the accuracy threshold \(F_{1m}\), we start with a small tree (depth 3) and iteratively increase its depth until the obtained tree yields an \(F1\) score larger than \(F1_{m}\).
* **CART** [(6)]: CART uses a post-processing step to prune a learned decision tree. The goal is to minimize the complexity of the decision tree while still preserving its accuracy. We first use ID3 to build a decision tree that is as accurate as possible and then prune it until it is right above the \(F_1\) score threshold.

### Comparison Against Baselines (Q1): Total Rule Length

In this set of experiments, we measure the total length of the rules produced by each algorithm when the algorithms produce trees with similar \(F_{1}\) scores. We set the maximal length of the rules \(L_{m}\) to 10 and the \(F_{1}\) score threshold \(F1_{m}\) to 0.8. Because ID3 and CART do not use the \(F_{1}\) score threshold in their algorithms, we tune their hyper-parameters to produce the simplest tree with an \(F_{1}\) score slightly higher than 0.8. This ensures that all algorithms have similar \(F_{1}\) scores. L-STAIR automatically determines the number of data partitions, with the initial partition number picked from {2, 4, 8}. We set the maximal number of iterations to 10.

\begin{table}
\begin{tabular}{c|c c c} \hline \hline Dataset & \# Instances & Outlier Fract. & \# of Dims \\ \hline PageBlock & 5473 & 10\% & 10 \\ Pendigits & 6870 & 2.3\% & 16 \\ Shuttle & 49097 & 7\% & 9 \\ Pima & 768 & 35\% & 8 \\ Mammography & 11873 & 2.3\% & 6 \\ Satimage-2 & 5803 & 1.2\% & 36 \\ Satellite & 6435 & 32\% & 36 \\ SpamBase & 4601 & 40\% & 57 \\ Cover & 286048 & 0.9\% & 10 \\ Thursday-01-03 & 33110 & 28\% & 68 \\ \hline \hline \end{tabular}
\end{table}
Table 1. Statistics of the 10 Datasets.

\begin{table}
\begin{tabular}{c|c|c|c|c} \hline \hline Dataset & ID3 & CART & STAIR & L-STAIR \\ \hline PageBlock & 97 & 88 & 50 & **25** \\ Pendigits & 290 & 328 & 187 & **60** \\ Shuttle & 1520 & 863 & 697 & **125** \\ Pima & 20 & 12 & 12 & **10** \\ Mammography & 79 & 65 & 66 & **24** \\ Satimage-2 & 151 & 117 & 93 & **38** \\ Satellite & 1263 & 471 & 442 & **70** \\ SpamBase & 1546 & 1043 & 1017 & **150** \\ Cover & 6616 & 4869 & 4657 & **402** \\ Thursday-01-03 & 4032 & 1393 & 957 & **440** \\ \hline \hline \end{tabular}
\end{table}
Table 2. Total rule length under similar \(F_{1}\) score (Q1).

From the results shown in Table 2, we draw the following conclusions: (1) Compared to ID3 and CART, STAIR is able to produce much simpler rules which are amenable for humans to evaluate, significantly reducing the total length of the rules, by up to 76.3% as shown in the results on the dataset _Thursday-01-03_. This is because the summarization and interpretation-aware optimization objective (Eq. 7) of STAIR simultaneously minimizes the complexity of the tree and maximizes the classification accuracy; (2) The performance of STAIR on the SpamBase dataset is not satisfying, potentially due to its high dimensionality: SpamBase has 57 attributes.
The whole dataset is thus likely too complex to be summarized by a single small tree; (3) L-STAIR, which partitions the data and produces one tree for each data partition, solves the problem mentioned in (2) and outperforms the basic STAIR by up to 91.37%, as shown in the results on the dataset _Cover_.

### Comparison Against Baselines (Q2): \(F_{1}\) Score

In this section, we evaluate the \(F_{1}\) score of each algorithm when the algorithms produce rule sets with similar total length. For each dataset, we vary the total rule length by selecting 10 numbers within a range from 0 to the total rule length of the ID3 algorithm in Table 2. For instance, in Table 2 the total rule length of ID3 on the dataset Pendigits is 290; we thus select ten numbers, 29, 58, \(\cdots\), 290, as the candidate total lengths. Then, given one total length \(l\), we run ID3, CART, and STAIR in the following way to obtain the corresponding \(F_{1}\) score: (1) For ID3, we gradually increase the depth of the tree until it generates a rule set with a total length slightly higher than \(l\); (2) For CART, we first build a decision tree that is as accurate as possible and then prune it until it has a length close to \(l\); (3) For STAIR, we update the breaking condition of Algorithm 1 such that it terminates after reaching the length \(l\). We run this experiment on the Pendigits and Thursday-01-03 datasets. As shown in Figure 5, STAIR is more accurate than ID3 and CART when they produce rule sets with similar total length, indicating that given the same budget on the total rule length, STAIR produces rules with higher accuracy.

### Effect of Hyper-parameters \(L_{m}\) and \(F1_{m}\) (Q3)

In this set of experiments, we first study how the maximal length \(L_{m}\) affects STAIR and L-STAIR. We fix the \(F_1\) score threshold \(F1_{m}\) to 80%, vary \(L_{m}\) from 2 to 12, and measure how the total rule length changes. Note that in some cases when \(L_{m}\) is too small, e.g., 2, the learned tree cannot meet the \(F_1\) score requirement. As shown in Figure 6, as \(L_{m}\) gets larger, the total rule length gets smaller. This is because with a looser constraint, STAIR has a larger search space and hence a better chance of finding a simple tree. When STAIR gets better, L-STAIR also gets better. Moreover, we observe that L-STAIR reaches its minimal total rule length at a smaller \(L_{m}\). This shows the power and benefits of localization.

Next, we investigate how the \(F_1\) score threshold \(F1_{m}\) affects STAIR and L-STAIR. We fix \(L_{m}\) to 10 and vary \(F1_{m}\) from 0.70 to 0.95. As shown in Figure 7, in most of the cases STAIR outperforms ID3 and CART, while L-STAIR consistently outperforms all other methods in all scenarios, by up to 94.0% on the _Cover_ dataset when the threshold is set to 0.95. The larger the \(F1_{m}\) threshold is, the more L-STAIR outperforms the other baselines. This is because partitioning allows L-STAIR to obtain a set of localized trees, each of which produces highly accurate classification results on the corresponding data subset.

### Number of partitions in L-STAIR (Q4)

We study how the initial number of partitions \(n\) affects L-STAIR. In this set of experiments, \(n\) is selected from {2, 4, 8}. In addition to the total rule length, we also report the number of rules in the final rule set.
From the results shown in Table 3, we have the following observations: (1) Compared to the results in Table 2, no matter what \(n\) L-STAIR starts with, it consistently outperforms the other methods; (2) L-STAIR always performs well when starting with a small \(n\) compared to other initial \(n\) values, indicating that \(n\) is not a hyper-parameter that requires careful tuning.

### L-STAIR: Locality-Preserving (Q5)

Next, we evaluate whether the partitioning of L-STAIR is able to preserve the locality of the data. We show this by visualizing its data partitioning. Before visualization, we apply t-SNE to embed the data into 2D. We plot different partitions in different colors. Due to space limits, we only plot the partitioning of 6 datasets. As shown in Figure 8, on all datasets the partitioning of L-STAIR preserves the locality. This in turn supports the interpretability of each localized tree.

### Dynamically Adjusting the Value of M (Q6)

In this set of experiments we show how STAIR automatically adjusts the value of the stabilizer \(M\) introduced in our summarization and interpretation-aware optimization objective (Sec. 4.2). To better understand the influence of a dynamically adjusted \(M\), we use the number of rules produced in the training process as the reference variable, corresponding to the x-axis. From Figure 9, we observe: (1) The value of \(M\) continuously increases during the training process to split nodes and thus produce valid rules; (2) The values of \(M\) differ across datasets, indicating that it is hard to obtain an appropriate \(M\) by manual tuning.

### Multi-class classification Problems (Q7)

We use this set of experiments to show that STAIR and L-STAIR are generally applicable to the more complicated multi-class classification problems.

Figure 5. \(F_{1}\) score with varying total rule lengths (Q2).

We use one of the most popular classification datasets, _Wine Quality_ [(11)], which contains 4898 instances and 12 attributes. We regard the attribute "score" as the target, which corresponds to integers within the range from 0 to 10, and run a classification task on it. STAIR and L-STAIR can easily be extended to multi-class settings by replacing the \(F_1\) score with the _classification accuracy_.

Footnote 1: [https://archive.ics.uci.edu/ml/datasets/wine-quality](https://archive.ics.uci.edu/ml/datasets/wine-quality)

As shown in Table 4, we report the total rule length, as in the outlier detection scenario. We observe from the results: (1) L-STAIR and STAIR significantly outperform ID3 and CART, by up to 70.0%; (2) As shown in Table 5, the initial number of partitions \(n\) makes little difference to the resulting lengths, indicating that L-STAIR is not sensitive to the hyper-parameter \(n\); (3) As illustrated in Figures 10(a) and 10(b), STAIR and L-STAIR always outperform the baselines no matter how the accuracy threshold and the maximal rule length threshold vary; (4) Figure 10(c) visualizes the partitions produced by L-STAIR; the locality of the data partitions is well-preserved; (5) As shown in Figure 10(d), the dynamic update of the value of the stabilizer \(M\) is important in splitting the nodes and producing valid rules, similar to the case of outlier detection.

Figure 6. The effects of the maximal length \(L_{m}\) on the total rule length (Q3).

Figure 7. The effect of the threshold \(F1_{m}\) on all methods (Q3).

## 8. Related Work

**Outlier Summarization and Interpretation.** To the best of our knowledge, the problem of summarizing and interpreting outlier
detection results has not been studied. Focusing on a special type of outliers, Scorpion (Srivastava et al., 2017) produces explanations for outliers in aggregation queries by looking at the provenance of each outlier. If removing some objects from the aggregation significantly reduces the abnormality of a given outlier, these objects are considered the cause of this outlier. Similar to Scorpion, Cape (Carpes et al., 2019) targets explaining the outliers in aggregation queries, but rather than relying on provenance, Cape uses the objects that counterbalance the outliers as the explanation. More specifically, if including some additional data objects in the aggregation query could neutralize an outlier, these objects effectively explain the outlier. Neither work tackles the problem of summarizing outliers. Macrobase (Macrobase, 2018) explains outliers by correlating them to external attributes such as location or occurrence time using association rule mining. These external attributes are not used to detect anomalies. In many applications, however, such external attributes do not exist. Further, Macrobase only explains the outliers detected by its default density-based outlier detector customized to streaming data and does not easily generalize to other outlier detection methods.

**Interpretable AI.** Some works (Kal generally applicable to different types of data including numerical, categorical, and text data.

**Decision Tree Algorithms.** Because a simple tree tends to avoid overfitting and to generalize better, CART (CART, 2017) prunes a learned decision tree in a post-processing step to lift its performance: a node is removed from the tree if the cross-validation error rate does not increase. However, unlike our STAIR, which treats producing simple, and hence human-interpretable, rules as a first-class citizen in its objective, the post-processing of CART is not very effective in minimizing the complexity of the rules, as confirmed in our experiments. Other decision tree algorithms (CART, 2017; DBL, 2018; DBL, 2018) mostly suffer from the same problem: the simplicity of the rules is not considered to be as important as the classification accuracy.

## 9. Conclusion

This work targets reducing the human effort in evaluating outlier detection results. To achieve this goal, we propose STAIR to learn a compact set of human-understandable _rules_ which summarizes anomaly detection results into groups and explains why each group of objects is considered to be abnormal. It features an outlier summarization and interpretation-aware optimization objective, a learning algorithm which optimally maximizes this objective in each iteration, and a partitioning-driven STAIR approach which simultaneously divides the data and produces localized rules for each data partition. Experimental results show that STAIR effectively summarizes the outlier detection results with human-interpretable rules, where the complexity of the rules is much lower than that of the rules produced by other decision tree methods.

Figure 10. Multi-class dataset _Wine Quality_: ablation study

\begin{table}
\begin{tabular}{c|c c c|c c c|c c c} \hline \hline & \multicolumn{3}{c|}{L-STAIR (\(n\)=2)} & \multicolumn{3}{c|}{L-STAIR (\(n\)=4)} & \multicolumn{3}{c}{L-STAIR (\(n\)=8)} \\ \hline Dataset & Length & \# of R & \# of C & Length & \# of R & \# of C & Length & \# of R & \# of C \\ \hline Wine Quality & 1642 & 614 & 11 & 1692 & 620 & 13 & 1538 & 635 & 17 \\ \hline \hline \end{tabular}
\end{table}
Table 5.
Multi-class classification: the number of partitions \(n\) in L-STAIR.

Figure 9. The dynamic \(M\) during training (Q6).
2305.02389
Fast Generalized Functional Principal Components Analysis
We propose a new fast generalized functional principal components analysis (fast-GFPCA) algorithm for dimension reduction of non-Gaussian functional data. The method consists of: (1) binning the data within the functional domain; (2) fitting local random intercept generalized linear mixed models in every bin to obtain the initial estimates of the person-specific functional linear predictors; (3) using fast functional principal component analysis to smooth the linear predictors and obtain their eigenfunctions; and (4) estimating the global model conditional on the eigenfunctions of the linear predictors. An extensive simulation study shows that fast-GFPCA performs as well or better than existing state-of-the-art approaches, it is orders of magnitude faster than existing general purpose GFPCA methods, and scales up well with both the number of observed curves and observations per curve. Methods were motivated by and applied to a study of active/inactive physical activity profiles obtained from wearable accelerometers in the NHANES 2011-2014 study. The method can be implemented by any user familiar with mixed model software, though the R package fastGFPCA is provided for convenience.
Andrew Leroux, Ciprian Crainiceanu, Julia Wrobel
2023-05-03T19:16:52Z
http://arxiv.org/abs/2305.02389v2
# Fast Generalized Functional Principal Components Analysis

###### Abstract

We propose a new fast generalized functional principal components analysis (fast-GFPCA) algorithm for dimension reduction of non-Gaussian functional data. The method consists of: (1) binning the data within the functional domain; (2) fitting local random intercept generalized linear mixed models in every bin to obtain the initial estimates of the person-specific functional linear predictors; (3) using fast functional principal component analysis to smooth the linear predictors and obtain their eigenfunctions; and (4) estimating the global model conditional on the eigenfunctions of the linear predictors. An extensive simulation study shows that fast-GFPCA performs as well or better than existing state-of-the-art approaches, it is orders of magnitude faster than existing general purpose GFPCA methods, and it scales up well with both the number of observed curves and the number of observations per curve. Methods were motivated by and applied to a study of active/inactive physical activity profiles obtained from wearable accelerometers in the NHANES 2011-2014 study. The method can be implemented by any user familiar with mixed model software, though the R package fastGFPCA is provided for convenience.

_Keywords:_ functional data, FPCA, generalized FPCA

## 1 Introduction

Functional data analysis (FDA) Ramsay and Silverman (2005) provides a wide range of analysis techniques for data with complex structures such as time series or images. These data are often high dimensional (many repeated observations per function) and exhibit complex and non-stationary patterns of variation. Functional principal components analysis (FPCA) Jones and Rice (1992); Rice and Silverman (1991); Staniswalis and Lee (1998), the analog of principal components analysis (PCA) Hotelling (1933); Jolliffe (1982); Pearson (1901) for functional data, is a first-line dimension reduction technique for the analysis of such data. Key differences between FPCA and PCA are that functional data: (1) may be observed with substantial measurement error; (2) is expressed in the same unit of measurement at every point in the domain; and (3) has a functional domain that is ordered and has a natural distance (e.g., time ordering and distance).

There is a rich literature on FPCA focusing on both estimation and inference. Broadly, FPCA methods for Gaussian data involve either smoothing the covariance function Staniswalis and Lee (1998); Xiao et al. (2016); Yao et al. (2003) or estimation via model-based approaches with explicit likelihood assumptions van der Linde (2008); James et al. (2000). Extensions to sparse or irregularly observed data Peng and Paul (2009); Yao et al. (2005); Xiao et al. (2018), multivariate Happ and Greven (2015); Chiou et al. (2014); Li and Xiao (2021), and multilevel functional Di et al. (2009); Cui et al. (2022) data exist. Although the FPCA literature is quite extensive, there are few high-quality, open-source software implementations. The covariance smoothing FACE approach of Xiao et al. (2021, 2016), implemented in the refund package Goldsmith et al. (2020) in R, is by far the fastest approach for estimating FPCA for regularly observed data. The likelihood-based methods require substantially longer computation times for large data. Here we focus on extensions of FPCA methods to non-Gaussian outcomes (e.g., binary or count data), which we refer to as Generalized FPCA (GFPCA).
More precisely, we focus on methods that decompose the variability of the person-specific latent functional linear predictors along their main directions of variation. In contrast to the relatively large number of published papers on FPCA, there are far fewer GFPCA papers. A few exceptions are Hall et al. (2008); van der Linde (2009); Gertheiss et al. (2017); Weishampel et al. (2023) for single-level and Chen et al. (2013); Goldsmith et al. (2015) for multilevel GFPCA. Unfortunately, the software implementation problem is even more acute for GFPCA than for FPCA. Indeed, most published methods either lack accompanying software or are extremely slow for large data. Moreover, current methods require pre-specifying both the number of principal components and the number of basis functions used to estimate the principal components. Assessing sensitivity to these key input parameters is critical when applying GFPCA to a new dataset.

The ever increasing number of studies that collect non-Gaussian functional data of increasing size and complexity requires methods that are fast and scalable. Consider, for example, the minute-level physical activity data obtained from wearable accelerometers deployed in large epidemiologic studies, such as the National Health and Nutrition Examination Survey (NHANES) and the UK Biobank Doherty et al. (2017). In many applications one is interested in the pattern of being active (coded as 1) or inactive (coded as 0) at every minute of the day. Thus, the data at the study participant level is a function observed at 1440 time points (the number of minutes in a day), where the value of the function is either active or inactive at every time point. NHANES contains such data for tens of thousands of study participants, and the UK Biobank for close to \(100,000\) study participants. Our goal is to provide a scalable GFPCA method that aids in the interpretation and analysis of large-scale non-Gaussian functional data from the NHANES accelerometry study by extracting orthogonal directions of variation in the linear predictor space of these functions.

Very recently, Weishampel et al. (2023) proposed an approach which allows for estimation of GFPCA for general exponential family outcomes. Their approach is fast and could be viewed as an alternative, but it differs in key ways from fast-GFPCA. Importantly, in our data application, the approach of Weishampel et al. (2023) has the potential to lead to infinite bias in estimated latent functions. We discuss this point further in Section 2.5, but do not focus on their approach because it was not appropriate for our data application. Aside from Weishampel et al. (2023), general purpose software for GFPCA that accommodates multiple types of exponential family data is slow. However, there is also good news for specific types of outcomes. For example, Wrobel et al. (2019) developed a very fast and efficient binary FPCA procedure that uses an EM algorithm to optimize a variational approximation to the binomial FPCA likelihood. The paper is accompanied by the registr package Wrobel (2018); Wrobel and Bauer (2021). To our knowledge this is the only publicly available GFPCA implementation that is fast and viable for large datasets, and thus we compare our fast-GFPCA approach to the registr::bfpca() function. In addition, for simulation scenarios with smaller data sets, we compare fast-GFPCA to the two-step GFPCA implementation described in Gertheiss et al. (2017), which is available in the registr package as the registr::gfpca_twoStep() function.
This two-step approach can be used to model several exponential family outcomes but is prohibitively slow for large datasets. The current work adds substantially to the literature by providing a general approach to GFPCA which is readily generalizable to functional regression models of interest for which there are currently no fast implementations for large-scale data. Specifically, we propose a new method, fast-GFPCA, with an entirely different philosophy and implementation strategy than most other approaches. The advantages of fast-GFPCA are that: (1) it can be used for any type of non-Gaussian outcome, not just binary; (2) it is orders of magnitude faster than most other all-purpose GFPCA approaches; (3) the method readily handles missing data; (4) it can easily be extended to account for covariates; and (5) it can be generalized to multilevel, longitudinal, or structured functional data. We will briefly discuss these extensions, but leave the details for future work.

The remainder of this manuscript is organized as follows. In Section 2 we present the fast GFPCA (fast-GFPCA) approach. Next, in Section 3, we apply fast-GFPCA to active/inactive profiles obtained from wearable accelerometers using data from the National Health and Nutrition Examination Survey (NHANES) 2011-2014 waves. We then illustrate the utility of the fast-GFPCA approach in a simulation study in Section 4. We conclude with a discussion in Section 5.

## 2 Methods

The observed data structure is of the type \(\{s_{j},Z_{i}(s_{j})\}\), where \(Z_{i}(s_{j})\) is a non-Gaussian functional observation for subject \(i\in 1,\ldots,N\) at the point \(s_{j}\in S\) for \(j\in 1,\ldots,J\). We assume that these \(\{s_{j},Z_{i}(s_{j})\}\) pairs are discrete realizations from a continuous process \(\{Z_{i}(s):s\in S\}\) such that: (1) \(g[E\{Z_{i}(s)\}]=\beta(s)+b_{i}(s)\), where \(g(\cdot)\) is an appropriate link function, \(\beta(s)\) is the population mean function in the linear predictor space, and \(b_{i}(s)\) is the individual deviation from the population mean in the linear predictor space; (2) \(b_{i}(s)\sim\mbox{GP}(0,K_{b})\) is a zero mean Gaussian process with covariance operator \(K_{b}\). Our goals are to decompose the variability of the latent process \(b_{i}(s)\) along its main directions of variation (obtain the FPCA decomposition) and to estimate \(b_{i}(s)\) conditional on these directions of variation. Even though the \(b_{i}(s)\) are not directly observed, we can use the Karhunen-Loeve (KL) expansion \(b_{i}(s)=\sum_{k=1}^{\infty}\xi_{ik}\phi_{k}(s)\), where \(\phi_{k}:S\rightarrow\mathbb{R}\) are orthonormal eigenfunctions such that \(\int_{S}\phi_{k}^{2}(s)ds=1\), \(\lambda_{1}\geq\lambda_{2}\geq\ldots\) are the eigenvalues, and \(\xi_{ik}\sim N(0,\lambda_{k})\) are mutually independent subject-specific scores over study participants, \(i\), and eigenvalues, \(k\). Together this leads to the GFPCA model,

\[g\left(E[Z_{i}(s)]\right)=\eta_{i}(s)=\beta_{0}(s)+\sum_{k=1}^{\infty}\xi_{ik}\phi_{k}(s). \tag{1}\]

This is very similar to the classical FPCA model, with the exception that the noise is not necessarily Gaussian.
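To make model (1) concrete, the following minimal Python sketch simulates binary functional data from a truncated version of the expansion with \(K=2\) components and a logit link; the mean, eigenfunctions, and eigenvalues below are illustrative choices only, not the simulation design used later in the paper.

```
import numpy as np

rng = np.random.default_rng(1)
N, J, K = 100, 200, 2                 # curves, grid points, components
s = np.linspace(0, 1, J)

beta0 = -0.5 + np.sin(2 * np.pi * s)  # illustrative mean on the link scale
# Orthonormal-on-[0,1] basis functions (illustrative choice)
phi = np.vstack([np.sqrt(2) * np.sin(2 * np.pi * s),
                 np.sqrt(2) * np.cos(2 * np.pi * s)])
lam = np.array([1.0, 0.5])            # eigenvalues lambda_k

xi = rng.normal(0.0, np.sqrt(lam), size=(N, K))  # scores xi_ik ~ N(0, lambda_k)
eta = beta0 + xi @ phi                # linear predictors eta_i(s), an N x J array
p = 1.0 / (1.0 + np.exp(-eta))        # inverse logit link
Z = rng.binomial(1, p)                # observed binary curves Z_i(s_j)
```

Data of exactly this form, an \(N\times J\) binary matrix with a smooth latent linear predictor, is the input that the fast-GFPCA algorithm of the next subsection is designed to decompose.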
### Fast GFPCA

We propose the following fast-GFPCA algorithm to conduct FPCA on latent processes when the observed data are non-Gaussian: (1) bin the data along the observed functional domain \(\mathcal{S}=1,\ldots,J\) into \(L\) bins, which may be overlapping; (2) estimate separate local GLMMs with a random intercept in each bin to obtain subject-specific estimates on the linear predictor scale; (3) apply FPCA to the subject-specific estimates obtained from step (2); (4) re-estimate GFPCA using the estimated eigenfunctions obtained from step (3) at the resolution of the original data. Each of these steps is straightforward, though we provide the R package fastGFPCA for convenience, described in Section 2.3.

### Estimation Algorithm

We now provide more details on each of the four steps of the fast-GFPCA algorithm; the algorithmic presentation is provided in Section S-1 of the supplementary material.

Step 1: Choose how to bin the data. The choice of the number of bins (\(L\)) and the bin widths (\(w_{l}\)) will be informed by identifiability and the assumed complexity of the underlying latent process. Specifically, suppose we choose \(L\) bins, where \(m_{l}\) is the midpoint of the \(l\)th bin, \(l=1,\ldots,L\). We use symmetric bins, except on the boundary of the domain \(S\). For data observed on a regularly spaced grid over the domain, the \(l^{\text{th}}\) bin contains the data at domain values \(\mathcal{S}_{l}=\{s_{m_{l}-w_{l}/2},\ldots,s_{m_{l}},\ldots,s_{m_{l}+w_{l}/2}\}\), resulting in \(w_{l}+1\) points. The binned data are then \([\{Z_{i}(s_{j}),l\},1\leq i\leq N,j\in\mathcal{S}_{l},1\leq l\leq L]\). If the data are cyclic, as in our data application, the bins may cross the boundary. For example, with minute-level accelerometry data, if we let \(m_{1}=1\) (activity 00:00-00:01) and \(w_{l}=6\), then \(\mathcal{S}_{1}=\{1438,1439,1440,1,2,3,4\}\) (activity 23:57-00:05). When the data are non-cyclic we recommend constructing bins as \(\mathcal{S}_{l}=\{\max(s_{m_{l}-w_{l}/2},s_{1}),\ldots,s_{m_{l}},\ldots,\min(s_{m_{l}+w_{l}/2},s_{J})\}\), resulting in bins with as few as \(w_{l}/2+1\) data points.

Step 2: Fit a local GLMM in every bin. Specifically, in each bin \(l=1,\ldots,L\) we estimate a separate model of the form \(g[E\{Z_{i}(s_{j})\}]=\beta_{0}(s_{m_{l}})+b_{i}(s_{m_{l}})=\eta_{i}(s_{m_{l}})\) for \(s_{j}\in\mathcal{S}_{l}\). Here \(\beta_{0}(s_{m_{l}})\) is a fixed effect mean and \(b_{i}(s_{m_{l}})\) is a random intercept, both evaluated at the center of the bin, \(s_{m_{l}}\). From these models we obtain estimates of the global mean, \(\widehat{\beta}_{0}(s_{m_{l}})\), and predictions of the subject-specific random effects \(\widehat{b}_{i}(s_{m_{l}})\). These predictions are _not_ on the original grid the functions were observed on, but rather on the midpoints \(\{s_{m_{1}},\ldots,s_{m_{L}}\}\subset\mathcal{S}\). Importantly, this model is misspecified because it assumes a constant effect in each bin while, in reality, both \(\beta_{0}(\cdot)\) and \(b_{i}(\cdot)\) vary smoothly over \(\mathcal{S}\). This can lead to biased estimates of \(\beta_{0}(s_{m_{l}})\) and \(b_{i}(s_{m_{l}})\), which has the potential to induce bias in our estimator of \(K_{b}\). This bias, under a reasonable choice of bin width, is largely absorbed in the eigenvalues of the estimated covariance operator, with the eigenfunctions being well estimated.
As the method hinges on estimating the eigenfunctions well in step 3, but not necessarily the eigenvalues, this is not a problem for the method. Below, we deviate briefly from the description of the fast-GFPCA algorithm to discuss the issue of bias in more detail, as this discussion provides critical insight into why the proposed method works in practice and is reasonable.

Discussion of Binning Induced Bias in Estimation of \(\mathbf{K_{b}}\). We argue that the binning procedure induces bias in the estimated latent functions evaluated at the midpoints of each bin, which in turn induces bias in the estimated covariance function \(K_{b}\). The point is most readily shown when we assume that the distribution of the observed data is continuous uniform along the domain. For ease of presentation, assume that the domain is \(S=[0,1]\). It follows that the points are distributed with density \(f_{s}(s)=1\), \(s\in[0,1]\). Further, conditional on \(s\in S_{l}\) (\(s\) is in the \(l^{\text{th}}\) bin), the points are iid uniform with conditional density \(f_{s|l}(s)=|S_{l}|^{-1}\) (the inverse of the interval width). Let the superscript \({}^{\text{bin}}\) denote the estimand under the misspecified model, to differentiate it from the "true" latent process. The misspecified model in Step 2 estimates \(\eta_{i}^{\text{bin}}(s_{m_{l}})=E[\beta_{0}(s)+b_{i}(s)|\{\xi_{ik}:1\leq k\leq K\},s\in\mathcal{S}_{l}]=E[\beta_{0}(s)|s\in\mathcal{S}_{l}]+E[b_{i}(s)|\{\xi_{ik}:1\leq k\leq K\},s\in\mathcal{S}_{l}]\), with bias which can be split into population, \(\text{Bias}\left[\beta_{0}^{\text{bin}}(s_{m_{l}})\right]=\beta_{0}(s_{m_{l}})-E[\beta_{0}(s)|s\in\mathcal{S}_{l}]\), and subject-specific, \(\text{Bias}\left[b_{i}^{\text{bin}}(s_{m_{l}})\right]=b_{i}(s_{m_{l}})-E[b_{i}(s)|\xi_{ik},s\in\mathcal{S}_{l}]\), components. The key to fast-GFPCA is the ability to obtain reasonable estimates of the covariance operator \(K_{b}=\text{Var}\left[b_{i}(s)\right]\) in Step 3. The additive bias in \(\beta_{0}^{\text{bin}}(s_{m_{l}})\) does not affect the estimator of the covariance operator in Step 3, as it is common to all subjects (i.e., \(\text{Cov}(a+X,Y)=\text{Cov}(X,Y)\)), so we focus on the effect of the subject-specific bias, given by

\[\begin{split}\text{Bias}\left[b_{i}^{\text{bin}}(s_{m_{l}})\right]&=b_{i}(s_{m_{l}})-E[b_{i}(s)|\xi_{ik},s\in\mathcal{S}_{l}]\\ &=\sum_{k=1}^{K}\phi_{k}(s_{m_{l}})\xi_{ik}-E\left[\sum_{k=1}^{K}\phi_{k}(s)\xi_{ik}\,\Big{|}\,\xi_{ik},s\in\mathcal{S}_{l}\right]\\ &=\sum_{k=1}^{K}\phi_{k}(s_{m_{l}})\xi_{ik}-\sum_{k=1}^{K}\xi_{ik}\int_{s\in S_{l}}\phi_{k}(s)f_{s|l}(s)ds\\ &=\sum_{k=1}^{K}\phi_{k}(s_{m_{l}})\xi_{ik}-\sum_{k=1}^{K}\xi_{ik}|S_{l}|^{-1}\int_{s\in\mathcal{S}_{l}}\phi_{k}(s)ds\end{split}\]

with \(|S_{l}|^{-1}\) operating as a normalizing constant so that \(\lim_{w_{l}\to 0}|S_{l}|^{-1}\int_{s\in\mathcal{S}_{l}}\phi_{k}(s)ds=\phi_{k}(s_{m_{l}})\). That is, as the bin width tends to \(0\) we recover the eigenfunction evaluated at the midpoint of the current bin.
Then

\[\begin{split}\text{Cov}\left[b_{i}^{\text{bin}}(s_{m_{u}}),b_{i}^{\text{bin}}(s_{m_{v}})\right]&=\sum_{k=1}^{K}\lambda_{k}\left[|S_{u}|^{-1}\int_{\mathcal{S}_{u}}\phi_{k}(s)ds\right]\left[|S_{v}|^{-1}\int_{\mathcal{S}_{v}}\phi_{k}(s)ds\right]\\ &\approx\sum_{k=1}^{K}\{\lambda_{k}|S_{u}|^{-1}|S_{v}|^{-1}\}\phi_{k}(s_{m_{u}})\phi_{k}(s_{m_{v}})\;.\end{split}\]

The last approximation is related to the mean value theorem, which states that \(\frac{1}{|S_{u}|}\int_{S_{u}}\phi_{k}(s)ds\approx\phi_{k}(s_{m_{u}})\) and \(\frac{1}{|S_{v}|}\int_{S_{v}}\phi_{k}(s)ds\approx\phi_{k}(s_{m_{v}})\), so long as \(|S_{u}|\) and \(|S_{v}|\) are not large relative to the curvature of \(\phi_{k}\). If the function \(\phi_{k}(\cdot)\) is constant or linear in any of these intervals, the approximation is actually an equality. This equation shows that by diagonalizing \(\text{Cov}\left[b_{i}^{\text{bin}}(s_{m_{u}}),b_{i}^{\text{bin}}(s_{m_{v}})\right]\) we obtain close approximations of the eigenfunctions \(\phi_{k}(\cdot)\), but not of the eigenvalues \(\lambda_{k}\). Indeed, the eigenvalues are re-scaled by \(|S_{u}||S_{v}|\), which is the square of the length of the approximating interval when using a constant bin width. This also explains why all the bias in the covariance estimation is absorbed by the eigenvalues and passed on, implicitly, to the scores. Therefore, if we diagonalize the linear predictors obtained in this step we obtain unbiased eigenfunctions but biased covariance, eigenvalues, scores, and subject-specific trajectories.

Step 3: Use the predicted responses on the linear predictor scale, \([\{\widehat{\eta}_{i}(s_{m_{l}})\},1\leq i\leq N,1\leq l\leq L]\), to estimate \(\widehat{K}_{b}(u,v)=\text{Cov}\{\widehat{\eta}_{i}(u),\widehat{\eta}_{i}(v)\}\) for \(u,v\in\{s_{m_{1}},\ldots,s_{m_{L}}\}\) via the fast covariance estimation (FACE) method implemented in the refund::fpca.face() function. Obtain the estimated eigenfunctions, \([\{\hat{\phi}_{k}(s_{m_{l}})\},1\leq l\leq L,1\leq k\leq K]\), of the covariance operator \(\widehat{K}_{b}\), where the number of eigenfunctions, \(K\), is selected using, for example, the percent variance explained (e.g., 95% or 99%).

Step 4: Estimate the GFPCA model conditional on the basis functions obtained in Step 3. Because the basis functions are estimated on a different grid from that of the observed data, we first project each eigenfunction onto the rich B-spline basis used in the FACE component of the algorithm in Step 3. This provides an estimate of the eigenfunctions at every point where data was originally sampled. After these projections, fast-GFPCA becomes the following generalized linear mixed model (GLMM)

\[g(E[Z_{i}(s)]|\{\hat{\phi}_{k}(s):1\leq s\leq J,1\leq k\leq K\})=\beta_{0}(s)+\sum_{k=1}^{K}\xi_{ik}\hat{\phi}_{k}(s)\;, \tag{2}\]

where \(\xi_{ik}\sim N(0,\sigma_{k}^{2})\) are mutually independent and \(\beta_{0}(s)\) is an unspecified smooth function. That is, given the estimates \(\hat{\phi}_{k}(\cdot)\), this is a GLMM with \(K\) uncorrelated random slopes. Using a principal components decomposition with uncorrelated random slopes simplifies the random effects covariance structure such that only \(K\) variance parameters need to be estimated, which contributes to computational efficiency. If we assume a parametric form for \(\beta_{0}(s)\), any generalized linear mixed model software can be used. When \(\beta_{0}(s)\) is modeled nonparametrically, we estimate Model 2 using the mgcv::bam() function Wood et al.
(2017) with fast REML smoothing parameter selection and the argument discrete=TRUE Li and Wood (2020). This approach is highly computationally efficient; see the example code in Section 2.3. Alternative software for estimating additive generalized linear mixed models is available and may be faster and more memory efficient in certain situations; specifically, the mgcv::gamm() Wood (2011) and gamm4::gamm4() Wood et al. (2013) functions, which provide interfaces between the mgcv and nlme Pinheiro and Bates (2000) and lme4 Bates et al. (2015) packages, respectively.

### Fast GFPCA: Example Code

A key appeal of fast-GFPCA is that it may be implemented by anyone familiar with mixed model software. The code below illustrates how fast-GFPCA can be implemented on a subset of the binary NHANES active/inactive profiles data used in the application described in Section 3. For illustration we use overlapping windows and bin width \(w_{l}=10\) for \(l=1,\ldots,1440\) (both \(w_{l}\) and \(l\) are expressed in minutes). The general organization of the code follows the four steps of the fast-GFPCA algorithm. This code can also be run using a one-line implementation with the accompanying fastGFPCA package.

```
library("refund"); library("tidyverse"); library("lme4"); library("mgcv")

df <- readRDS("NHANES_example.rds")  # read in the data
J  <- length(unique(df$index))       # dimension of the functional domain
N  <- length(unique(df$id))          # number of study participants
sind <- 1:J                          # observation grid

## Step 1: Binning decisions
bin_len <- 10   # bin width (w)
s_m     <- 1:J  # bin midpoints s_{m_l}

## Step 2: Fit local binned GLMMs
fit_ls <- vector(mode = "list", length = J)  # empty list to store results
for (j in s_m) {  # loop over bins
  ## get indices associated with current bin S_l
  ## Note: this data is cyclic, so we look across the domain
  ## using the modulo (%%) function
  sind_j <- ((j - bin_len / 2):(j + bin_len / 2)) %% 1440
  sind_j[sind_j == 0] <- 1440
  ## subset to the current indices
  df_j <- df %>% filter(index %in% sind[sind_j])
  ## fit the local, binned GLMM:
  ## g(E[Z_i(s)] | s_j in S_l) = beta_0(s_{m_l}) + b_i(s_{m_l})
  fit_j <- glmer(value ~ 1 + (1 | id), data = df_j, family = binomial)
  ## store results (id, \hat{eta}_i(s_{m_l}), s_{m_l})
  fit_ls[[j]] <- data.frame("id"    = 1:N,
                            "eta_i" = coef(fit_j)$id[[1]],
                            "s_m"   = j)
}
## bind elements of the list row-wise
fit_df <- bind_rows(fit_ls)

## Step 3: FPCA on the binned latent estimates
fpca_latent <- fpca.face(matrix(fit_df$eta_i, N, J, byrow = FALSE),
                         pve = 0.95, argvals = sind, knots = 20, lower = 0)

## Step 4: Re-fit the GLMM on the full data
## Note that here we do not need to interpolate the eigenfunctions
## as the bin midpoints contain the original observation points
## data frame of eigenfunctions (take the first four)
phi_mat <- data.frame("index" = s_m, fpca_latent$efunctions[, 1:4])
colnames(phi_mat)[2:5] <- paste0("Phi", 1:4)
## merge the data
df <- df %>% left_join(phi_mat, by = "index") %>% mutate(id_fac = factor(id))
## re-fit the model using mgcv::bam()
fit <- bam(value ~ s(index, bs = "cc", k = 20) +
             s(id_fac, by = Phi1, bs = "re") + s(id_fac, by = Phi2, bs = "re") +
             s(id_fac, by = Phi3, bs = "re") + s(id_fac, by = Phi4, bs = "re"),
           data = df, family = binomial, discrete = TRUE, method = "fREML")
```

For presentation simplicity, the code assumes that the data are ordered by subject and then by the domain of the function. If the data are not sorted like this, slight modifications are needed in Steps 3 and 4, or the data can be re-ordered before the code is run. Moreover, the subject identifier needs to be a factor variable to fit the correct model, which is why the variable id_fac was created in Step 4. Users unfamiliar with the mgcv package may be confused by the syntax in the call to mgcv::bam() in Step 4. The expression s(id_fac,by=Phi1,bs="re") constructs independent normally distributed random slopes.
The same random effects specification in lme4 would be fit <- glmer(value~1 + (0+Phi1|id) + (0+Phi2|id) + (0+Phi3|id) + (0+Phi4|id), data=df, family=binomial); however, lme4::glmer() cannot model \(\beta_{0}(s)\) nonparametrically, so a parametric form for \(\beta_{0}(s)\) would need to be specified. We hope that seeing the complete code will: (1) show how easy it is to implement the proposed methods; (2) provide a modular and modifiable platform that can be used in similar situations which require specific adjustments; and (3) support the philosophy that analytic methods are not really methods without supporting software.

For user convenience we have developed fastGFPCA, an R package available on GitHub at [https://github.com/julia-wrobel/fastGFPCA](https://github.com/julia-wrobel/fastGFPCA), which wraps the code for implementing fast-GFPCA. The detailed description of the package functionality, including examples for both binomial and Poisson distributed functional data, is contained in the fastGFPCA vignette associated with the package. Briefly, the primary function in the package is fast_gfpca(). The function arguments overlap and binwidth allow the user to select a bin width and choose whether or not to construct overlapping windows for their data. The argument family is fed to the functions lme4::glmer() and mgcv::bam() and is used to specify an exponential family distribution and link function using the syntax, for example, family = binomial(link = "logit"). The fast_gfpca() function returns an object of class "fpca" and results can be easily visualized using the refund.shiny package Wrobel et al. (2016).

### Practical Considerations

Several practical considerations must be taken into account when applying the fast-GFPCA method. We discuss these considerations in detail below.

The Need for Step 4. The need for Step 4 stems from the fact that Step 3 produces unbiased estimators of the eigenfunctions but biased estimators of the subject-specific random effects. This is driven primarily by bias in the estimated eigenvalues of the covariance operator, which is then passed on to the scores. This effect can be seen in Figure 1, which plots the estimated scores from step 3 and step 4 versus the true scores (Figure 1(A)), the estimated scores from step 3 versus those from step 4 (Figure 1(B)), and the estimated curves on the log odds scale using step 3 versus step 4 (Figure 1(C)). The data are generated as binary functional data according to our simulation study (see Section 4 for details), and fast-GFPCA estimates the true eigenfunctions well in these data. From Figures 1(A) and 1(B), we see not only that the scores obtained from step 3 are biased (far away from the identity line, Figure 1(A)), but also that the scores in step 4 are effectively a linear re-scaling of those from step 3 (near perfect correlation, but points away from the identity line, Figure 1(B)). This bias results in estimated log-odds which are substantially less correlated with the true latent functions (Figure 1(C)). Note that the slopes in Figure 1(B) are nearly identical, which matches the result shown in Section 2, Step 2 (discussion on binning bias), that the eigenvalues are biased by a constant multiplicative factor (and thus the scores are as well).

Identifiability. Local GLMMs may be non-identifiable or model fitting may not converge. An example is when data are binary and all (or nearly all) observations are either 0 or 1 for every study participant in a particular bin. In this case the local model is non-identifiable. Potential solutions include choosing a wider bin width, imposing a lower or upper bound on the linear predictor scale, or modifying the data. For example, Agresti and Caffo (2000) showed that adding two successes and two failures improves the performance of confidence intervals when estimating a probability from binary data. The idea can be used in our context by adding two successes and two failures in every bin. For our application and simulations, increasing the bin size was enough to ensure excellent performance of the methods.

Bin width. As mentioned in Section 2, the choice of bin width is an important tuning parameter in the fast-GFPCA algorithm. Our simulation study presented in Section 4 illustrates this point. There is a need to balance choosing a bin size that is small enough to estimate the curvature of the latent process but not so small as to make GLMM fitting unstable. A possible approach is to start with bins just large enough for the GLMMs to fit the data, then increase the bin sizes and compare the stability of the estimators. An alternative would be to conduct smoothing at the subject-specific level and inspect the plots to explore the complexity of the underlying functions.

Non-overlapping versus overlapping windows. The choice of whether to use non-overlapping versus overlapping windows will vary by application. Using non-overlapping windows reduces the number of local GLMMs that need to be estimated, though the computational gains are minor. In contrast, overlapping windows allow estimation at a finer grid of points \(s_{m_{1}},\ldots,s_{m_{L}}\), but may induce spurious correlation in the estimated latent processes. The effects of these autocorrelations, if any, are not currently known in applications, though we illustrate that the method can result in under-smoothing in our data application. A possible technical solution could be to adapt the smoothing selection criteria of FACE to allow for autocorrelated data, though this exceeds the scope of our current work.
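To make the binning choices of Step 1 concrete, the Python sketch below constructs the index sets \(\mathcal{S}_{l}\) on a regular 1-based grid, covering both the cyclic case used for the minute-level NHANES data and the truncated non-cyclic case. The function name and the stride argument (which yields overlapping windows when smaller than the bin width) are illustrative, not part of the fastGFPCA package.

```
import numpy as np

def make_bins(J, w, stride, cyclic=True):
    """Index sets S_l for bins of width w centered at midpoints
    m_l = 1, 1 + stride, ... (1-based indices, as in the text)."""
    bins = []
    for m in range(1, J + 1, stride):
        idx = np.arange(m - w // 2, m + w // 2 + 1)  # w + 1 points
        if cyclic:
            idx = (idx - 1) % J + 1                  # wrap around the day
        else:
            idx = idx[(idx >= 1) & (idx <= J)]       # truncate at the boundary
        bins.append(idx)
    return bins

# Example: minute-level day, w = 6, overlapping windows at every minute.
# Non-overlapping windows would instead use stride = w + 1, since each
# bin holds w + 1 points.
bins = make_bins(J=1440, w=6, stride=1, cyclic=True)
# bins[0] is {1438, 1439, 1440, 1, 2, 3, 4}, matching the example in Step 1
```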
Potential solutions include choosing a wider bin width, imposing a lower or upper bound on the linear predictor scale, or modifying the data. For example, Agresti and Caffo (2000) showed that adding two successes and two failures improves the performance of confidence intervals when estimating a probability from binary data. The idea can be used in our context by adding two successes and two failures in every bin. For our application and simulations, increasing the bin size was enough to ensure excellent performance of the methods. Bin width.As mentioned in Section 2, the choice of bin width is an important tuning parameter in the fast GFPCA algorithm. Our simulation study presented in Section 4 illustrates this point. There is a need to balance choosing a bin size that is small enough to estimate the curvature of the latent process but not too small to make GLMM fitting unstable. A possible approach is to consider bins of increased sizes to the level where GLMMs can fit the data. The bin sizes can be increased and the stability of estimators compared. An alternative would be to conduct smoothing at the study specific level and inspect the plots to explore the complexity of the underlying functions. Non-overlapping versus overlapping windows.The choice of whether to use non-overlapping versus overlapping windows will vary by application. Non-overlapping windows reduces the number of local GLMMs that need to be estimated, though computational gains are minor. In contrast, overlapping windows allows estimation at a finer grid of points \(s_{m_{1}},\ldots,s_{m_{L}}\), but may induce spurious correlation is the estimated latent processes. The effects of these auto correlations, if any, are not currently known in applications, though we illustrate that the method can result in under-smoothing in our data application. A possible technical solution could be to adapt the smoothing selection criteria of FACE to allow for auto correlated data, though this exceeds the scope of our current work. ### Alternative Approaches There are at least two alternatives to the proposed approach based on first smoothing individual curves. First, one may consider first smoothing the binary functional data on the response scale and ignoring the binary nature of the data. These smoothed data may then be transformed using a link function (e.g., logit for binary data), and then apply fPCA to the estimates on the latent scale. Alternatively, smoothing may be done by fitting generalized additive models \(g(E[Y_{i}(s)])=\beta_{0i}+f_{i}(s)\) separately for each individual, allowing the linear predictor to vary smoothly over the domain where \(f_{i}(s)\) is modelled using a rich basis. fPCA may then be applied to the resulting estimates on the latent scale. The latter approach was recently proposed by Weishampel et al. (2023). Both of these approaches are intuitive and may be faster in certain scenarios. While they could be used as quick exploratory tools, we will point out some of their hidden drawbacks. Consider the first approach. If smoothing is done on the response scale and we denote the smoothed response as \(\tilde{Y}_{i}(s)\), a fundamental requirement is for \(g\{\tilde{Y}_{i}(s)\}\) to be finite and defined. For example, for binary data one would need \(\tilde{Y}_{i}(s)\in(0,1)\) and preferably farther away from the boundaries to ensure that \(\text{logit}\{\Pr(\tilde{Y}_{i}(s)=1)\}\) is defined and well behaved. 
In areas where there are many zeros (e.g., during the night for physical activity) or many ones (e.g., during the morning for physical activity), these smooth estimators will be 0 or 1, respectively. Thus, the smooth estimates would need to be bounded away from 0 and 1 using artificial tuning parameters before they can be inverse transformed. This makes the approach dependent on ad hoc choices and may introduce infinite bias due to arbitrary choices of bounding parameters. Regarding the second approach, the individual models may not be identifiable. Indeed, consider binary functional data where large regions of the domain are all 0 or 1. In these regions, a generalized additive model is identified only due to the smoothness assumption on the coefficient function \(f_{i}(s)\), which may result in the divergence of the estimated log odds. Moreover, all smoothing is done at the individual level, without taking into account the information from the other subjects. In our data application, many participants are completely inactive during the very early morning hours (2AM-5AM). In contrast, the fast-GFPCA approach borrows strength from the other study participants to provide reasonable estimates at the subject level in areas where there is very little information for many, but not all, study participants.

In addition to the issues mentioned above, both approaches involve tuning parameters related to the estimation method for the individual model fits. For example, in the context of penalized splines, one must choose the number of splines, the basis, and a method for smoothing parameter selection. The latter point seems especially important given that the generative model implies that the amount of "wiggliness" of each of the \(i=1,\ldots,N\) latent functions is the same. Fitting separate models does not enforce this constraint and may lead to some functions being estimated to be perfectly smooth, while others are quite wiggly. Moreover, while fast-GFPCA can deal with substantial missing areas in a data set, the two alternative methods described here cannot, especially when combined with excess zeros or ones. However, the most important drawback of both these methods was that we were not able to successfully use them in our NHANES application; see an extensive discussion of these points in Section S-2 of the supplemental material. Moreover, fast-GFPCA can easily be extended to covariate-adjusted and multilevel/structured generalized functional data. It is not immediately clear that either of the two approaches referenced above is readily extended to these scenarios.

## 3 Application

### NHANES Accelerometry Data

The National Health and Nutrition Examination Survey (NHANES) is a large, ongoing study which provides a nationally representative sample of the non-institutionalized US population. NHANES is conducted by the Centers for Disease Control and Prevention (CDC) and collects data in two-year waves with the goal of providing information on the health and nutrition of the US population. Wearable accelerometers were deployed in the 2003-2004, 2005-2006, 2011-2012, and 2013-2014 waves of NHANES. The 2003-2006 accelerometry component of the NHANES study involved participants wearing a waist-worn accelerometer during waking hours. A guide to analyzing these data is provided in Leroux et al. (2019), and an R data package, rnhanesdata Leroux (2022), is publicly available on Github at [https://github.com/andrew-leroux/rnhanesdata](https://github.com/andrew-leroux/rnhanesdata).
The 2011-2014 accelerometer data, released in 2021, are provided at multiple resolutions: the subject, minute, and sub-second level. The subject- and minute-level data summarize individuals' acceleration patterns based on the new Monitor-Independent Movement Summary (MIMS) unit John et al. (2019). Here, we use the 2011-2014 minute-level MIMS data to construct active/inactive profiles for participants. To obtain binary active/inactive profiles, we first threshold participants' daily MIMS data as \(Y_{ih}^{B}(s)=1\{Y_{ih}(s)\geq 10.558\}\), where \(Y_{ih}(s)\) corresponds to the \(i^{\text{th}}\) individual's MIMS unit on day \(h\) at minute \(s\). We then define their active/inactive profile as \(Z_{i}(s)=\text{median}\{Y_{ih}^{B}(s):h=1,\ldots,H_{i}\}\) (see the code sketch below). For example, if \(H_{i}=7\), \(Z_{i}(s)\) is 0 if study participant \(i\) was inactive at time \(s\) for at least 4 days and 1 otherwise. When the number of good days, \(H_{i}\), is even, the median is defined as the \((\frac{H_{i}}{2}+1)^{\text{th}}\) largest observation. The threshold for active/inactive on the MIMS unit scale is chosen to be 10.558, as suggested in Karas et al. (2022), though our methodology would apply similarly to other thresholds. The analytic sample contains data from \(N=4286\) participants with 1440 observations per person (minutes in a day). Although NHANES is a nationally representative sample, obtaining nationally representative estimates for population quantities and model parameters requires the use of survey weights and the survey design Korn and Graubard (2011); Lumley (2004); Skinner et al. (2017). The intersection of survey statistics and functional data analysis is a relatively new area of research Cardot et al. (2013a,b, 2014); Parker and Holan (2022) and is beyond the scope of the current work. Thus we do not account for the NHANES survey design in our data application, though it is an important direction for future methodological development.

### Comparison methods and criteria

We apply fast-GFPCA (labeled fastGFPCA) to the NHANES data using bin widths (\(w_{l}\)) of 6, 10, and 30 minutes, and both overlapping and non-overlapping intervals. We compare these fits to the fast binary variational FPCA (labeled vbFPCA) implementation in the registr::bfpca() function Wrobel (2018); Wrobel et al. (2019). We also consider a modified version of fast-GFPCA (labeled modified fastGFPCA) which further speeds up fastGFPCA in Step 4 by fitting model (2) to four sub-samples of the data. The modified fastGFPCA approach is described in more detail in Section S-3 of the supplemental material. The approach described in Gertheiss et al. (2017) was not computationally feasible for the NHANES data. To facilitate comparisons across models, we fix the number of principal components across all methods to \(K=4\). For each approach we compare model parameters and predictive performance. For model parameters we compare the first four estimated eigenfunctions and the population mean function. For predictive performance, we compare the estimated in-sample log-loss associated with each model fit and the estimated area under the receiver operating characteristic curve (AUC). Finally, we compare computation times across methods. Though Step 2 of fast-GFPCA could easily be parallelized, computation times are reported for serialized fitting of the models to provide an upper bound for this step.
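Before turning to the results, the active/inactive profile construction above can be made concrete. The following is a minimal Python/numpy sketch, assuming one subject's minute-level MIMS values are available as an \(H_{i}\times 1440\) array; it interprets the even-day median convention as requiring activity on more than half of the days, and all variable names are illustrative.

```
import numpy as np

def active_profile(mims, threshold=10.558):
    """mims: H x 1440 array of one subject's minute-level MIMS values
    across H good days. Returns the 1440-vector of 0/1 indicators Z_i."""
    yb = (mims >= threshold).astype(int)  # Y^B_ih(s) = 1{Y_ih(s) >= 10.558}
    H = yb.shape[0]
    # Minute-wise median across days; with the (H/2 + 1)-th largest
    # convention for even H, this reduces to "active on strictly more
    # than half of the days".
    return (yb.sum(axis=0) > H / 2).astype(int)
```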
Substantial differences were identified between the estimated population means and eigenfunctions from the vbFPCA approach and the fast-GFPCA approach (see commentary in Section 3.3). To investigate these differences, a brief simulation study based on the NHANES data was conducted, described and summarized in Section S-4.1 of the supplemental material. ### Results #### 3.3.1 Data Application Model Parameters. Figure 2 displays the estimated population means \(\hat{\beta}_{0}(s)\) (Figure 2A) and the first four eigenfunctions \(\hat{\mathbf{\phi}}(s)\) (Figure 2B) of the latent process. Each column of Figure 2 corresponds to a different model fit, where color indicates the approach (vbFPCA in red, columns 1-2, and fast-GFPCA in blue, columns 3-8). Modified fast-GFPCA estimates a population mean function for each of four randomly selected sub-samples, which are then averaged to produce the overall \(\hat{\beta}_{0}(s)\). These sub-sample estimates of \(\beta_{0}(s)\) are shown as dashed lines in Figure 2A, columns 3-8. As the modified fast-GFPCA uses the population-level \(\hat{\mathbf{\phi}}(s)\), there are no corresponding dashed lines for the eigenfunctions. The population mean is fairly stable across sub-samples for the modified fast-GFPCA approach (Figure 2A, columns 3-8). We also find excellent agreement between the estimated linear predictors of the modified and unmodified fast-GFPCA approaches; see Figure 3 and the associated discussion. Moreover, \(\hat{\beta}_{0}(s)\) and \(\hat{\mathbf{\phi}}(s)\) are similar across the fast-GFPCA fits (overlapping versus non-overlapping windows, and bin widths), suggesting that the fast-GFPCA algorithm is fairly robust to the choice of input parameters in the NHANES data. However, there are substantial differences between vbFPCA and fast-GFPCA both in terms of the estimated population mean functions and the first three eigenfunctions. The largest differences in \(\hat{\beta}_{0}(s)\) occur around 6AM-8AM and 9PM-12AM, where vbFPCA respectively over- and under-estimates \(\beta_{0}(s)\) relative to fast-GFPCA. The vbFPCA \(\hat{\beta}_{0}(s)\) suggests that, for participants with \(b_{i}(s)=0\), the probability of being active between 6AM-8AM is around 80-90%, compared to around 50% from fast-GFPCA. This result is unexpected and does not match the observed data, where we see approximately a 50% probability of being active during this time. One potential explanation is that vbFPCA provides biased estimators of the mean, with the bias shifted into the latent random process variation. This hypothesis appears to be supported by our data-driven simulation in Section S-4.1 of the supplemental material. It is unclear at this time what is driving the observed bias in the vbFPCA approach, as the same behavior is not seen in the simulations presented in Section 4, but it may be related to the high proportion of observed 0s (inactive periods) during the nighttime hours in our data application. Linear Predictor. Our goal in this section is to understand how differences in the estimated model parameters \(\hat{\beta}_{0}(s)\) and \(\hat{\mathbf{\phi}}(s)\) across methods lead to differences in the estimated linear predictor \(\hat{\eta}_{i}(s);i=1,\ldots,N\). To capture this, for each minute of the day we regressed \(\hat{\eta}_{i}(s)\) from fast-GFPCA with non-overlapping windows and \(w=6\) on \(\hat{\eta}_{i}(s)\) from each of four other GFPCA models. Figure 3 shows the results of these time-specific linear regressions.
These regressions have the form \(E[Y]=\mathcal{B}_{0}+\mathcal{B}_{1}X\), where \(Y\) is \(\hat{\eta}_{i}(s)\) from fast-GFPCA with non-overlapping windows and \(w=6\), and \(X\) is \(\hat{\eta}_{i}(s)\) estimated by: (1) fast-GFPCA with non-overlapping windows and \(w=30\) (orange lines); (2) modified fast-GFPCA with non-overlapping windows and \(w=6\) (blue lines); (3) vbFPCA with Kt=8 (green lines); and (4) vbFPCA with Kt=30 (yellow lines). The first and second panels display the regression coefficients \(\hat{\mathcal{B}}_{0}\) and \(\hat{\mathcal{B}}_{1}\), respectively, and the third panel displays the percent variance explained (\(R^{2}\)). The black dashed line corresponds to perfect agreement between \(Y\) and \(X\). We observe almost perfect agreement between the reference model (fast-GFPCA with no overlap and \(w=6\)) and the corresponding modified fast-GFPCA model (blue lines, intercept \(\approx 0\), slope \(\approx 1\), \(R^{2}\approx 1\), left to right panels, respectively). Similarly, we observe a near perfect linear association with the fast-GFPCA model with larger bin width \(w=30\) (orange lines) across the day (\(R^{2}\geq 0.95\), right panel), with a nearly 1-to-1 relationship (intercept \(\approx 0\), slope \(\approx 1\)) during the active hours of the day (\(\approx 10\)AM to 8PM). During the active hours of the day, the vbFPCA results suggest a similar 1-to-1 trend in the mean, though the lower \(R^{2}\) suggests less agreement between the fast-GFPCA and vbFPCA approaches than across fast-GFPCA estimates with different input parameters. Predictive Accuracy. The observed differences in the estimated linear predictor do not appear to translate into different predictive accuracy in terms of either AUC or log loss; see the two rightmost columns of Table 1. A possible explanation for the similar predictive accuracy but different estimation performance is that the disagreement in predictions between models occurs primarily during the nighttime hours (Figure 3), when the probability of an individual being active is generally very low. Computation Time. Computation times for each model fit are presented in the middle of Table 1 for fast-GFPCA (top rows) and the vbFPCA approach (bottom rows). For fast-GFPCA, computation times are broken down by step of the algorithm. The fast-GFPCA algorithm (top rows, "Modified" = "No") requires about 3-4 hours, while vbFPCA requires 7 minutes for Kt=8 and 20 minutes for Kt=30. These computation times are driven by Step 4, which is unavoidable if the model is fit on the entire NHANES data set. Indeed, the fast-GFPCA approach here is the simplest GLMM that can be fit while maintaining the functional structure of the data. When Step 4 is modified (top rows, "Modified" = "Yes") computation times decrease substantially and become comparable to the vbFPCA approach. From a practical perspective we have not found any substantial differences between the results obtained from the fast-GFPCA and modified fast-GFPCA approaches. ## 4 Simulation Study Our simulations are designed to (1) quantify the computational efficiency and scalability of fast-GFPCA as sample and grid size increase, (2) evaluate the accuracy of our method in comparison with existing approaches for generalized and binary FPCA, and (3) understand the behavior of our method under different data binning strategies from Step 1 of our estimation algorithm in Section 2.2.
For larger simulated datasets we focus on the binary functional data setting because the only existing competing approach that is computationally feasible for large functional datasets is tailored specifically to binary data. For smaller datasets we also evaluate our method in comparison to an existing approach for GFPCA of Poisson functional data. ### Simulation Design We simulate binary functional data for \(N=100,500,1000\) subjects observed on a length \(J=100,500,2000\) grid in \(S=[0,1]\) that is equally spaced and shared across subjects. Poisson functional data are simulated with \(N=50,100\) and \(J=100,200\) due to the computational limitations of competing methods. Curves for subjects \(i=1,\ldots,N\) are drawn from model (2). We construct \(K=4\) principal components, with true eigenvalues \(\lambda_{k}=0.5^{k-1};k=1,2,3,4\). The true eigenfunctions are drawn from one of two scenarios intended to mimic real data settings. In the first setting, latent curves \(\eta_{i}(s)\) and eigenfunctions are periodic, with \(\mathbf{\phi}(s)=\{\sqrt{2}\sin(2\pi s),\sqrt{2}\cos(2\pi s),\sqrt{2}\sin(4\pi s), \sqrt{2}\cos(4\pi s)\}\). The second setting does not exhibit periodicity, with true eigenfunctions given by \(\mathbf{\phi}(s)=\{1,\sqrt{3}(2s-1),\sqrt{5}(6s^{2}-6s+1),\sqrt{7}(20s^{3}-30s^{2}+12s-1)\}\). For most simulation settings we assume \(\beta_{0}(s)=0\); however, for a subset of scenarios we construct a nonzero \(\beta_{0}(s)\) using a B-spline basis, specifically for binary functional data with: (1) \(N,J=(1000,2000)\); and (2) \(N,J=(500,100)\). For binary and Poisson data, \(g(\cdot)\) is taken to be the logit and log link, respectively. For each subject and time point, exponential family observations \(Z_{i}(s)\) are sampled independently from either a Bernoulli distribution with probability \(\operatorname{logit}^{-1}\left[\eta_{i}(s)\right]\), or from a Poisson distribution with rate \(e^{\eta_{i}(s)}\) (a small numerical sketch of this generative mechanism is given below). ### Comparison to Existing Approaches #### 4.2.1 Binary Functional Data We assess the performance of fast-GFPCA across different bin widths \(w_{l}\) and compare non-overlapping and overlapping windows. Specifically, for each simulation scenario we evaluate three different bin widths, \(w_{l}\in(6,10,50)\). We do not estimate fast-GFPCA with \(w_{l}=50\) when \(J<500\), as the large bin size does not make sense in this context. We compare fast-GFPCA with two different binary FPCA approaches, both of which are implemented in the registr package Wrobel (2018); Wrobel and Bauer (2021). The first method to which we compare is the two-step conditional GFPCA model introduced by Gertheiss et al. (2017), which is implemented using the registr::gfpca_twoStep() function and referred to as _tsGFPCA_ in the text and figures below. While gfpca_twoStep() is a general purpose function that can accommodate multiple exponential family distributions, it is computationally intensive and thus impractical for most simulation settings. To reduce this computational burden we only implement tsGFPCA when \(N\in(100,500)\) and \(J=100\). We compare fast-GFPCA in all binary data simulation settings to the vbFPCA algorithm from Wrobel et al. (2019), which is implemented via the registr::bfpca() function and is denoted vbFPCA. This method is highly computationally efficient, but is designed for binary functional data with a logit link and cannot be generalized to other link functions or exponential family distributions.
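For reference, a minimal numpy sketch of the binary data-generating mechanism of Section 4.1 (using the periodic eigenfunctions, eigenvalues \(\lambda_k=0.5^{k-1}\), and \(\beta_0(s)=0\); the sample sizes and seed here are illustrative), together with the MISE criterion used later in Section 4.3, is:

```python
import numpy as np

rng = np.random.default_rng(1)
N, J, K = 100, 500, 4
s = np.linspace(0, 1, J)
phi = np.stack([np.sqrt(2) * np.sin(2 * np.pi * s),
                np.sqrt(2) * np.cos(2 * np.pi * s),
                np.sqrt(2) * np.sin(4 * np.pi * s),
                np.sqrt(2) * np.cos(4 * np.pi * s)])      # (K, J) eigenfunctions
lam = 0.5 ** np.arange(K)                                 # true eigenvalues
xi = rng.normal(0.0, np.sqrt(lam), size=(N, K))           # subject-level scores
eta = xi @ phi                                            # eta_i(s) with beta_0(s) = 0
Z = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta)))           # Bernoulli observations

def mise(est, truth, s):
    """Mean integrated squared error across curves (Section 4.3 criterion)."""
    return np.trapz((est - truth) ** 2, s, axis=-1).mean()
```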
Because the vbFPCA approach models the population mean \(\beta_{0}(s)\) and eigenfunctions \(\mathbf{\phi(s)}\) using a B-spline expansion without a smoothness penalty, the number of basis functions must be manually tuned to obtain optimal smoothness. To address this, for each simulated dataset we implement vbFPCA with \(Kt=8\) basis functions (the package default) and \(Kt=30\). For both competing methods we implement a periodic B-spline basis using the registr option periodic = TRUE. By default in the registr package, eigenfunctions \(\mathbf{\phi(s)}\) are returned unscaled and on a grid of size 100. To enable comparison with results from fast-GFPCA, we linearly interpolate eigenfunctions estimated using _tsGFPCA_ and vbFPCA to a grid of size \(J\) and scale by the square root of the grid length. #### 4.2.2 Poisson Functional Data In the Poisson setting we compare fast-GFPCA with registr::gfpca_twoStep(). Since the comparative method is highly computationally intensive, we only consider small data settings with sample sizes \(N\in(50,100)\) and grid lengths \(J\in(100,200)\). We simulate 100 datasets for each of the six simulation scenarios arising from this combination of grid length and sample size. For fast-GFPCA we compare non-overlapping and overlapping windows, and consider bin widths \(w_{l}\in(6,10)\). ### Evaluation Criteria We compare the performance of the three methods (tsGFPCA, vbFPCA, and fastGFPCA) with respect to accuracy in recovering subject-specific latent means on the linear predictor scale \(\eta_{i}(s)\), accuracy in recovering the population-level mean \(\beta_{0}(s)\) and eigenfunctions \(\boldsymbol{\phi(s)}\), and computational efficiency. Accuracy of the subject-specific latent means across models is quantified using the mean integrated squared error (MISE) across subjects, given by \(\frac{1}{N}\sum_{i=1}^{N}\int_{0}^{1}\left(\hat{\eta}_{i}(s)-\eta_{i}(s)\right)^{2}ds\). Accuracy of eigenfunction estimation is compared using the MISE defined by \(\frac{1}{4}\sum_{k=1}^{4}\int_{0}^{1}\left(\hat{\phi}_{k}(s)-\phi_{k}(s)\right)^{2}ds\), and population mean accuracy is measured using the integrated squared error (ISE). Computation times are reported in minutes. ### Simulation Results: Accuracy Tables 2, 3, and 4 summarize the accuracy of key quantities across methods and simulation scenarios, where outcome data were generated from both binomial and Poisson distributions with periodic true eigenfunctions. Table 2 provides the MISE for \(\hat{\eta}_{i}(s)\), the estimated subject-specific latent means in the linear predictor space, which are log-odds for binomial data and log-rates for Poisson data. Similarly, Tables 3 and 4 summarize the MISE of the eigenfunctions \(\mathbf{\phi}(s)\) and the ISE of the population-level mean \(\beta_{0}(s)\), respectively. #### 4.4.1 Binomial data Table 2 shows that all methods estimate the latent log-odds accurately. Our fastGFPCA approach performs as well as the best competing method, vbFPCA with \(Kt=8\) spline bases, in all but one scenario. Simulated data are periodic, and as a result fastGFPCA with overlapping bins outperforms fastGFPCA with non-overlapping bins. fastGFPCA performs best for the largest bin width, \(w_{l}=50\), except when the grid size \(J\) is smallest (\(J=100\)). However, fastGFPCA performs well across all chosen bin widths. Table 3 indicates that our fastGFPCA approach recovers the true population eigenfunctions comparably to or better than the competing vbFPCA method in every scenario.
For smaller grid sizes \(J\) the vbFPCA approach with \(Kt=8\) performs slightly better, but fastGFPCA outperforms vbFPCA as \(J\) increases. Table 4 shows that fastGFPCA outperforms vbFPCA at recovering the population mean \(\beta_{0}(s)\) in all simulation scenarios. This is likely due to the fact that fastGFPCA penalizes spline coefficients to obtain a flat-line estimate around the true value of \(\beta_{0}(s)=0\), while vbFPCA, which does not penalize smoothness, cannot estimate \(\beta_{0}(s)\) as well when the true function is linear. Similar results for non-periodic data are observed in Supplemental Table S1. Figure 4 highlights these results at a more granular level for one simulation scenario with \(N=500\) subjects, \(J=500\) time points, and a nonzero, nonlinear population mean function \(\beta_{0}(s)\). Specifically, Figure 4 shows the estimated population mean function, \(\hat{\beta}_{0}(s)\), and the first four estimated eigenfunctions, \(\hat{\phi}_{k}(s);k\in 1,\ldots,4\), from 100 simulated datasets. Estimates are presented as red lines for models using the competing vbFPCA approach or blue lines for models using our proposed fastGFPCA method, with dotted black lines representing the true value. All methods provide reasonable results in this simulation setting. Our fastGFPCA approach with overlapping bins and bin width \(w_{l}=50\) provides the best results. The vbFPCA method with \(Kt=8\) spline basis functions also performs well, though it overestimates \(\beta_{0}(s)\) at the beginning of the functional domain \(s\). The fastGFPCA approach with non-overlapping bins and \(w_{l}=50\) overestimates \(\phi_{4}(s)\) at the endpoints of the functional domain, which suggests that a smaller bin width may be more appropriate when the data are not periodic and only non-overlapping bins can be used for fastGFPCA. #### 4.4.2 Poisson data For Poisson distributed functional data the proposed fastGFPCA method was compared with the two-step approach tsGFPCA because there is no variational EM method for Poisson FPCA. Table 2 indicates that fastGFPCA estimates the latent log-rate far better than the competing tsGFPCA method in every simulation scenario. Similarly, Tables 3 and 4 indicate that fastGFPCA recovers the true population mean and eigenfunctions much more accurately than tsGFPCA in every scenario. Of the fastGFPCA approaches, fastGFPCA with overlapping bins and \(w_{l}=10\) tends to perform best, but the different fastGFPCA models perform similarly regardless of window overlap or choice of bin width \(w_{l}\). Figure S2 in the supplemental material provides some intuition as to why tsGFPCA performs poorly in the Poisson setting. This figure shows the estimated population mean function and eigenfunctions from 25 simulated datasets with \(N=100\) subjects and \(J=100\) time points. Model estimates are presented as red lines (tsGFPCA) or blue lines (fastGFPCA). The proposed fastGFPCA method provides reasonable results for bin widths \(w_{l}=6\) and \(w_{l}=10\), but tsGFPCA clearly provides incorrect estimates. ### Simulation Results: Computational Efficiency Table 5 shows median computation time in minutes across methods and simulation scenarios for both binomial and Poisson functional data. Across all scenarios the non-overlapping fastGFPCA approach is more computationally efficient than the overlapping fastGFPCA approach.
Bin widths \(w_{l}\) have a negligible effect on computation time for non-overlapping bins, but when bins are overlapping, computation time increases with increasing bin width. fastGFPCA scales well as the grid size \(J\) increases, but less well as the number of subjects \(N\) increases. Notably, at smaller sample sizes (\(N\in\{100,500\}\)), fastGFPCA is comparably efficient or faster than vbFPCA, a method custom-built for speed. For \(N\geq 1000\), fastGFPCA can be sped up using the techniques discussed in Supplemental Section S-3. The _tsGFPCA_ approach has a median computation time of 92 minutes for Poisson data with just 100 subjects and 200 time points, which indicates that _tsGFPCA_ is prohibitively slow for our data application of 4286 subjects with 1440 time points each. ## 5 Discussion The fast-GFPCA method proposed in this manuscript represents a simple, understandable, and computationally feasible solution to the complex task of estimating functional principal components analysis for non-Gaussian data. In addition, we have provided a mathematical justification for the principles that underlie fast-GFPCA and shown that the method compares favorably to the few existing approaches in terms of both estimation accuracy and computational efficiency. Moreover, existing methods, such as the vbFPCA approach for binary data presented here, may provide biased estimates of model parameters (population mean function and covariance operator), suggesting that the fast-GFPCA approach is a reasonable method for comparison even when an alternative approach is more computationally efficient in a given application. Though the work here shows fast-GFPCA to be fast, accurate, and appropriate for analyzing the motivating NHANES data, methodologic work remains. Specifically, it is unclear at this time how to choose the optimal bin, both with regard to bin width and the decision to use overlapping versus non-overlapping windows for estimation. While a cross-validated prediction error criterion may be a viable option, subject-level cross-validation requires prediction of random effects in non-Gaussian models using participants' data not included in model fitting, a non-trivial problem in non-Bayesian contexts. Moreover, when using non-overlapping windows, automated smoothing parameter selection for FPCA on the latent process in Step 3 of the fast-GFPCA algorithm is unreliable. Here we propose an ad-hoc solution based on visual inspection of the eigenfunctions and/or estimated covariance function. While this is feasible due to the speed of the FACE method implemented in Step 3, we would prefer a fully automated approach. Deriving an appropriate variation of the GCV criterion used by FACE for smoothing parameter selection which accounts for autocorrelated data may improve the method proposed here. Nevertheless, the results of this work represent an encouraging step forward for the estimation of GFPCA in very high dimensional data, specifically large \(N\), a key bottleneck for the application of functional data analysis methods in practice. An appealing feature of fast-GFPCA is that it can be extended to: (1) covariate dependent GFPCA; (2) multilevel, structured, and longitudinal GFPCA. Covariate dependent GFPCA. The fast-GFPCA method easily incorporates covariates into the model. Consider the case of one additional scalar predictor (e.g., age), denoted \(x_{i}\).
Step 2 of fast-GFPCA is simply modified to fit local models of the form \(g(E[Z_{i}(s_{j})|s_{j}\in S_{l}])=\beta_{0}(s_{m_{l}})+\beta_{1}(s_{m_{l}})x_{i}+b_{i}(s_{m_{l}})=\eta_{i}(s_{m_{l}})\). Then, in Step 4, the final GLMM includes the additive term associated with the proposed varying coefficient model. The effort required for this extension is minimal. Nested, longitudinal or crossed design GFPCA. Consider the case when multiple functions are observed per study participant. For example, in the NHANES data each study participant has multiple days of accelerometry data. For notational simplicity assume that there are \(K\) functions for every study participant. The extension to multilevel GFPCA follows naturally from the fast-GFPCA algorithm. Specifically, in Step 2 we fit the multilevel model \(g(E[Z_{ik}(s_{j})])=\beta_{0}(s_{m_{l}})+b_{i}(s_{m_{l}})+v_{ik}(s_{m_{l}})\). Step 3 can then proceed with multilevel FPCA via FACE Cui et al. (2022) to estimate the principal directions of variation at each level. If the functional data have longitudinal Greven et al. (2010) or crossed Shou et al. (2015) designs, the local GLMM can be changed accordingly. ## References * Agresti and Caffo (2000) Agresti, A. and Caffo, B. (2000). Simple and effective confidence intervals for proportions and differences of proportions result from adding two successes and two failures. _The American Statistician_, 54(4):280-288. * Bates et al. (2015) Bates, D., Machler, M., Bolker, B., and Walker, S. (2015). Fitting linear mixed-effects models using lme4. _Journal of Statistical Software_, 67(1):1-48. * Cardot et al. (2013a) Cardot, H., Dessertaine, A., Goga, C., Josserand, E., and Lardin, P. (2013a). Comparison of different sample designs and construction of confidence bands to estimate the mean of functional data: An illustration on electricity consumption. _Survey Methodology_, 39(2):283-301. * Cardot et al. (2013b) Cardot, H., Goga, C., and Lardin, P. (2013b). Uniform convergence and asymptotic confidence bands for model-assisted estimators of the mean of sampled functional data. _Electronic Journal of Statistics_, 7:562-596. * Cardot et al. (2014) Cardot, H., Goga, C., and Lardin, P. (2014). Variance estimation and asymptotic confidence bands for the mean estimator of sampled functional data with high entropy unequal probability sampling designs. _Scandinavian Journal of Statistics_, 41(2):516-534. * Chen et al. (2013) Chen, H., Wang, Y., Paik, M. C., and Choi, H. A. (2013). A marginal approach to reduced-rank penalized spline smoothing with application to multilevel functional data. _Journal of the American Statistical Association_, 108(504):1216-1229. * Chiou et al. (2014) Chiou, J.-M., Chen, Y.-T., and Yang, Y.-F. (2014). Multivariate functional principal component analysis: A normalization approach. _Statistica Sinica_, 24(4):1571-1596. * Cui et al. (2022) Cui, E., Li, R., Crainiceanu, C. M., and Xiao, L. (2022). Fast multilevel functional principal component analysis. _Journal of Computational and Graphical Statistics_, 0(ja):1-33. * Di et al. (2009) Di, C., Crainiceanu, C., Caffo, B., and Punjabi, N. (2009). Multilevel functional principal component analysis. _Annals of Applied Statistics_, 3(1):458-488. * Doherty et al. (2017) Doherty, A., Jackson, D., Hammerla, N., Plotz, T., Olivier, P., Granat, M. H., White, T., van Hees, V. T., Trenell, M. I., Owen, C. G., Preece, S. J., Gillions, R., Sheard, S., Peakman, T., Brage, S., and Wareham, N. J. (2017).
Large scale population assessment of physical activity using wrist-worn accelerometers: The UK Biobank study. _PLOS ONE_, 12(2):1-14. * Gertheiss et al. (2017) Gertheiss, J., Goldsmith, J., and Staicu, A.-M. (2017). A note on modeling sparse exponential-family functional response curves. _Computational Statistics & Data Analysis_, 105:46-52. * Goldsmith et al. (2020) Goldsmith, J., Scheipl, F., Huang, L., Wrobel, J., Di, C., Gellar, J., Harezlak, J., McLean, M., Swihart, B., Xiao, L., Crainiceanu, C., and Reiss, P. (2020). _refund: Regression with Functional Data_. R package version 0.1-23. * Goldsmith et al. (2015) Goldsmith, J., Zipunnikov, V., and Schrack, J. (2015). Generalized multilevel function-on-scalar regression and principal component analysis. _Biometrics_, 71(2):344-353. * Greven et al. (2010) Greven, S., Crainiceanu, C., Caffo, B., and Reich, D. (2010). Longitudinal functional principal component analysis. _Electronic Journal of Statistics_, pages 1022-1054. * Hall et al. (2008) Hall, P., Muller, H.-G., and Yao, F. (2008). Modelling sparse generalized longitudinal observations with latent Gaussian processes. _Journal of the Royal Statistical Society: Series B (Statistical Methodology)_, 70(4):703-723. * Hotelling (1933) Hotelling, H. (1933). Analysis of a complex of statistical variables into principal components. _Journal of Educational Psychology_, 24:498-520. * Peng and Paul (2009) Peng, J. and Paul, D. (2009). A geometric approach to maximum likelihood estimation of the functional principal components from sparse longitudinal data. _Journal of Computational and Graphical Statistics_, 18(4):995-1015. * James et al. (2000) James, G. M., Hastie, T. J., and Sugar, C. A. (2000). Principal component models for sparse functional data. _Biometrika_, 87(3):587-602. * Jolliffe (1982) Jolliffe, I. (1982). A note on the use of principal components in regression. _Journal of the Royal Statistical Society, Series C_, 31(3):300-303. * Jones and Rice (1992) Jones, M. C. and Rice, J. A. (1992). Displaying the important features of large collections of similar curves. _The American Statistician_, 46(2):140-145. * Karas et al. (2022) Karas, M., Muschelli, J., Leroux, A., Urbanek, J. K., Wanigatunga, A. A., Bai, J., Crainiceanu, C. M., and Schrack, J. A. (2022). Comparison of accelerometry-based measures of physical activity: Retrospective observational data analysis study. _JMIR Mhealth Uhealth_, 10(7):e38077. * Korn and Graubard (2011) Korn, E. L. and Graubard, B. I. (2011). _Analysis of Health Surveys_, volume 323. John Wiley & Sons. * Leroux (2022) Leroux, A. (2022). _rnhanesdata: NHANES Accelerometry Data Pipeline_. R package version 1.02. * Leroux et al. (2019) Leroux, A., Di, J., Smirnova, E., McGuffey, E. J., Cao, Q., Bayatmokhtari, E., Tabacu, L., Zipunnikov, V., Urbanek, J. K., and Crainiceanu, C. (2019). Organizing and analyzing the activity data in NHANES. _Statistics in Biosciences_, 11(2):262-287. * Li and Xiao (2021) Li, C. and Xiao, L. (2021). _mfaces: Fast Covariance Estimation for Multivariate Sparse Functional Data_. R package version 0.1-3. * Li and Wood (2020) Li, Z. and Wood, S. N. (2020). Faster model matrix crossproducts for large generalized linear models with discretized covariates. _Statistics and Computing_, 30(1):19-25. * Lumley (2004) Lumley, T. (2004). Analysis of complex survey samples. _Journal of Statistical Software_, 9(1):1-19. * Parker and Holan (2022) Parker, P. A. and Holan, S. H. (2022).
A Bayesian functional data model for surveys collected under informative sampling with application to mortality estimation using NHANES. _Biometrics_, n/a(n/a). * Pearson (1901) Pearson, K. (1901). LIII. On lines and planes of closest fit to systems of points in space. _The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science_, 2(11):559-572. * Pinheiro and Bates (2000) Pinheiro, J. C. and Bates, D. M. (2000). _Mixed-Effects Models in S and S-PLUS_. Springer, New York. * Ramsay and Silverman (2005) Ramsay, J. and Silverman, B. (2005). _Functional Data Analysis_. Springer, New York, NY, USA. * Rice and Silverman (1991) Rice, J. and Silverman, B. (1991). Estimating the mean and covariance structure nonparametrically when the data are curves. _Journal of the Royal Statistical Society. Series B (Methodological)_, 53(1):233-243. * Shou et al. (2015) Shou, H., Zipunnikov, V., Crainiceanu, C., and Greven, S. (2015). Structured functional principal component analysis. _Biometrics_, 71(1):247-257. * Skinner et al. (2017) Skinner, C., Wakefield, J., et al. (2017). Introduction to the design and analysis of complex survey data. _Statistical Science_, 32(2):165-175. * Staniswalis and Lee (1998) Staniswalis, J. and Lee, J. (1998). Nonparametric regression analysis of longitudinal data. _Journal of the American Statistical Association_, 93(444):1403-1418. * van der Linde (2008) van der Linde, A. (2008). Variational Bayesian functional PCA. _Computational Statistics & Data Analysis_, 53(2):517-533. * van der Linde (2009) van der Linde, A. (2009). A Bayesian latent variable approach to functional principal components analysis with binary and count data. _AStA Advances in Statistical Analysis_, 93(3):307-333. * Weishampel et al. (2023) Weishampel, A., Staicu, A.-M., and Rand, W. (2023). Classification of social media users with generalized functional data analysis. _Computational Statistics & Data Analysis_, 179:107647. * Wood (2011) Wood, S. N. (2011). Fast stable restricted maximum likelihood and marginal likelihood estimation of semiparametric generalized linear models. _Journal of the Royal Statistical Society: Series B (Statistical Methodology)_, 73(1):3-36. * Wood et al. (2017) Wood, S. N., Li, Z., Shaddick, G., and Augustin, N. H. (2017). Generalized additive models for gigadata: Modeling the U.K. black smoke network daily data. _Journal of the American Statistical Association_, 112(519):1199-1210. * Wood et al. (2013) Wood, S. N., Scheipl, F., and Faraway, J. J. (2013). Straightforward intermediate rank tensor product smoothing in mixed models. _Statistics and Computing_, 23(3):341-360. * Wrobel (2018) Wrobel, J. (2018). registr: Registration for exponential family functional data. _Journal of Open Source Software_, 3(22):557. * Wrobel and Bauer (2021) Wrobel, J. and Bauer, A. (2021). registr 2.0: Incomplete curve registration for exponential family functional data. _Journal of Open Source Software_, 6(61):2964. * Wrobel et al. (2016) Wrobel, J., Park, S. Y., Staicu, A. M., and Goldsmith, J. (2016). Interactive graphics for functional data analyses. _Stat_, 5(1):108-118. * Wrobel et al. (2019) Wrobel, J., Zipunnikov, V., Schrack, J., and Goldsmith, J. (2019). Registration for exponential family functional data. _Biometrics_, 75(1):48-57. * Xiao et al. (2018) Xiao, L., Li, C., Checkley, W., and Crainiceanu, C. (2018). Fast covariance estimation for sparse functional data. _Statistics and Computing_, 28(3):511-522. * Xiao et al.
(2021) Xiao, L., Li, C., Checkley, W., and Crainiceanu, C. (2021). _face: Fast Covariance Estimation for Sparse Functional Data_. R package version 0.1-6. * Xiao et al. (2016) Xiao, L., Zipunnikov, V., Ruppert, D., and Crainiceanu, C. (2016). Fast covariance estimation for high-dimensional functional data. _Statistics and Computing_, 26(1):409-421. * Yao et al. (2003) Yao, F., Muller, H., Clifford, A., Dueker, S., Follett, J., Lin, Y., Buchholz, B., and Vogel, J. (2003). Shrinkage estimation for functional principal component scores with application to the population kinetics of plasma folate. _Biometrics_, 59(3):676-685. * Yao et al. (2005) Yao, F., Muller, H.-G., and Wang, J.-L. (2005). Functional data analysis for sparse longitudinal data. _Journal of the American Statistical Association_, 100(470):577-590. Figure 1: Illustration of the need for Step 4 in the fast-GFPCA method using one simulated dataset from the simulation study (\(N=200\), \(J=200\), \(w_{l}=10\), \(\boldsymbol{\phi}(s)=\{\sqrt{2}\sin(2\pi s),\sqrt{2}\cos(2\pi s),\sqrt{2}\sin(4\pi s),\sqrt{2}\cos(4\pi s)\}\)). (A) Plot of the estimated scores from step 3 (golden points) and step 4 (red points) on the x-axis versus the true scores on the y-axis, separately for each of the first four eigenfunctions. The black line represents the identity line. (B) Plot of the scores estimated from step 3 (x-axis) versus those from step 4 (y-axis). (C) Estimated curves on the linear predictor scale. Figure 2: Estimated population mean function (first row) and the first four estimated eigenfunctions (rows \(2-5\)) in NHANES. Model estimates are presented as red (vbFPCA) or blue lines (fast-GFPCA). The two leftmost columns correspond to vbFPCA (Kt=8 in column 1 and Kt=30 in column 2). The six rightmost columns correspond to fast-GFPCA with different input parameters (overlapping versus non-overlapping windows) and window sizes (\(w=6,10,30\)). Estimates of the population mean function based on the modified fast-GFPCA are displayed as dashed lines.
\begin{table}
\begin{tabular}{l l r r r r r r r}
\hline \hline
 & & \multicolumn{5}{c}{Computation Time (mins)} & & \\
Parameters & Modified & Step 1 & Step 2 & Step 3 & Step 4 & Total & AUC & Log-loss \\
\hline
Overlap, \(w=6\) & No & & 16.42 & 0.04 & 174.20 & 190.63 & 0.909 & 0.328 \\
 & Yes & & 16.42 & 0.04 & 10.90 & 27.33 & 0.909 & 0.328 \\
Overlap, \(w=10\) & No & & 23.28 & 0.04 & 184.82 & 208.12 & 0.909 & 0.327 \\
 & Yes & & 23.28 & 0.04 & 10.74 & 34.03 & 0.909 & 0.327 \\
Overlap, \(w=30\) & No & & 55.90 & 0.04 & 184.66 & 240.57 & 0.909 & 0.327 \\
 & Yes & & 55.90 & 0.04 & 10.48 & 66.38 & 0.909 & 0.327 \\
No overlap, \(w=6\) & No & & 2.48 & 0.01 & 188.90 & 191.43 & 0.909 & 0.328 \\
 & Yes & & 2.48 & 0.01 & 11.34 & 13.13 & 0.909 & 0.328 \\
No overlap, \(w=10\) & No & & 2.19 & 0.01 & 185.04 & 187.28 & 0.909 & 0.327 \\
 & Yes & & 2.19 & 0.01 & 10.89 & 13.13 & 0.909 & 0.327 \\
No overlap, \(w=30\) & No & & 1.97 & 0.01 & 197.35 & 199.36 & 0.909 & 0.327 \\
 & Yes & & 1.97 & 0.01 & 11.60 & 13.61 & 0.909 & 0.327 \\
\hline
 & & \multicolumn{5}{c}{vbFPCA (registr::bfpca())} & & \\
Parameters & & \multicolumn{5}{c}{Computation Time (mins)} & AUC & Log-loss \\
\hline
Kt=8 & & & & & & 7.11 & 0.909 & 0.328 \\
Kt=30 & & & & & & 20.86 & 0.910 & 0.326 \\
\hline \hline
\end{tabular}
\end{table} Table 1: Computation times and in-sample predictive performance summaries for fastGFPCA (top rows) estimated under different parameter settings (column 1, overlapping versus non-overlapping windows, window size of \(w=6,10,30\)) using both the primary and modified algorithm (column 2, Modified = No or Yes, respectively), and variational Bayes (vbFPCA, bottom rows) estimated with either \(Kt=8\) or \(Kt=30\). For fastGFPCA, computation times are presented using both the total time and separately by each step of the procedure (step 1 is left blank as computationally this step is effectively instantaneous). AUC and log-loss, presented in the rightmost two columns, are calculated over all minutes of the day. Figure 3: Results from time-specific regression of the estimated log odds of being active obtained from fast-GFPCA using \(w_{l}=6\) with non-overlapping windows and: (1) fast-GFPCA using \(w_{l}=30\) with non-overlapping windows (red lines); (2) vbFPCA with Kt = 8 (green lines); and (3) vbFPCA with Kt = 30 (blue lines). Regressions of the form \(E[Y]=\mathcal{B}_{0}+\mathcal{B}_{1}X\) were fit separately for each minute, with the resulting estimates \(\hat{\mathcal{B}}_{0}\), \(\hat{\mathcal{B}}_{1}\), and the percent variance explained (\(R^{2}\)) plotted separately in each panel (left to right). Deviations from the black dashed line in each panel denote a reduced rate of agreement between the regressor and the results from fast-GFPCA using \(w_{l}=6\) with non-overlapping windows. Figure 4: The estimated population mean function \(\hat{\beta}_{0}(s)\) and eigenfunctions \(\hat{\mathbf{\phi}}(s)\) from 100 simulated binary functional datasets with \(N=500\) subjects and \(J=500\) time points. Model estimates are red lines (vbFPCA) or blue lines (fastGFPCA). vbFPCA estimates from models using either \(Kt=8\) (left column) or \(Kt=30\) (second column) basis functions are compared with six fastGFPCA models. From left to right, the fastGFPCA models (blue columns) are estimated with non-overlapping windows with bin sizes \(w_{l}=10\) and \(w_{l}=50\), and overlapping windows with bin sizes \(w_{l}=10\) and \(w_{l}=50\).
\begin{table}
\begin{tabular}{l l l l l l l l l l l l}
\hline \hline
 & & & \multicolumn{9}{c}{Mean Integrated Squared Error of Latent Means \(\hat{\eta}_{i}(s)\)} \\
\hline \hline
Family & N & J & \multicolumn{6}{c}{Fast-GFPCA} & \multicolumn{2}{c}{vbFPCA} & tsGFPCA \\
 & & & \multicolumn{3}{c}{Overlapping Bins} & \multicolumn{3}{c}{Non-Overlapping Bins} & Kt = 8 & Kt = 30 & \\
 & & & \(w_{l}=6\) & \(w_{l}=10\) & \(w_{l}=50\) & \(w_{l}=6\) & \(w_{l}=10\) & \(w_{l}=50\) & & & \\
\hline
Binomial & 100 & 100 & 2.19 & **2.14** & - & 2.39 & 2.44 & - & 2.15 & 2.87 & 2.18 \\
 & & 500 & 0.55 & 0.52 & **0.49** & 0.53 & 0.53 & 0.58 & **0.49** & 0.63 & - \\
 & & 2000 & 0.17 & 0.15 & **0.13** & 0.15 & 0.15 & **0.13** & **0.13** & 0.16 & - \\
 & 500 & 100 & 2.05 & **2.02** & - & 2.33 & 2.43 & - & **2.02** & 2.15 & 2.03 \\
 & & 500 & 0.5 & 0.49 & **0.47** & 0.5 & 0.5 & 0.58 & **0.47** & 0.5 & - \\
 & & 2000 & 0.15 & 0.14 & **0.12** & 0.14 & 0.14 & 0.13 & **0.12** & 0.13 & - \\
 & 1000 & 100 & 2.05 & 2.04 & - & 2.33 & 2.41 & - & **2.01** & 2.08 & - \\
 & & 500 & 0.5 & 0.48 & **0.47** & 0.51 & 0.5 & 0.58 & **0.47** & 0.48 & - \\
 & & 2000 & 0.14 & 0.13 & **0.12** & 0.14 & 0.13 & 0.13 & **0.12** & **0.12** & - \\
\hline
Poisson & 50 & 100 & 0.52 & **0.49** & - & 0.56 & 0.65 & - & - & - & 13.0 \\
 & & 200 & 0.32 & **0.28** & - & 0.32 & 0.29 & - & - & - & 15.0 \\
 & 100 & 100 & 0.45 & **0.43** & - & 0.5 & 0.59 & - & - & - & 15.9 \\
 & & 200 & 0.25 & **0.24** & - & 0.27 & 0.25 & - & - & - & 20.5 \\
\hline \hline
\end{tabular}
\end{table} Table 2: Mean integrated squared error (MISE) for \(\hat{\eta}_{i}(s)\), the estimated subject-specific latent means in the linear predictor space, across methods and simulation scenarios. In each row the method(s) with the lowest MISE for that simulation scenario is in **bold**. All data summarized in this table were simulated with population mean \(\beta_{0}(s)=0\). An “-” indicates that the model was not evaluated for a given simulation scenario. All values are multiplied by a factor of 10.
\begin{table}
\begin{tabular}{l l l l l l l l l l l l}
\hline \hline
 & & & \multicolumn{9}{c}{Mean Integrated Squared Error of Eigenfunctions \(\mathbf{\phi}(s)\)} \\
\hline \hline
Family & N & J & \multicolumn{6}{c}{Fast-GFPCA} & \multicolumn{2}{c}{vbFPCA} & tsGFPCA \\
 & & & \multicolumn{3}{c}{Overlapping Bins} & \multicolumn{3}{c}{Non-Overlapping Bins} & Kt = 8 & Kt = 30 & \\
 & & & \(w_{l}=6\) & \(w_{l}=10\) & \(w_{l}=50\) & \(w_{l}=6\) & \(w_{l}=10\) & \(w_{l}=50\) & & & \\
\hline
Binomial & 100 & 100 & 0.91 & 0.66 & - & 1.13 & 1.56 & - & **0.47** & 1.13 & 0.48 \\
 & & 500 & 0.66 & 0.55 & 0.46 & 0.57 & 0.59 & 0.58 & **0.28** & 0.39 & - \\
 & & 2000 & 0.64 & 0.54 & **0.42** & 0.55 & 0.52 & **0.42** & 0.72 & 0.88 & - \\
 & 500 & 100 & 0.19 & 0.18 & - & 0.58 & 0.8 & - & **0.12** & 0.26 & 0.13 \\
 & & 500 & 0.13 & 0.11 & **0.08** & 0.13 & 0.11 & 0.32 & **0.08** & 0.11 & - \\
 & & 2000 & 0.14 & 0.12 & **0.1** & 0.13 & 0.13 & **0.1** & 0.56 & 0.62 & - \\
 & 1000 & 100 & 0.11 & 0.08 & - & 0.5 & 0.7 & - & **0.05** & 0.11 & - \\
 & & 500 & 0.06 & 0.04 & **0.03** & 0.07 & 0.07 & 0.26 & 0.05 & 0.07 & - \\
 & & 2000 & 0.05 & 0.04 & **0.03** & 0.05 & 0.04 & 0.04 & 0.52 & 0.66 & - \\
\hline
Poisson & 50 & 100 & 1.03 & **0.94** & - & 1.16 & 1.24 & - & - & - & 10.97 \\
 & & 200 & 0.96 & **0.88** & - & 0.91 & 0.98 & - & - & - & 8.8 \\
 & 100 & 100 & 0.4 & **0.34** & - & 0.52 & 0.66 & - & - & - & 10.79 \\
 & & 200 & 0.37 & **0.34** & - & 0.4 & 0.41 & - & - & - & 8.7 \\
\hline \hline
\end{tabular}
\end{table} Table 3: Mean integrated squared error (MISE) for the estimated population-level latent eigenfunctions, \(\phi_{k}(s);k=1,\ldots,4\). In each row the method(s) with the lowest MISE for that simulation scenario is in **bold**. All data summarized in this table were simulated with population mean \(\beta_{0}(s)=0\). An “-” indicates that the model was not evaluated for a given simulation scenario. All values are multiplied by a factor of 10.
Table 4: Integrated squared error (ISE) for the estimated population-level latent mean, \(\beta_{0}(s)\). In each row the method(s) with the lowest ISE for that simulation scenario is in **bold**. All data summarized in this table were simulated with population mean \(\beta_{0}(s)=0\). An “-” indicates that the model was not evaluated for a given simulation scenario. All values are multiplied by a factor of \(10^{3}\).
\begin{table}
\begin{tabular}{l l l l l l l l l l l l}
\hline \hline
 & & & \multicolumn{9}{c}{Median Computation Times} \\
\hline \hline
Family & N & J & \multicolumn{6}{c}{Fast-GFPCA} & \multicolumn{2}{c}{vbFPCA} & tsGFPCA \\
 & & & \multicolumn{3}{c}{Overlapping Bins} & \multicolumn{3}{c}{Non-Overlapping Bins} & Kt = 8 & Kt = 30 & \\
 & & & \(w_{l}=6\) & \(w_{l}=10\) & \(w_{l}=50\) & \(w_{l}=6\) & \(w_{l}=10\) & \(w_{l}=50\) & & & \\
\hline
Binomial & 100 & 100 & 0.1 & 0.1 & - & 0.1 & **0** & - & **0** & 0.2 & 16.9 \\
 & & 500 & 0.5 & 0.5 & 1.6 & **0.1** & **0.1** & **0.1** & **0.1** & 0.7 & - \\
 & & 2000 & 2 & 2.2 & 6.5 & 0.3 & **0.2** & **0.2** & 0.9 & 4.7 & - \\
 & 500 & 100 & 2.3 & 2.3 & - & 2.2 & 2.2 & - & **0.1** & 0.7 & 71.3 \\
 & & 500 & 2.7 & 3.2 & 7 & 2 & 1.9 & 1.9 & **0.6** & 2.8 & - \\
 & & 2000 & 7.4 & 9.5 & 27 & **3.1** & **3.1** & **3.1** & 5.7 & 24.4 & - \\
 & 1000 & 100 & 20.4 & 20.1 & - & 20.2 & 20.4 & - & **0.4** & 1.3 & - \\
 & & 500 & 18.6 & 20 & 29.9 & 17.2 & 17.4 & 17.2 & **1.7** & 6.4 & - \\
 & & 2000 & 30.1 & 35.1 & 75.9 & 22.3 & 22.6 & 22.4 & **21** & 71.4 & - \\
\hline
Poisson & 50 & 100 & 0.1 & 0.1 & - & **0.02** & **0.02** & - & - & - & 11.2 \\
 & & 200 & 0.2 & 0.21 & - & 0.04 & **0.03** & - & - & - & 40.4 \\
 & 100 & 100 & 0.15 & 0.16 & - & **0.06** & **0.06** & - & - & - & 34.6 \\
 & & 200 & 0.25 & 0.27 & - & 0.07 & **0.06** & - & - & - & 92.0 \\
\hline \hline
\end{tabular}
\end{table} Table 5: Median computation time in minutes across methods and simulation scenarios. In each row the method(s) with the fastest computation time for that simulation scenario is in **bold**. All data summarized in this table were simulated with population mean \(\beta_{0}(s)=0\). An “-” indicates that the model was not evaluated for a given simulation scenario.
2308.04704
A Feature Set of Small Size for the PDF Malware Detection
Machine learning (ML)-based malware detection systems are becoming increasingly important as malware threats increase and become more sophisticated. PDF files are often used as vectors for phishing attacks because they are widely regarded as trustworthy data resources and are accessible across different platforms. Therefore, researchers have developed many different PDF malware detection methods. Performance in detecting PDF malware is greatly influenced by feature selection. In this research, we propose a small feature set that does not require extensive domain knowledge of the PDF file format. We evaluate the proposed features with six different machine learning models. We report a best accuracy of 99.75% when using the Random Forest model. Our proposed feature set, which consists of just 12 features, is one of the most concise in the field of PDF malware detection. Despite its modest size, we obtain results comparable to state-of-the-art methods that employ much larger feature sets.
Ran Liu, Charles Nicholas
2023-08-09T04:51:28Z
http://arxiv.org/abs/2308.04704v2
# A Feature Set of Small Size for the PDF Malware Detection ###### Abstract. Machine learning (ML)-based malware detection systems are becoming increasingly important as malware threats increase and become more sophisticated. PDF files are often used as vectors for phishing attacks because they are widely regarded as trustworthy data resources and are accessible across different platforms. Therefore, researchers have developed many different PDF malware detection methods. Performance in detecting PDF malware is greatly influenced by feature selection. In this research, we propose a small feature set that does not require extensive domain knowledge of the PDF file format. We evaluate the proposed features with six different machine learning models. We report a best accuracy of 99.75% when using the Random Forest model. Our proposed feature set, which consists of just \(12\) features, is one of the most concise in the field of PDF malware detection. Despite its modest size, we obtain results comparable to state-of-the-art methods that employ much larger feature sets. ## 1. Introduction The flexibility and portability of PDF files make them a popular target for malware attacks. Over time, different approaches have been proposed to detect PDF malware. Machine learning and neural network based models have particularly shown promise in these detection tasks. However, the performance of the model relies on the quality of the feature set chosen[5]. Features used in malware detection are grouped into two categories: dynamic and static. Dynamic features are obtained from monitoring program execution, such as APIs called, instructions executed, or IP addresses accessed. Conversely, static features are obtained through static analysis. Both categories have some limitations. Dynamic features must be collected by executing the sample in a sandbox environment, and some sophisticated malware can detect the sandbox environment and consequently alter its behavior. Static features, on the other hand, can be obfuscated by attackers using evasion techniques, making detection challenging. This raises concerns that some commonly used features have been thoroughly investigated by attackers. If attackers have exploited these features to perform a successful evasive attack, PDF malware detection systems built on the same or similar feature sets might become vulnerable. This highlights the importance of using PDF-specific features, which may reduce the attack surface. Earlier research, including PDFrate, has employed PDF-specific features such as the number and occurrence of specific PDF objects for model training, and obtained promising accuracy in PDF malware detection[9][7]. Nevertheless, most of these features require a large amount of domain knowledge to extract. Moreover, the corresponding feature sets are large and complex, which may potentially lead to over-fitting. Consequently, it is desirable to have a simple and small PDF-specific feature set that can achieve detection accuracy comparable to more complex features. In this paper, we limit the scope of PDF-specific features to those that are unique to PDF files, hence excluding most dynamic features, such as system call sequences and API call sequences, and some static features, such as binary code. Furthermore, we exclude features that need extensive domain knowledge of PDF files, which means that most keyword-based features, including JavaScript code in PDF files, are not used in our research. PDF files can be viewed as a set of interconnected objects.
Some work, such as Hidost's, extracts the tree structure of the PDF and uses the binary counts of these paths as features[15]. Our previous work showed that such tree structures contain sequential relationships and can be used to train a time series model for PDF malware detection. In this paper, we propose a novel set of graph features to accurately detect PDF malware. We investigated multiple types of graph features by parsing a PDF file into a tree representation. For specific feature types, our research demonstrated statistical differences between benign and malicious PDFs. Using the proposed feature set to train a machine learning model, we show empirically that the model can successfully detect 99.75% of PDF malware samples with only 12 features. The primary contribution of our study is the introduction of one of the smallest PDF-specific feature sets. We have conducted a thorough investigation and performance analysis of ML models based on these proposed features. Furthermore, we benchmarked our results against the state of the art, indicating that our feature set is promising. ### PDF File Structure A PDF file is built from interconnected modules known as objects and is made up of four parts: a header, a body, a cross-reference table, and a trailer, as shown in Figure 1. * Header: The header contains information about the PDF version and is marked with the '%' symbol. * Body: The body, as the primary section of the PDF file, consists of objects that define all the operations performed by the file. These objects, which include both indirect and direct objects, characterize the functionality of each object using keywords marked with '/'. For example, '/Length' and '/Filter' are such keywords. Indirect objects, which start with a numeric identifier such as '4 0 obj', contain information in a dictionary and can be referenced by other objects. For example, an object starting with '1 0 obj' can be referred to by other objects via the reference '1 0 R', where '1' is the sequence number. This structure allows for the interconnection of objects. The generation number, usually set to 0, is represented by the second digit, although it can be another number in some cases. Objects usually end with the 'endobj' marker. A specific type of object, known as a stream, starts with the keyword 'stream' and ends with 'endstream' and 'endobj'. The content of the stream object, which includes elements like images and text, is encoded using filters. * Cross-reference table: This table contains the location references for each object. A PDF parser uses this table to find each object reference in memory for parsing. The cross-reference table, marked by 'xref' followed by numbers, indicates the total number of objects in the references with its last number. For instance, '0 16' indicates a total of 16 objects in the cross-reference table. * Trailer: The trailer contains information about the file, such as the number of objects, using the keyword '/Size'. It also contains a reference to the root object using the keyword '/Root' and metadata using the keyword '/Info'. The file structure organizes the logical access order for the PDF file. When a PDF reader application accesses a PDF file, it first locates the trailer to find the root object. Then the parser uses the cross-reference table to locate and parse each indirect object, decompressing all data. In this way, the content of the PDF is made visible to the user (a small sketch of this access order is given below).
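As a small illustration of this access order, the following sketch (Python; it assumes the pdfrw library's attribute-style access to PDF dictionaries, and "sample.pdf" is a hypothetical file name) starts from the trailer, follows the '/Root' reference to the catalog, and descends into the page tree:

```python
from pdfrw import PdfReader

doc = PdfReader("sample.pdf")      # parses the trailer and cross-reference table
print(doc.keys())                  # trailer keys, e.g. /Root, /Info, /Size
catalog = doc.Root                 # the document's root (catalog) object
print(catalog.Type)                # typically /Catalog
pages = catalog.Pages              # page-tree node referenced by the catalog
print(pages.Count, [kid.Type for kid in pages.Kids])
```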
When a modification happens in the PDF file, like inserting a page into the PDF, a new body, trailer, and cross-reference table will be appended to the original file accordingly, and a new version number will be generated. However, because the cross-reference table sets a strict boundary for each object, removing objects from the previous version can cause errors. Thus, an attacker is more likely to add objects than to remove them. Each object is labeled with a number, allowing it to be referenced by other objects. Object information is stored in the cross-reference table, while the trailer indicates the root object's number and the cross-reference table's location. By querying the cross-reference table, the catalog object can be found. The catalog object serves as the entire document's root object, containing the PDF document's outline and a reference to the page group object. Figure 1. PDF Structure ### Related Work In the field of machine learning-based PDF malware detection, two primary types of features are commonly used: static and dynamic features. Dynamic features are obtained by running the PDF in a controlled environment, which allows for the collection of PDF runtime behaviors such as sequences of system calls and API calls (Ghez et al., 2017). However, most dynamic features are typically not distinctive to PDF files, and the building of a robust sandbox environment increases the complexity of the detection process. Despite the promising results in malware detection tasks using dynamic features, our work primarily focuses on the use of static features. Previously used dynamic and static features require significant amounts of domain knowledge for feature extraction. In contrast, our goal is to employ features that demand minimal domain knowledge and are unique to PDF files. There are three categories of static features. The first type of static features is obtained from keyword-based analysis, which involves searching for predefined keywords such as '/JavaScript', '/OpenAction', '/GoTo', '/URI', and '/RichMedia'. These keywords are often associated with malicious code injection. Features can include the number of keywords or simply their presence. Malicious payloads are usually inserted into objects associated with such keywords, making them useful features for PDF malware detection. The second type of static features is obtained through tree structure-based analysis. This method constructs the object tree representation to capture the connections between items. The tree structure can provide insights into the hierarchical connections between the objects of the PDF file, which may reveal malware-related behaviors. Lastly, static features can be obtained using code-based analysis, which focuses on malicious strings and functions in JavaScript code. As PDF malware often manipulates JavaScript code to execute malicious activities, the presence of specific code strings or functions may indicate the presence of malware. PDF malware detectors may employ one or more of the features described above. One such example is Hidost[15], a system that uses the Poppler PDF parser[2] to extract tree structural paths of objects in a PDF file, which are then used as features in the classification process. Hidost is implemented with two different models: Support Vector Machine (SVM) and Random Forest (RF)[14][15]. SVM is a supervised learning model that creates an optimal hyperplane to separate differently labeled samples.
RF is a meta-estimator that integrates different decision trees to improve classification accuracy. The researchers trained their model using a dataset of 10,000 randomly selected files, maintaining a malicious-to-benign ratio of 1:1. The complete PDF dataset comprised 407,037 benign and 32,567 malicious files. The Hidost system achieves 99.8% accuracy and less than a 0.06% false positive rate for both models. The PDFrate classifier is implemented using an RF algorithm with 99% accuracy and a 0.2% false positive rate over the Contagio malware dataset[9]. PDFrate uses as features the metadata, which includes the names of the files' authors, the size of the file, its location, and the counts of certain keywords, together with the content of the PDF files. The authors manually define the feature set, which has 202 features in all, including counts for different keywords and specific fields in the PDF. Examples include the number of characters in the author field, the quantity of 'endobj' keywords, the total number of pixels in all the photos, the quantity of JavaScript markers, etc. The Mimicus implementation of PDFrate, claiming to be a close approximation, only makes use of 135 of these features. The two versions of PDFrate, PDFrate-v1 and PDFrate-v2, each use a different machine learning model[18][12]. The classifiers in PDFrate-v2 use mutual agreement to implement an ensemble technique. The term "uncertain" is introduced into the classifier voting, where rates of 25-50% are regarded as benign uncertainty and rates of 50-75% as malicious uncertainty. PjScan is a tool that concentrates on examining JavaScript code[4]. It uses Poppler[2] as a parser to extract tokens from JavaScript code, and a one-class SVM as a classifier. PjScan achieved 85% detection accuracy. Malware Slayer is available in two variants[8][7]. The original Slayer extracts keyword features from PDF files using a pattern recognition method and labels samples using a random forest algorithm. Slayer NEO uses the PeePDF[16] and Origami[13] parsers to extract structural data as features and the AdaBoost algorithm for classification. Maryam et al. proposed a PDF malware detection system based on stacking learning[3]. Their approach is based on the idea that combining different classifiers can produce improved accuracy, as each classifier operates under unique data assumptions. Their feature set included ten general features, such as PDF size and title character count, as well as structural features such as the number of keywords and objects. These extracted features were initially fed into a base layer consisting of SVM, Random Forest, MLP, and AdaBoost. The prediction outputs from this layer were subsequently fed into a meta-layer featuring Logistic Regression, K-Nearest Neighbors, and Decision Trees. Their reported metrics on a hybrid dataset that included the Contagio dataset were impressive: an accuracy of 99.98%, precision of 99.84%, recall of 99.89%, and an F1 score of 99.86%. ## 2. Statistical Analysis of a Feature Set We now introduce our proposed feature set. We use the Contagio dataset, which includes 9,000 benign and 10,982 malicious PDF samples, to extract features[9]. This dataset was chosen due to its accessibility and its large number of labeled samples. We used the pdfrw library to extract tree structure paths for each file, and while some samples were corrupted, we were able to successfully extract path objects from 7,396 benign PDF files and 10,814 malicious PDF files[17].
Our feature selection strategy is to minimize the domain knowledge required during feature extraction. Consequently, despite the promising results achieved by keyword-based features in other research, we chose not to use them. We have already noted that a PDF can be represented as a tree structure of objects, prompting our investigation into graph features. Our selection of features was facilitated by comparing mean values, standard deviations with 95% confidence intervals (CI), and quantiles. This led us to select the following features (a sketch of how they can be computed follows the list):

* Distribution of children per node: the average (avg children), median (median children) and variance of children per node (var children).
* Number of leaves in the tree (num leaves).
* Number of nodes (num nodes).
* The depth of the tree (depth).
* Average degree (avg degree).
* Degree assortativity coefficient (degree assortativity), i.e., the tendency for nodes of high or low degree to be connected to other nodes of high or low degree, respectively.
* The average shortest path length (avg shortest path).
* How nodes in a graph tend to cluster together (avg clustering coefficient).
* Graph density (density).
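As a hedged sketch (not the exact extraction code), the snippet below derives these features from the extracted parent-child edges with networkx and the standard library; on a pure tree the clustering coefficient is zero and the assortativity can be undefined for star-like graphs, so implementations should guard against such cases.

```python
import networkx as nx
from statistics import mean, median, pvariance

def graph_features(edges):
    """Compute the proposed feature set from (parent, child) object-tree edges."""
    T = nx.DiGraph(edges)                 # directed tree: parent -> child
    G = T.to_undirected()
    children = [T.out_degree(n) for n in T.nodes]
    degrees = [d for _, d in G.degree()]
    root = next(n for n in T.nodes if T.in_degree(n) == 0)
    depths = nx.single_source_shortest_path_length(T, root)
    return {
        "avg children": mean(children),
        "median children": median(children),
        "var children": pvariance(children),
        "num leaves": sum(1 for n in T.nodes if T.out_degree(n) == 0),
        "num nodes": T.number_of_nodes(),
        "depth": max(depths.values()),
        "avg degree": mean(degrees),
        "degree assortativity": nx.degree_assortativity_coefficient(G),
        "avg shortest path": nx.average_shortest_path_length(G),
        "avg clustering coefficient": nx.average_clustering(G),
        "density": nx.density(G),
    }
```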
We applied statistical analysis to investigate the proposed features of benign and malicious PDF files. The key statistical metrics in our investigation were: the 75% quartile, the median (50%), the 25% quartile, the 95% CI of the mean, and the 95% CI of the standard deviation. Note that Tables 1, 2, 3 and 4 show a significant difference in the proposed feature set between benign and malicious PDFs. The statistical difference between benign and malicious PDFs is further visualized in the box plots (Figure 2 and Figure 3), providing clear graphical representations of these differences. We excluded the median degree, depth, and median children from the box plots since they are discrete values.

Figure 2. Box plot of the benign PDFs. Features are on the y-axis.

Figure 3. Box plot of the malicious PDFs. Features are on the y-axis.

We report the 95% Confidence Interval (95% CI) of the Precision and Recall in Table 6 and the 95% CI of the F1 score in Table 7. The recall value indicates the proportion of samples that have been correctly classified. We report the best recall value for the malicious class with Random Forest, indicating that 99.73% to 99.77% of the malicious samples are correctly detected, while the benign class's recall value is 0.9987 - 0.9991, which means that 99.87% to 99.91% of benign samples are correctly classified.

Table 6. 95% CI Precision and Recall for each label.

| Classifier | Label | Precision | Recall |
| --- | --- | --- | --- |
| XGBoost | Malicious | 0.9939 - 0.9977 | 0.9958 - 0.9966 |
| XGBoost | Benign | 0.9982 - 0.9987 | 0.9973 - 0.9991 |
| Naive Bayes | Malicious | 0.8769 - 0.9564 | 0.8000 - 0.9750 |
| Naive Bayes | Benign | 0.8698 - 0.9683 | 0.7850 - 0.9824 |
| Multi-layer Perceptron | Malicious | 0.9871 - 0.9911 | 0.9811 - 0.9940 |
| Multi-layer Perceptron | Benign | 0.9946 - 0.9963 | 0.9946 - 0.9963 |
| Decision Tree (J48) | Malicious | 0.9963 - 0.9973 | 0.9946 - 0.9982 |
| Decision Tree (J48) | Benign | 0.9972 - 0.9980 | 0.9959 - 0.9986 |
| Random Forest | Malicious | 0.9966 - 0.9982 | 0.9973 - 0.9977 |
| Random Forest | Benign | 0.9987 - 0.9991 | 0.9987 - 0.9991 |
| Simple Logistic | Malicious | 0.9739 - 0.9879 | 0.9820 - 0.9824 |
| Simple Logistic | Benign | 0.9878 - 0.9885 | 0.9831 - 0.9917 |

Table 7. 95% CI F1 Score for each label.

| Classifier | Label | F1 Score |
| --- | --- | --- |
| XGBoost | Malicious | 0.9953 - 0.9968 |
| XGBoost | Benign | 0.9980 - 0.9986 |
| Naive Bayes | Malicious | 0.8712 - 0.9234 |
| Naive Bayes | Benign | 0.8671 - 0.9227 |
| Multi-layer Perceptron | Malicious | 0.9861 - 0.9906 |
| Multi-layer Perceptron | Benign | 0.9946 - 0.9963 |
| Decision Tree (J48) | Malicious | 0.9959 - 0.9972 |
| Decision Tree (J48) | Benign | 0.9970 - 0.9979 |
| Random Forest | Malicious | 0.9970 - 0.9979 |
| Random Forest | Benign | 0.9987 - 0.9991 |
| Simple Logistic | Malicious | 0.9781 - 0.9849 |
| Simple Logistic | Benign | 0.9854 - 0.9901 |

We report the Accuracy, True Positive Rate (TPR), False Positive Rate (FPR), False Negative Rate (FNR) and True Negative Rate (TNR) in Table 8. The results indicate that our proposed features have good overall performance. We report the best accuracy of 0.9975 when using the Random Forest model.

Table 8. Accuracy, TPR, FPR, FNR and TNR with the proposed feature set. We report the best result obtained by applying the Random Forest model.

| Classifier | Accuracy | TPR | FPR | FNR | TNR |
| --- | --- | --- | --- | --- | --- |
| XGBoost | 0.9964 | 0.9980 | 0.0042 | 0.0020 | 0.9958 |
| Naive Bayes | 0.9006 | 0.7850 | 0.0268 | 0.2150 | 0.9732 |
| Multi-layer Perceptron | 0.9926 | 0.9919 | 0.0088 | 0.0081 | 0.9912 |
| Decision Tree | 0.9967 | 0.9959 | 0.0019 | 0.0041 | 0.9981 |
| Random Forest | 0.9975 | 0.9980 | 0.0023 | 0.0020 | 0.9977 |
| Simple Logistic | 0.9822 | 0.9817 | 0.0180 | 0.0183 | 0.9820 |

### Comparison with Other Works

For model performance comparison, we select models with results reported in the literature for direct comparison. Table 9 presents the comparison of the evaluation metrics with other work. The results show that our approach outperforms other work while using a significantly smaller feature set. The main weakness of the feature set we propose is that it is vulnerable to some evasive attacks: detectors that employ this feature set can be compromised through the insertion, deletion, or alteration of a subtree. One potential strategy to enhance the robustness of the detector is to enrich the feature set and diversify the feature types employed. We also observe that the parser did not successfully parse all objects for some malware specimens; the effectiveness of our approach is influenced by the quality of the parsed PDF objects.

## 4. Conclusion

In this work, we introduced a new feature set for PDF malware detection based on the PDF tree structure. Our work aimed to address the need for a small feature set that does not require extensive domain knowledge of the PDF file format. Our work might serve as a baseline for future investigation.
We do not expect our work to replace the currently used static and dynamic features now or in the future, but it might inspire researchers with an alternative way to build a malware detection model. In future work, we plan to explore additional features to improve the overall performance and enhance the robustness.
2305.04920
Shedding Light on Microscopic Details: 2D Spectroscopy of 1D Quantum Ising Magnets
The identification of microscopic models describing the low-energy properties of correlated materials has been a central goal of spectroscopic measurements. We demonstrate how 2D non-linear spectroscopy can be used to distinguish effective spin models whose linear responses show similar behavior. Motivated by recent experiments on the quasi-1D Ising magnet CoNb$_2$O$_6$, we focus on two proposed models, the ferromagnetic twisted Kitaev chain with bond dependent interactions and the transverse field Ising model. The dynamical spin structure factor probed in linear response displays similar broad spectra for both models from their fermionic domain wall excitations. In sharp contrast, the 2D non-linear spectra of the two models show clear qualitative differences: those of the twisted Kitaev model contain off-diagonal peaks originating from the bond dependent interactions and transitions between different fermion bands absent in the transverse field Ising model. We discuss the different signatures of spin fractionalization in integrable and non-integrable regimes of the models and their connection to experiments.
GiBaik Sim, Frank Pollmann, Johannes Knolle
2023-05-08T17:56:06Z
http://arxiv.org/abs/2305.04920v1
# Shedding Light on Microscopic Details: 2D Spectroscopy of 1D Quantum Ising Magnets

###### Abstract

The identification of microscopic models describing the low-energy properties of correlated materials has been a central goal of spectroscopic measurements. We demonstrate how 2D non-linear spectroscopy can be used to distinguish effective spin models whose linear responses show similar behavior. Motivated by recent experiments on the quasi-1D Ising magnet CoNb\({}_{2}\)O\({}_{6}\), we focus on two proposed models, the ferromagnetic twisted Kitaev chain with bond dependent interactions and the transverse field Ising model. The dynamical spin structure factor probed in linear response displays similar broad spectra for both models from their fermionic domain wall excitations. In sharp contrast, the 2D non-linear spectra of the two models show clear qualitative differences: those of the twisted Kitaev model contain off-diagonal peaks originating from the bond dependent interactions and transitions between different fermion bands absent in the transverse field Ising model. We discuss the different signatures of spin fractionalization in integrable and non-integrable regimes of the models and their connection to experiments.

## I Introduction

The possibility to understand the microscopics of correlated quantum materials is closely connected to advances in spectroscopic techniques [1]. In addition to the traditional use of linear response probes, _two-dimensional coherent spectroscopy_ (2DCS) [2; 3] promises to provide additional information because of its ability to access multi-time correlation functions sensitive to interactions between excitations. Probing the non-linear optical response of the target system, 2DCS has been used to study vibrational and electronic excitations in molecules [4] and exciton resonances in quantum wells [5; 6]. Additionally, recent advances with terahertz sources put the technique in the proper energy ranges for studying optical excitations of magnetic materials [7]. Unlike conventional one-dimensional (1D) spectroscopy and standard inelastic neutron scattering, 2DCS reveals not only the linear response of spin flips but has more direct access to the interplay of intrinsic excitations of magnets. Along this line, it was theoretically proposed that such interplay can be used to identify the presence of fractionalized particles [8; 9; 10; 11], their self-energies [12] and the effect of interactions between them [13; 14].

In this paper, we show that 2DCS can be a powerful tool for quantifying the microscopic model parameters of quantum magnets. Concretely, we consider 2DCS as a means for distinguishing between two alternative model descriptions of Ising chain magnets, the ferromagnetic twisted Kitaev model (TKM) with bond dependent spin exchange terms and the transverse field Ising model (TFIM). In both cases, spin flip excitations fractionalize into domain wall excitations leading to very similar linear response spectra but distinct qualitative differences in 2DCS. Our study is motivated by previous works [15; 16], which proposed that the field dependent behavior of CoNb\({}_{2}\)O\({}_{6}\), long believed to be the best example of an Ising chain magnet [17; 18; 19; 20; 21], is in fact well captured by the TKM. We first confirm that the linear response of the TKM and TFIM is indeed similar, which complicates the identification of the microscopic description.
Second, as our main result, we establish that there are significant differences between the 2D spectra of the TKM and TFIM: (i) the magnetic second order susceptibility \(\chi^{(2)}_{xxx}\) vanishes for the TKM due to the presence of a \(\hat{z}\)-glide symmetry [15] while it is finite for the TFIM; (ii) the third order susceptibility \(\chi^{(3)}_{xxxx}\) contains off-diagonal peaks from inter-band fermion transitions for the TKM which are absent for the TFIM. Taking into account the experimentally relevant canting angle [22] between the crystal axis \(\hat{a}\) and local axis \(\hat{x}\) in CoNb\({}_{2}\)O\({}_{6}\), we also compute the easier accessible 2D spectrum, \(\chi^{(2)}_{yyy}\), of the TKM. Using infinite matrix-product state (MPS) techniques [23; 24; 25], we find that \(\chi^{(2)}_{yyy}\) becomes finite and contains off-diagonal signals in the presence of an external transverse field along \(\hat{y}\), which breaks the \(\hat{z}\)-glide symmetry and the integrability of the model. We also confirm that such peaks persist in the presence of additional \(XX\)-type interactions, which can be relevant in CoNb\({}_{2}\)O\({}_{6}\)[15; 26; 27].

Our paper is structured as follows. We first briefly review the TKM and TFIM in Sec. II and confirm the similarity of their linear response spectra in Sec. III. We then discuss their non-linear response and compare the differences of the 2DCS spectra in Sec. IV. In Sec. V, we investigate the effect of glide symmetry breaking on the 2DCS spectra of the TKM and discuss the relevance of our findings for CoNb\({}_{2}\)O\({}_{6}\). We conclude with a discussion and outlook in Sec. VI.

## II Two microscopic models

We first introduce the TKM [16] described by the following Hamiltonian

\[H_{\text{TKM}}=-J\sum_{i=1}^{L^{\prime}}\big{[}\tilde{\sigma}_{2i-1}(\theta)\tilde{\sigma}_{2i}(\theta)+\tilde{\sigma}_{2i}(-\theta)\tilde{\sigma}_{2i+1}(-\theta)\big{]}. \tag{1}\]

Here, \(J>0\) is the ferromagnetic exchange parameter, \(L^{\prime}=L/2\) is the number of unit cells, each containing two sites, and \(\tilde{\sigma}_{i}(\theta)\!\equiv\!\cos(\theta)\,\sigma_{i}^{z}+\sin(\theta)\,\sigma_{i}^{y}\). Such linear combinations imply that the interaction on each odd (even) bond is characterized by the Ising easy axis with an angle \(\pm\theta\)[28]. The TKM respects two different glide symmetries, \(G_{y}\equiv T_{c}e^{(i\pi/2)\sum_{i}^{L}\sigma_{i}^{y}}\) and \(G_{z}\equiv T_{c}e^{(i\pi/2)\sum_{i}^{L}\sigma_{i}^{z}}\), where \(T_{c}\) is a translation operator by half a unit cell [15]. When \(0<\theta<\pi/4\), the TKM admits a doubly degenerate ferromagnetic ground state, polarized along the easy axis \(\hat{z}\). In this regime, the ground state spontaneously breaks \(G_{y}\), but still preserves one global symmetry \(G_{z}\). Below we fix \(\theta\!=\!\pi/12\) for the TKM which is close to the value used in Ref. [16] to describe CoNb\({}_{2}\)O\({}_{6}\). In this case, the elementary excitations of the TKM are domain walls between the two degenerate ground states, similar to the ferromagnetic TFIM with interactions given by

\[H_{\text{TFIM}}\!=\!-J\sum_{i=1}^{L}\sigma_{i}^{z}\sigma_{i+1}^{z}\!-\!h_{x}\sum_{i=1}^{L}\sigma_{i}^{x}. \tag{2}\]

Below, we fix \(h_{x}/J\!=\!1/2\) at which the model also stabilizes a doubly degenerate ferromagnetic ground state. Performing the Jordan-Wigner transformation, which maps the Pauli operators to fermion operators, and a Bogoliubov transformation, we can rewrite both the TKM and TFIM as non-interacting fermionic models.
The TKM then reads,

\[H_{\text{TKM}}\!=\!\sum_{k>0}l_{k}(\alpha_{k}^{\dagger}\alpha_{k}-\alpha_{-k}\alpha_{-k}^{\dagger})+\lambda_{k}(\beta_{k}^{\dagger}\beta_{k}-\beta_{-k}\beta_{-k}^{\dagger}). \tag{3}\]

Here, \(\alpha_{k}\) and \(\beta_{k}\) represent the two different bands with dispersion relations \(2l_{k}\) and \(2\lambda_{k}\) respectively for the TKM in momentum space representation (See Appendix A for details). The Hamiltonian in Eq. (3) can be interpreted as a four level system with a momentum pair \(\pm k\), where the energies of the states are \(-\lambda_{k}\), \(-l_{k}\), \(l_{k}\), and \(\lambda_{k}\). In the following, we denote such states by \(|0\rangle\), \(|1\rangle\), \(|2\rangle\), and \(|3\rangle\). Thus the TKM corresponds to an ensemble of decoupled four level systems. The TFIM in fermionic formulation reads,

\[H_{\text{TFIM}}\!=\!\sum_{k>0}\epsilon_{k}(\gamma_{k}^{\dagger}\gamma_{k}-\gamma_{-k}\gamma_{-k}^{\dagger}), \tag{4}\]

where \(\gamma_{k}\) represents a single band with dispersion \(2\epsilon_{k}\). The TFIM corresponds to an ensemble of decoupled two level systems with the energy gap \(2\epsilon_{k}\) and is clearly distinct from the TKM.

## III Linear response structure factor

We first briefly compare the linear response, i.e., the dynamical structure factor

\[S_{xx}(k,\omega)=\frac{1}{4}\int\mathrm{d}t\sum_{j}e^{i\omega t-ik(r_{j}-r_{L/2})}\langle\sigma_{j}^{x}(t)\sigma_{L/2}^{x}(0)\rangle \tag{5}\]

of the TKM and TFIM [29]. In both systems, a spin flip excites a pair of domain walls (fermions) with net momentum \(k\). Such fractionalization of the excitations only yields a broad continuous spectrum. In Fig. 1, we plot \(S_{xx}(k,\omega)\) computed using MPS simulations with open boundary conditions (See Appendix B for details) [30]. Remarkably, the two models show a qualitatively similar spectrum, indicating the difficulty of using a conventional probe like inelastic neutron scattering for distinguishing between the two model descriptions.

Figure 1: (color online) (a,b) Dynamical spin structure factor \(S_{xx}(k,\omega)\) of the ferromagnetic TKM and TFIM. \(k\) and \(\omega\) represent the momentum and frequency, respectively. The MPS simulations are done for an open chain using the time evolving block decimation method [31; 32], which provides an efficient way to perform a real time evolution in 1D spin systems.

## IV Non-linear response

Next, we focus on the non-linear 2DCS response and introduce a two-pulse setup, as previously considered in Ref. [8] for the TFIM. In this setup, two magnetic pulses \(B_{0}\) and \(B_{\tau}\), both polarized along the \(\hat{\alpha}\) direction,

\[\mathbf{B}(T)=\mathcal{B}_{0}\delta(T)\hat{\alpha}+\mathcal{B}_{\tau}\delta(T-\tau)\hat{\alpha}, \tag{6}\]

arrive at the target system successively at time \(T\!=\!0\) and \(T\!=\!\tau\). Here, \(\mathcal{B}_{0,\tau}\) are the strengths of the pulses over the spatial areas where the pulses \(B_{0,\tau}\) reach the system. The two pulses induce a magnetization \(M_{0,\tau}^{\alpha}(T)\) of the system measured at time \(T\!=\!\tau+t\). To subtract the induced magnetization from the linear response, two additional experiments each with a single pulse \(B_{0}\) or \(B_{\tau}\) are performed which measure \(M_{0}^{\alpha}(T)\) or \(M_{\tau}^{\alpha}(T)\), respectively.
The non-linear magnetization \(M_{\mathrm{NL}}^{\alpha}(T)\equiv M_{0,\tau}^{\alpha}(T)-M_{0}^{\alpha}(T)-M_{\tau}^{\alpha}(T)\) can be expanded as

\[M_{\mathrm{NL}}^{\alpha}(T) = \mathcal{B}_{0}\mathcal{B}_{\tau}\chi_{\alpha\alpha\alpha}^{(2)}(t,\tau+t)+(\mathcal{B}_{0})^{2}\mathcal{B}_{\tau}\chi_{\alpha\alpha\alpha\alpha}^{(3,1)}(t,\tau+t,\tau+t)+\mathcal{B}_{0}(\mathcal{B}_{\tau})^{2}\chi_{\alpha\alpha\alpha\alpha}^{(3,2)}(t,t,\tau+t)+O(B^{4}) \tag{7}\]

and directly measures the second and higher order magnetic susceptibilities. Due to its exact solvability, we can analytically calculate the \(\chi_{xxx}^{(2)}\) and \(\chi_{xxxx}^{(3)}\) susceptibilities of the TKM (See Appendix C for details). A formulation for the TFIM is explicitly given in Ref. [8].

### 2nd order susceptibility

We start with the second order susceptibility, which is given by

\[\chi_{xxx}^{(2)}(t,\tau+t) = \frac{-\Theta(t)\Theta(\tau)}{L}\langle[[M^{x}(\tau+t),M^{x}(\tau)],M^{x}(0)]\rangle, \tag{8}\]

where \(M^{x}(T)\equiv\frac{1}{2}\sum_{i}e^{iHT}\sigma_{i}^{x}e^{-iHT}\) represents the total magnetization along the \(\hat{x}\) direction in the Heisenberg picture [8]. To formulate \(\chi_{xxx}^{(2)}\) of the TKM, one needs to calculate expectation values of the form

\[\langle g|M^{x}(T_{1})M^{x}(T_{2})M^{x}(T_{3})|g\rangle. \tag{9}\]

Here, \(|g\rangle\) represents the ferromagnetic ground state of the TKM, which is invariant under the \(\hat{z}\)-glide operation: \(G_{z}|g\rangle=|g\rangle\). At the same time, the operator \(M^{x}(T_{1})M^{x}(T_{2})M^{x}(T_{3})\) is odd under the same glide operation: \(G_{z}M^{x}(T_{1})M^{x}(T_{2})M^{x}(T_{3})G_{z}^{\dagger}=-M^{x}(T_{1})M^{x}(T_{2})M^{x}(T_{3})\). The invariance of \(|g\rangle\) and the oddness of the operator \(M^{x}(T_{1})M^{x}(T_{2})M^{x}(T_{3})\) under \(G_{z}\) make Eq. (9) vanish, and \(\chi_{xxx}^{(2)}\) is zero [Fig. 2(a)] unless additional symmetry-breaking terms are added to the model. Such a non-linear spectrum of the TKM is clearly distinct from the one of the TFIM [Fig. 2(d)] which contains a strong terahertz rectification signal in \(\chi_{xxx}^{(2)}\)[8]. When the second order susceptibility vanishes, the third or higher order susceptibilities dominate the non-linear response of the system.

### 3rd order susceptibility

We now turn to the next leading order response, i.e., \(\chi_{xxxx}^{(3,1)}\) and \(\chi_{xxxx}^{(3,2)}\). Using the basis introduced in Eq. (3), we can also express \(M^{x}(T)\) in terms of fermion operators and obtain the formula for the third order susceptibility of the TKM analytically:

\[\chi_{xxxx}^{(3,1)}(t,\tau+t,\tau+t) = \frac{\Theta(t)\Theta(\tau)}{L}\sum_{k>0}P_{k}^{(1)}+P_{k}^{(2)}+P_{k}^{(3)}\]

with

\[P_{k}^{(1)} = -8c_{k}^{4}\big{[}\sin\big{(}2l_{k}t\big{)}+(l_{k}\leftrightarrow\lambda_{k})\big{]},\]
\[P_{k}^{(2)} = 8(c_{k}^{4}-c_{k}^{2})\big{[}\sin\big{(}2l_{k}t+(l_{k}+\lambda_{k})\tau\big{)}+(l_{k}\leftrightarrow\lambda_{k})\big{]},\]
\[P_{k}^{(3)} = 8(c_{k}^{2}-c_{k}^{4})\big{[}\sin\big{(}(l_{k}-\lambda_{k})t+(l_{k}+\lambda_{k})\tau\big{)}+(l_{k}\leftrightarrow\lambda_{k})\big{]},\]

where \(c_{k}\) is the matrix element of the magnetization along the \(\hat{x}\) direction in the basis introduced in Eq. (3) (See Appendix C for definitions). In \(\chi_{xxxx}^{(3,1)}\), \(P_{k}^{(1-3)}\) represent different two-time evolution paths of a fermion pair with momenta \(\pm k\) excited by the pulses. Employing the four level picture of the fermionic Hamiltonian in momentum space, we can interpret \(P_{k}^{(1)}\) as follows.
The second pulse in the two-pulse setup induces transitions between \(|1\rangle\) and \(|2\rangle\), resulting in an oscillatory signal with frequency \(2l_{k}\) throughout the time interval \(t\) between the second pulse and the measurement. This signal is encoded in the first term of \(P_{k}^{(1)}\). The term is not oscillatory in \(\tau\) and gives rise to a peak at \((\omega_{t},\omega_{\tau})\!=\!(2l_{k},0)\) in the frequency domain [Fig. 2(b)]. Interpreting \(\omega_{t}\) as the detecting frequency and \(\omega_{\tau}\) as the pumping frequency, the signal can be understood as a pump probe signal. Such a signal is also contained in \(\chi_{xxxx}^{(3,1)}\) of the TFIM [Fig. 2(e)]. \(P_{k}^{(2)}\) contains terms which are oscillatory both in \(t\) and \(\tau\). Such terms produce non-rephasing like signals at \((\omega_{t},\omega_{\tau})\!=\!(2l_{k},l_{k}\!+\!\lambda_{k})\) and \((\omega_{t},\omega_{\tau})\!=\!(2\lambda_{k},l_{k}\!+\!\lambda_{k})\), giving rise to off-diagonal peaks in the first frequency quadrant as shown in Fig. 2(b). \(P_{k}^{(3)}\) is distinct from \(P_{k}^{(1,2)}\) in that it contains a term where \(t\) and \(\tau\) come with opposite signs: the dephasing process during \(\tau\) is followed by the rephasing process during \(t\). This process induces rephasing like signals which appear as off-diagonal peaks in the fourth quadrant, mirroring the energy range of the corresponding fermion pairs [Fig. 2(b)].

Qualitatively different signals are encoded in \(\chi_{xxxx}^{(3,2)}\), which is given as

\[\chi_{xxxx}^{(3,2)}(t,t,\tau+t)\!=\!\frac{\Theta(t)\Theta(\tau)}{L}\sum_{k>0}Q_{k}^{(1)}\!+\!Q_{k}^{(2)}\!+\!Q_{k}^{(3)}\!+\!Q_{k}^{(4)}\]

with

\[Q_{k}^{(1)}\!=\!-4c_{k}^{2}\big{[}\sin\big{(}2l_{k}(t+\tau)\big{)}\!+\!(l_{k}\leftrightarrow\lambda_{k})\big{]},\]
\[Q_{k}^{(2)}\!=\!-4c_{k}^{4}\big{[}\sin\big{(}2l_{k}(t-\tau)\big{)}\!+\!(l_{k}\leftrightarrow\lambda_{k})\big{]},\]
\[Q_{k}^{(3)}\!=\!4(c_{k}^{4}-c_{k}^{2})\big{[}\sin\big{(}2\lambda_{k}t+2l_{k}\tau\big{)}\!+\!(l_{k}\leftrightarrow\lambda_{k})\big{]},\]
\[Q_{k}^{(4)}\!=\!8(c_{k}^{2}-c_{k}^{4})\big{[}\sin\big{(}(\lambda_{k}-l_{k})t+2l_{k}\tau\big{)}\!+\!(l_{k}\leftrightarrow\lambda_{k})\big{]}.\]

The presence of \(Q_{k}^{(1)}\) and \(Q_{k}^{(2)}\) results in the appearance of diagonal peaks in the frequency domain. \(Q_{k}^{(1)}\) is oscillatory in \(t+\tau\) and can induce diagonal non-rephasing signals in the first quadrant. \(Q_{k}^{(2)}\) is unique in that \(t\) and \(\tau\) come with opposite signs but with the same oscillation frequency \(2l_{k}\) or \(2\lambda_{k}\). Unlike for the other terms, the phase accumulated during \(\tau\) is perfectly canceled out during \(t\), regardless of the oscillation frequency. This corresponds to the "spinon echo", which was discovered in Ref. [8] for the TFIM [Fig. 2(f)], and results in a diagonal rephasing signal in the fourth quadrant [Fig. 2(c)]. \(Q_{k}^{(3)}\) produces non-rephasing like signals at \((\omega_{t},\omega_{\tau})\!=\!(2\lambda_{k},2l_{k})\) and \((\omega_{t},\omega_{\tau})\!=\!(2l_{k},2\lambda_{k})\), giving rise to off-diagonal peaks in the first quadrant [Fig. 2(c)]. \(Q_{k}^{(4)}\) contains terms that induce strong off-diagonal peaks in the frequency domain, reflecting the energy range of the corresponding states [Fig. 2(c)].

### Discussion of 2nd and 3rd order susceptibilities

The 2D spectra of the TKM and the TFIM show qualitative differences. First, \(\chi_{xxx}^{(2)}\) of the TKM vanishes due to the \(\hat{z}\)-glide symmetry while it is finite for the TFIM.
In such a situation, \(\chi_{xxxx}^{(3)}\) dominates the non-linear response of the system. \(\chi_{xxxx}^{(3)}\) of the TKM contains off-diagonal peaks coming from the staggered interactions, in sharp contrast to \(\chi_{xxxx}^{(3)}\) of the TFIM. The emergence of such off-diagonal peaks can be used to distinguish the TKM from the TFIM.

## V Glide symmetry and CoNb\({}_{2}\)O\({}_{6}\)

### Integrable cases

We now add a transverse field term \(-h_{x}\sum_{i}^{L}\sigma_{i}^{x}\) to the TKM in Eq. (1), which breaks the \(\hat{z}\)-glide symmetry and makes the leading non-linear susceptibility \(\chi_{xxx}^{(2)}\) finite. We then investigate whether off-diagonal peaks arise in \(\chi_{xxx}^{(2)}(\omega_{t},\omega_{\tau})\), revealing the staggered interactions. Note, we only focus on the regime where the ground state remains ferromagnetic. The transverse field term is also quadratic in fermionic operators, allowing for an analytic calculation of the non-linear susceptibilities (See Appendix D for details). In Fig. 3(a), we plot Im\(\chi_{xxx}^{(2)}(\omega_{t},\omega_{\tau})\) at a low transverse field \(h_{x}/J\!=\!1/20\). First, it contains a dominant vertical terahertz rectification signal similar to the one of the TFIM [8]. At the same time, off-diagonal peaks appear, reflecting the energy range of corresponding fermion pairs. In the strong field regime \(h_{x}/J\!=\!1/2\), the amplitude of the off-diagonal peaks becomes relatively weak, as shown in Fig. 3(b). This can be understood by the fact that the strength of the staggered terms in the TKM, given by the \(\pm YZ\)-type interactions in Eq. (1), becomes relatively small in the strong field limit \(h_{x}/J\to 1\) where the full model behaves like the TFIM.

### Non-integrable cases and CoNb\({}_{2}\)O\({}_{6}\)

Our results can be compared to the quasi-1D Ising magnet CoNb\({}_{2}\)O\({}_{6}\), which was recently proposed as a close material realization of the TKM [15; 16]. In CoNb\({}_{2}\)O\({}_{6}\), cobalt atoms are surrounded by distorted octahedra formed by oxygen atoms. These edge-sharing octahedra form an isolated zigzag 1D chain along the crystal axis \(\hat{c}\), as shown in Fig. 4(a). In this material, the local axis \(\hat{y}\) is exactly aligned with the crystal axis \(\hat{b}\), unlike the local axis \(\hat{x}\), which makes an angle \(\phi\!=\!\pm 31^{\circ}\) with the crystal axis \(\hat{a}\)[16; 22]. In this case, \(\chi^{(2)}_{yyy}\!=\!\chi^{(2)}_{bbb}\) would be experimentally more pronounced than \(\chi^{(2)}_{xxx}\), which is distinct from \(\chi^{(2)}_{aaa}\). On the other hand, the accurate model of CoNb\({}_{2}\)O\({}_{6}\) may comprise sub-dominant \(XX\)-type interactions, which are allowed by the crystal symmetry, as revealed by neutron scattering experiments [15; 26; 27]. In this regard, we focus on the model given as

\[H= -J\sum_{i=1}^{L^{\prime}}\left[\tilde{\sigma}_{2i-1}(\theta)\tilde{\sigma}_{2i}(\theta)+\tilde{\sigma}_{2i}(-\theta)\tilde{\sigma}_{2i+1}(-\theta)\right]-J_{x}\sum_{i}^{L}\sigma_{i}^{x}\sigma_{i+1}^{x}-h_{y}\sum_{i}^{L}\sigma_{i}^{y}, \tag{10}\]

which contains the additional \(XX\)-type interaction and a transverse field term along the \(\hat{y}\) axis. Since the \(\hat{b}\) and \(\hat{y}\) axes are aligned, the transverse term can be included by simply applying an external field along the crystal axis \(\hat{b}\) to CoNb\({}_{2}\)O\({}_{6}\). The model is non-integrable, except in two cases: \(\theta\!=\!0\), or \(\theta\!\neq\!0\) with \(J_{x}\!=\!0\) and \(h_{y}\!=\!0\).
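As an aside (not part of the original analysis), the commutator formula of Eq. (8), with \(x\) replaced by \(y\), can be evaluated by brute force for the model of Eq. (10) on a short open chain, which is a useful sanity check for the MPS results discussed next. In the following Python sketch, the chain length and time arguments are illustrative placeholders (the field \(h_{y}/J=1/20\) matches the value used below):

```python
import numpy as np
from functools import reduce

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def op(single, site, L):
    """Embed a single-site Pauli operator at `site` in an L-site chain."""
    mats = [I2] * L
    mats[site] = single
    return reduce(np.kron, mats)

def hamiltonian(L, J=1.0, theta=np.pi / 12, Jx=0.0, hy=0.05):
    """Open-chain version of Eq. (10); bond easy axes alternate between +theta
    and -theta, with sigma~_i(theta) = cos(theta) sz_i + sin(theta) sy_i."""
    def sig(i, s):
        return np.cos(s * theta) * op(sz, i, L) + np.sin(s * theta) * op(sy, i, L)
    H = np.zeros((2 ** L, 2 ** L), dtype=complex)
    for i in range(L - 1):
        s = 1.0 if i % 2 == 0 else -1.0
        H -= J * sig(i, s) @ sig(i + 1, s)
        H -= Jx * op(sx, i, L) @ op(sx, i + 1, L)
    for i in range(L):
        H -= hy * op(sy, i, L)
    return H

def chi2_yyy(L=6, t=1.0, tau=1.0, **params):
    """chi^(2)_yyy(t, tau+t) = -(1/L) <g| [[M^y(tau+t), M^y(tau)], M^y(0)] |g>."""
    H = hamiltonian(L, **params)
    E, V = np.linalg.eigh(H)
    g = V[:, 0]                                  # ground state
    My = 0.5 * sum(op(sy, i, L) for i in range(L))
    def heis(T):                                 # M^y(T) in the Heisenberg picture
        U = V @ np.diag(np.exp(-1j * E * T)) @ V.conj().T
        return U.conj().T @ My @ U
    A, B = heis(tau + t), heis(tau)
    inner = A @ B - B @ A
    comm = inner @ My - My @ inner
    return -np.real(g.conj() @ comm @ g) / L

print(chi2_yyy())
```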
We now calculate \(\chi^{(2)}_{yyy}\) of the model in Eq. (10) in the ferromagnetic regime with fixed \(\theta\!=\!\pi/12\), which is close to the value given in Ref. [16] for CoNb\({}_{2}\)O\({}_{6}\), using infinite MPS techniques [25]. The techniques provide a way to calculate the exact \(\chi^{(2)}_{yyy}\) for the non-integrable cases, and the calculations are done for system sizes \(L\!=\!120\) and over the time range \(Jt,J\tau\!=\!20\). We checked the dependence of \(\chi^{(2)}_{yyy}\) on the bond dimension \(\chi\) and the time step \(\delta t\), settling on \(\chi_{\rm max}=1000\) and \(\delta t\!=\!0.01/J\). We first notice that \(\chi^{(2)}_{yyy}\) vanishes unless the \(\hat{z}\)-glide symmetry breaking term is finite, \(h_{y}\!\neq\!0\). In Fig. 4(b), we plot Im\(\chi^{(2)}_{yyy}(\omega_{t},\omega_{\tau})\) of the model with \(J_{x}/J\!=\!0\) and \(h_{y}/J\!=\!1/20\). Analogous to Im\(\chi^{(2)}_{xxx}(\omega_{t},\omega_{\tau})\) of the TKM with the transverse field along the \(\hat{x}\) direction, it also contains off-diagonal peaks which can signal the presence of the staggered interactions. We also investigate the effect of additional \(XX\)-type interactions on such off-diagonal peaks regarding CoNb\({}_{2}\)O\({}_{6}\). As shown in Fig. 4(c) for the model with \(J_{x}/J\!=\!1/10\) and \(h_{y}/J\!=\!1/20\), such peaks still appear though the amplitude becomes relatively weak.

## VI Conclusions

In the present work, we propose a way to distinguish two similar models, i.e., the ferromagnetic TKM and TFIM, using 2DCS. In both models, elementary spin flips fractionalize into domain wall excitations, resulting in a qualitatively similar continuum in the linear response dynamical structure factor. In contrast, we show that the 2D non-linear spectrum as a function of \(\omega_{\tau}\) and \(\omega_{t}\), associated with the time interval between a probe and measurement pulse, offers a clear way to discern the two models. Unlike the TFIM, the second order susceptibility \(\chi^{(2)}_{xxx}\) vanishes for the TKM due to the presence of a \(\hat{z}\)-glide symmetry. Moreover, the third order susceptibility \(\chi^{(3)}_{xxxx}\) of the TKM contains non-rephasing and rephasing like signals which appear as off-diagonal peaks in the frequency domain, originating from the presence of bond dependent interactions. Regarding the canted structure of CoNb\({}_{2}\)O\({}_{6}\), a possible material realization of the TKM, we also investigate the second order susceptibility \(\chi^{(2)}_{yyy}\). For the non-integrable regime of the microscopic model, we have employed the infinite MPS method for calculating the non-linear response. First, we find that \(\chi^{(2)}_{yyy}\) of the TKM vanishes unless additional \(\hat{z}\)-glide symmetry breaking terms are included. Second, we observe the emergence of off-diagonal peaks with an external transverse field along the \(\hat{y}\) axis. Such peaks persist with additional \(XX\)-type interactions, which can be sub-dominant in CoNb\({}_{2}\)O\({}_{6}\). We expect our results will shed light on the unambiguous identification of the correct microscopic description of CoNb\({}_{2}\)O\({}_{6}\). Advances in spectroscopic methods, accessible frequency ranges, and improved energy resolution allow for a more precise understanding of correlated quantum systems.
We expect that 2DCS is an excellent tool for determining the microscopic parameters of quantum magnets, not only for the one-dimensional example considered here but also for two- and three-dimensional frustrated magnets.

###### Acknowledgements.

We thank R. Coldea, N. P. Armitage, M. Drescher, H.-K. Jin, W. Choi, N.P. Perkins and S. Gopalakrishnan for insightful discussions related to this work. G.B.S. thanks P. d'Ornellas for detailed comments on the manuscript. G.B.S. is funded by the European Research Council (ERC) under the European Unions Horizon 2020 research and innovation program (grant agreement No. 771537). F.P. acknowledges the support of the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy EXC-2111-390814868. J. K. acknowledges support from the Imperial-TUM flagship partnership. The research is part of the Munich Quantum Valley, which is supported by the Bavarian state government with funds from the Hightech Agenda Bayern Plus. Tensor network calculations were performed using the TeNPy Library [33]. Data analysis and simulation codes are available on Zenodo upon reasonable request [34].

## Appendix A Jordan-Wigner formalism

In this appendix, we introduce the Jordan-Wigner formulation of the TKM. The TKM is written as

\[H_{\text{TKM}}=-J\sum_{i=1}^{L^{\prime}}\big{(}\tilde{\sigma}_{2i-1}(\theta)\tilde{\sigma}_{2i}(\theta)+\tilde{\sigma}_{2i}(-\theta)\tilde{\sigma}_{2i+1}(-\theta)\big{)}.\]

We now introduce the Jordan-Wigner transformation which maps the Pauli operators to fermionic operators through the relations [28]:

\[\sigma_{j}^{x} =1-2c_{j}^{\dagger}c_{j},\]
\[\sigma_{j}^{y} =-i(c_{j}^{\dagger}-c_{j})\prod_{i<j}{(1-2c_{i}^{\dagger}c_{i})},\]
\[\sigma_{j}^{z} =(c_{j}^{\dagger}+c_{j})\prod_{i<j}{(1-2c_{i}^{\dagger}c_{i})}. \tag{10}\]

Using Eq. (10), we arrive at a bilinear form in terms of spinless fermions:

\[H_{\text{TKM}} =\sum_{i=1}^{L^{\prime}}\Big{[}e^{i\theta}c_{2i-1}^{\dagger}c_{2i}^{\dagger}+c_{2i-1}^{\dagger}c_{2i}+e^{-i\theta}c_{2i}^{\dagger}c_{2i+1}^{\dagger}+c_{2i}^{\dagger}c_{2i+1}+h.c.\Big{]}. \tag{11}\]

We then adopt the Fourier transformation \(c_{2j-1}=\frac{1}{\sqrt{L}}\sum_{k}e^{-ikj}a_{k},c_{2j}=\frac{1}{\sqrt{L}}\sum_{k}e^{-ikj}b_{k}\) with the discrete momenta \(k\!=\!\frac{n\pi}{L^{\prime}},n\!=\!-(L^{\prime}-1),\ldots,(L^{\prime}-3),(L^{\prime}-1)\). The TKM now takes the form

\[H_{\text{TKM}}=\sum_{k}[B_{k}a_{k}^{\dagger}b_{-k}^{\dagger}+A_{k}a_{k}^{\dagger}b_{k}-A_{k}^{*}a_{k}b_{k}^{\dagger}-B_{k}^{*}a_{k}b_{-k}] \tag{12}\]

where \(A_{k}=1+e^{ik}\) and \(B_{k}=e^{i\theta}-e^{i(k-\theta)}\). To diagonalize the Hamiltonian Eq. (12), we write it in matrix form as

\[H_{\text{TKM}}=\sum_{k>0}(a_{k}^{\dagger},a_{-k},b_{k}^{\dagger},b_{-k})\hat{M}_{k}\begin{pmatrix}a_{k}\\ a_{-k}^{\dagger}\\ b_{k}\\ b_{-k}^{\dagger}\end{pmatrix} \tag{13}\]

where

\[\hat{M}_{k}\!=\!\left(\begin{array}{cccc}0&0&S_{k}&P_{k}+Q_{k}\\ 0&0&P_{k}-Q_{k}&-S_{k}\\ S_{k}^{*}&P_{k}^{*}-Q_{k}^{*}&0&0\\ P_{k}^{*}+Q_{k}^{*}&-S_{k}^{*}&0&0\end{array}\right)\]

with \(P_{k}=-i(e^{ik}+1)\sin\theta,Q_{k}=(e^{ik}-1)\cos\theta\), and \(S_{k}=1+e^{ik}\). The diagonalization of Eq. (13) is achieved by the Bogoliubov transformation,

\[(\alpha_{k}^{\dagger},\alpha_{-k},\beta_{k}^{\dagger},\beta_{-k})\ \hat{U}_{k}=(a_{k}^{\dagger},a_{-k},b_{k}^{\dagger},b_{-k}).
\tag{14}\]

The Hamiltonian is now diagonalized in the new basis as

\[H_{\text{TKM}}=\sum_{k>0}\big{[}l_{k}(\alpha_{k}^{\dagger}\alpha_{k}-\alpha_{-k}\alpha_{-k}^{\dagger})+\lambda_{k}(\beta_{k}^{\dagger}\beta_{k}-\beta_{-k}\beta_{-k}^{\dagger})\big{]} \tag{15}\]

where \(l_{k}\!=\!\sqrt{\xi_{k}-\sqrt{\xi_{k}^{2}-\tau_{k}^{2}}},\ \lambda_{k}\!=\!\sqrt{\xi_{k}+\sqrt{\xi_{k}^{2}-\tau_{k}^{2}}}\) with \(\xi_{k}=|P_{k}|^{2}+|Q_{k}|^{2}+|S_{k}|^{2}\) and \(\tau_{k}=|P_{k}^{2}-Q_{k}^{2}+S_{k}^{2}|\).
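The two bands are straightforward to evaluate numerically. The short sketch below simply transcribes the expressions for \(P_{k}\), \(Q_{k}\), \(S_{k}\), \(\xi_{k}\), \(\tau_{k}\) and the band energies above (with \(J\) set to 1 and \(\theta=\pi/12\)); it is only a convenience for reproducing the dispersions, not part of the original derivation.

```python
import numpy as np

theta = np.pi / 12
k = np.linspace(-np.pi, np.pi, 401)

P = -1j * (np.exp(1j * k) + 1.0) * np.sin(theta)
Q = (np.exp(1j * k) - 1.0) * np.cos(theta)
S = 1.0 + np.exp(1j * k)

xi = np.abs(P) ** 2 + np.abs(Q) ** 2 + np.abs(S) ** 2
tau = np.abs(P ** 2 - Q ** 2 + S ** 2)
disc = np.clip(xi ** 2 - tau ** 2, 0.0, None)   # guard tiny negative round-off

l_k = np.sqrt(xi - np.sqrt(disc))    # lower band; dispersion 2 * l_k
lam_k = np.sqrt(xi + np.sqrt(disc))  # upper band; dispersion 2 * lam_k
```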
## Appendix B Detail of MPS simulation for the dynamical structure factor

In this appendix, we provide details of the MPS simulations for the dynamical structure factor [30]

\[S_{xx}(k,\omega)\!=\!\frac{1}{4}\int\mathrm{d}t\sum_{j}e^{i\omega t-ik(r_{j}-r_{L/2})}\langle\sigma_{j}^{x}(t)\sigma_{L/2}^{x}(0)\rangle=\!\frac{1}{4}\int\mathrm{d}t\sum_{j}\big{[}e^{i\omega t-ik(r_{j}-r_{L/2})}e^{iE_{g}t}\langle g|\sigma_{j}^{x}e^{-iHt}\sigma_{L/2}^{x}|g\rangle G(t)\big{]}.\]

1. Find an MPS approximation of the ground state \(|g\rangle\) with an energy \(E_{g}\) using the density matrix renormalization group.
2. Apply a local operator \(\sigma_{L/2}^{x}\) and obtain \(\sigma_{L/2}^{x}|g\rangle\).
3. Perform a real time evolution following the local quench \(\sigma_{L/2}^{x}\) using the time evolving block decimation method [35; 31] to get an MPS which represents \(e^{-iHt}\sigma_{L/2}^{x}|g\rangle\).
4. Evaluate the overlap of the two MPS "bra" and "ket" to obtain \(\langle g|\sigma_{j}^{x}e^{-iHt}\sigma_{L/2}^{x}|g\rangle\).
5. Multiply \(e^{iE_{g}t}\) and \(\langle g|\sigma_{j}^{x}e^{-iHt}\sigma_{L/2}^{x}|g\rangle\).
6. Apply a discrete Fourier transformation in space that yields the momentum-resolved time-dependent data \(S_{xx}(k,t)\).
7. Perform a Fourier transformation of the time series convoluted with a Gaussian window function \(G(t)=e^{-t^{2}/2\sigma^{2}}\) to prevent Gibbs oscillations [36; 15; 37].

For the result given in Fig. 1, we set the system size \(L=120\), time step size \(\delta t=0.02J\), total simulation time \(t_{max}=60J\), maximum bond dimension \(\chi_{max}=500\), and the Gaussian envelope parameter \(\sigma=0.05\).

## Appendix C Magnetic Susceptibilities of the TKM

Here, we analytically formulate the linear and non-linear magnetic susceptibilities of the TKM. \(M^{x}\), the total magnetization of the target system along the \(\hat{x}\) direction, in fermionic formulation reads

\[M^{x} =\frac{1}{2}\sum_{i}^{L^{\prime}}(\sigma_{2i-1}^{x}+\sigma_{2i}^{x})=\sum_{k>0}m_{k}^{x}=\sum_{k>0}\big{[}-a_{k}^{\dagger}a_{k}+a_{-k}a_{-k}^{\dagger}-b_{k}^{\dagger}b_{k}+b_{-k}b_{-k}^{\dagger}\big{]}. \tag{14}\]

\(M^{x}\) can be rewritten in the basis introduced in Eq. (13):

\[M^{x} =\sum_{k>0}(a_{k}^{\dagger},a_{-k},b_{k}^{\dagger},b_{-k})\begin{pmatrix}-1&0&0&0\\ 0&1&0&0\\ 0&0&-1&0\\ 0&0&0&1\end{pmatrix}\begin{pmatrix}a_{k}\\ a_{-k}^{\dagger}\\ b_{k}\\ b_{-k}^{\dagger}\end{pmatrix}=\sum_{k>0}(\alpha_{k}^{\dagger},\alpha_{-k},\beta_{k}^{\dagger},\beta_{-k})\hat{U}_{k}\begin{pmatrix}0&c_{k}&\sqrt{1-c_{k}^{2}}&0\\ c_{k}&0&0&\sqrt{1-c_{k}^{2}}\\ \sqrt{1-c_{k}^{2}}&0&0&-c_{k}\\ 0&\sqrt{1-c_{k}^{2}}&-c_{k}&0\end{pmatrix}\begin{pmatrix}\alpha_{k}\\ \alpha_{-k}^{\dagger}\\ \beta_{k}\\ \beta_{-k}^{\dagger}\end{pmatrix}. \tag{15}\]

In the Heisenberg picture,

\[M^{x}(t)=\sum_{k}(\alpha_{k}^{\dagger},\alpha_{-k},\beta_{k}^{\dagger},\beta_{-k})\begin{pmatrix}0&c_{k}e^{-2il_{k}t}&\sqrt{1-c_{k}^{2}}e^{it(l_{k}-\lambda_{k})}&0\\ c_{k}e^{2il_{k}t}&0&0&\sqrt{1-c_{k}^{2}}e^{-it(l_{k}-\lambda_{k})}\\ \sqrt{1-c_{k}^{2}}e^{-it(l_{k}-\lambda_{k})}&0&0&-c_{k}e^{-2i\lambda_{k}t}\\ 0&\sqrt{1-c_{k}^{2}}e^{it(l_{k}-\lambda_{k})}&-c_{k}e^{2i\lambda_{k}t}&0\end{pmatrix}\begin{pmatrix}\alpha_{k}\\ \alpha_{-k}^{\dagger}\\ \beta_{k}\\ \beta_{-k}^{\dagger}\end{pmatrix}. \tag{16}\]

We now calculate the linear and non-linear magnetic susceptibilities of the TKM. We first consider the linear susceptibility \(\chi^{(1)}_{xx}(t)\). The starting point is the Kubo formula:

\[\chi^{(1)}_{xx}(t) = \frac{i\Theta(t)}{L}\langle[M^{x}(t),M^{x}(0)]\rangle = \frac{i\Theta(t)}{L}\sum_{k>0}\langle[m^{x}_{k}(t),m^{x}_{k}(0)]\rangle = \frac{2}{L}\sum_{k>0}c^{2}_{k}\big{(}\sin(2l_{k}t)+\sin(2\lambda_{k}t)\big{)} \tag{10}\]

where \(\langle\cdots\rangle\) represents the average in the ground state. The second equality comes from the fact that \(m^{x}_{k}\) with different \(k\) commute. The second order non-linear susceptibility is given as

\[\chi^{(2)}_{xxx}(t,\tau+t) = \frac{i^{2}\Theta(t)\Theta(\tau)}{L}\langle[[M^{x}(\tau+t),M^{x}(\tau)],M^{x}(0)]\rangle = \frac{i^{2}\Theta(t)\Theta(\tau)}{L}\sum_{k>0}\langle[[m^{x}_{k}(\tau+t),m^{x}_{k}(\tau)],m^{x}_{k}(0)]\rangle = 0. \tag{11}\]

As pointed out in the main text, the second order susceptibility of the TKM vanishes. The formula for the third order non-linear susceptibility is given as

\[\chi^{(3)}_{xxxx}(t_{3},t_{2}+t_{3},t_{1}+t_{2}+t_{3}) = \frac{i^{3}\Theta(t_{1})\Theta(t_{2})\Theta(t_{3})}{L}\langle[[[M^{x}(t_{1}+t_{2}+t_{3}),M^{x}(t_{1}+t_{2})],M^{x}(t_{1})],M^{x}(0)]\rangle = \frac{i^{3}\Theta(t_{1})\Theta(t_{2})\Theta(t_{3})}{L}\sum_{k>0}\langle[[[m^{x}_{k}(t_{1}+t_{2}+t_{3}),m^{x}_{k}(t_{1}+t_{2})],m^{x}_{k}(t_{1})],m^{x}_{k}(0)]\rangle. \tag{12}\]

We focus on the two limits which correspond to \(\chi^{(3)}_{xxxx}\) measured in the two-pulse setup, \(\chi^{(3,1)}_{xxxx}(t,\tau+t,\tau+t)\) with \(t_{1}\!\rightarrow\!0,t_{2}\!\rightarrow\!\tau,t_{3}\!\rightarrow\!t\) and \(\chi^{(3,2)}_{xxxx}(t,t,\tau+t)\) with \(t_{1}\!\rightarrow\!\tau,t_{2}\!\rightarrow\!0,t_{3}\!\rightarrow\!t\):

\[\chi^{(3,1)}_{xxxx}(t,\tau+t,\tau+t) = \frac{\Theta(t)\Theta(\tau)}{L}\sum_{k>0}P^{(1)}_{k}\!+\!P^{(2)}_{k}\!+\!P^{(3)}_{k}\]

with

\[P^{(1)}_{k} = -8c^{4}_{k}\big{[}\sin\big{(}2l_{k}t\big{)}\!+\!(l_{k}\leftrightarrow\lambda_{k})\big{]},\]
\[P^{(2)}_{k} = 8(c^{4}_{k}\!-\!c^{2}_{k})\big{[}\sin\big{(}2l_{k}t\!+\!(l_{k}+\lambda_{k})\tau\big{)}\!+\!(l_{k}\leftrightarrow\lambda_{k})\big{]},\]
\[P^{(3)}_{k} = 8(c^{2}_{k}\!-\!c^{4}_{k})\big{[}\sin\big{(}(l_{k}\!-\!\lambda_{k})t+(l_{k}\!+\!\lambda_{k})\tau\big{)}\!+\!(l_{k}\leftrightarrow\lambda_{k})\big{]},\]

and

\[\chi^{(3,2)}_{xxxx}(t,t,\tau+t) = \frac{\Theta(t)\Theta(\tau)}{L}\sum_{k>0}Q^{(1)}_{k}\!+\!Q^{(2)}_{k}\!+\!Q^{(3)}_{k}\!+\!Q^{(4)}_{k}\]

with

\[Q_{k}^{(1)} = -4c_{k}^{2}\big{[}\sin\big{(}2l_{k}(t+\tau)\big{)}+(l_{k}\leftrightarrow\lambda_{k})\big{]},\]
\[Q_{k}^{(2)} = -4c_{k}^{4}\big{[}\sin\big{(}2l_{k}(t-\tau)\big{)}+(l_{k}\leftrightarrow\lambda_{k})\big{]},\]
\[Q_{k}^{(3)} = 4(c_{k}^{4}-c_{k}^{2})\big{[}\sin\big{(}2\lambda_{k}t+2l_{k}\tau\big{)}+(l_{k}\leftrightarrow\lambda_{k})\big{]},\]
\[Q_{k}^{(4)} = 8(c_{k}^{2}-c_{k}^{4})\big{[}\sin\big{(}(\lambda_{k}-l_{k})t+2l_{k}\tau\big{)}+(l_{k}\leftrightarrow\lambda_{k})\big{]},\]
where \(c_{k}\) is the matrix element of the magnetization as given in Eq. (15). In Fig. 5, we plot the real part of the Fourier transformed \(\chi^{(3,1)}_{xxxx}(t,\tau+t,\tau+t)\) and \(\chi^{(3,2)}_{xxxx}(t,t,\tau+t)\) for the TKM and TFIM. The formulation for the TFIM is explicitly given in Ref. [8]. \(\chi^{(3)}_{xxxx}\) of the TKM (Figs. 5(a,b)) contains the off-diagonal signals, unlike \(\chi^{(3)}_{xxxx}\) of the TFIM (Figs. 5(c,d)). Such signals come from the transition between different excited states, whose presence originates from the bond dependent spin exchange interactions. For example, \(P_{k}^{(2)}\) contains a transition between the first excited state \(|1\rangle\) and the second excited state \(|2\rangle\), as illustrated in Fig. 5(e), resulting in an oscillatory signal with frequency \(2l_{k}\) throughout the time interval \(t\).

## Appendix D Second order susceptibilities of the TKM with transverse field

The TKM with a transverse field along the \(\hat{x}\) direction is written as

\[H = -J\sum_{i=1}^{L^{\prime}}\big{(}\tilde{\sigma}_{2i-1}(\theta)\tilde{\sigma}_{2i}(\theta)+\tilde{\sigma}_{2i}(-\theta)\tilde{\sigma}_{2i+1}(-\theta)\big{)}-h_{x}\sum_{i=1}^{L}\sigma_{i}^{x}. \tag{12}\]

We now rewrite the model in terms of spinless fermions using the Jordan-Wigner transformation:

\[H_{\text{TKM}}=\sum_{k>0}(a_{k}^{\dagger},a_{-k},b_{k}^{\dagger},b_{-k})\hat{M}_{k}\begin{pmatrix}a_{k}\\ a_{-k}^{\dagger}\\ b_{k}\\ b_{-k}^{\dagger}\end{pmatrix} \tag{13}\]

where

\[\hat{M}_{k} = \begin{pmatrix}2h_{x}&0&S_{k}&P_{k}+Q_{k}\\ 0&-2h_{x}&P_{k}-Q_{k}&-S_{k}\\ S_{k}^{*}&P_{k}^{*}-Q_{k}^{*}&2h_{x}&0\\ P_{k}^{*}+Q_{k}^{*}&-S_{k}^{*}&0&-2h_{x}\end{pmatrix}\]
2307.15098
Self-Supervised Learning for Improved Synthetic Aperture Sonar Target Recognition
This study explores the application of self-supervised learning (SSL) for improved target recognition in synthetic aperture sonar (SAS) imagery. The unique challenges of underwater environments make traditional computer vision techniques, which rely heavily on optical camera imagery, less effective. SAS, with its ability to generate high-resolution imagery, emerges as a preferred choice for underwater imaging. However, the voluminous high-resolution SAS data presents a significant challenge for labeling; a crucial step for training deep neural networks (DNNs). SSL, which enables models to learn features in data without the need for labels, is proposed as a potential solution to the data labeling challenge in SAS. The study evaluates the performance of two prominent SSL algorithms, MoCov2 and BYOL, against the well-regarded supervised learning model, ResNet18, for binary image classification tasks. The findings suggest that while both SSL models can outperform a fully supervised model with access to a small number of labels in a few-shot scenario, they do not exceed it when all the labels are used. The results underscore the potential of SSL as a viable alternative to traditional supervised learning, capable of maintaining task performance while reducing the time and costs associated with data labeling. The study also contributes to the growing body of evidence supporting the use of SSL in remote sensing and could stimulate further research in this area.
BW Sheffield
2023-07-27T14:17:24Z
http://arxiv.org/abs/2307.15098v1
# Self-supervised learning for improved synthetic aperture sonar target recognition

###### Abstract

This study aims to evaluate the performance of two prominent SSL algorithms, MoCov2 [1] and BYOL [2], against the well-regarded supervised learning model, ResNet18 [3], for the binary image classification task as shown in Figure 1. The SSL models were trained on real-world SAS data to learn useful feature representations for downstream binary image classification tasks. The findings suggest that while both SSL models can outperform a fully supervised model with access to a small number of labels in limited label scenarios, they do not exceed it when all the labels are used. This study underscores the potential of SSL as a viable alternative to traditional supervised learning, capable of maintaining task performance while reducing the time and costs associated with data labeling.

Figure 1: The objective of this paper evaluates two different SSL models in limited label scenarios

## 2 Related Work

SSL has been a burgeoning area of research in recent years, particularly within the remote sensing domain [4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]. Although SSL applications have significantly advanced across various fields, their application to SAS remains relatively unexplored. In 2022, Preciado-Grijalva et al. [15] demonstrated that SSL in forward look sonar (FLS) applications can yield classification performance comparable to supervised pre-training in a few-shot transfer learning setup. In the unsupervised and semi-supervised learning domains, researchers have applied methods to reduce the burden of labeling SAS data [16, 17, 18]. The potential of SSL in Synthetic Aperture Radar (SAR) applications, a field closely related to SAS, has been demonstrated in several studies [19, 20, 21, 22, 23, 24, 25, 26, 27]. These studies have shown that SSL can effectively leverage the vast amounts of unlabeled SAR data to achieve meaningful results. However, the application of these methodologies remains largely unexplored in the context of SAS data. This gap in the literature may be due to the unique challenges associated with SAS data, such as the sensitive nature of the data and the computational resources required for training.

## 3 Methodology

The subsequent experiments are designed to conduct a comparative evaluation of the performance of the models' representations. This is achieved both qualitatively, through the visualization of the latent spaces, and quantitatively, based on the ultimate classification outcomes. To establish a common baseline, both SSL models, MoCov2 and BYOL, use the same ResNet18 backbone. Pre-training of the models was carried out using PyTorch Lightning for up to 100 epochs on eight Nvidia A6000 GPUs, each equipped with 48GB RAM. The distributed data parallel (DDP) strategy helps to improve the consistency of batch normalization across multiple GPUs. Distributed sampling ensures that each GPU processes a unique subset of the total data in each epoch, leading to more stable training and potentially better performance. Unlike many other deep learning methods, large batch sizes are highly desirable for achieving good results, as the task of the contrastive loss function is to pull positive instances together and push negative instances away. The downstream task for comparison was binary image classification with binary cross entropy as the loss function. A threshold of 50% was used to decide whether an image contained an object of interest.
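A minimal sketch of this downstream head is given below; the 512-dimensional feature width matches ResNet18, while the class and function names, and the assumption that the backbone returns pooled features, are illustrative.

```python
import torch
import torch.nn as nn

class BinaryHead(nn.Module):
    """Frozen SSL backbone plus a linear binary classifier (linear evaluation)."""
    def __init__(self, backbone, feat_dim=512):        # 512 = ResNet18 feature width
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():           # freeze for linear probing
            p.requires_grad = False
        self.fc = nn.Linear(feat_dim, 1)

    def forward(self, x):
        with torch.no_grad():
            feats = self.backbone(x)                   # assumed pooled (N, 512) output
        return self.fc(feats).squeeze(-1)              # raw logit per image

criterion = nn.BCEWithLogitsLoss()                     # targets are 0./1. floats

def predict(model, x, threshold=0.5):
    """Apply the 50% decision threshold described above."""
    return (torch.sigmoid(model(x)) >= threshold).long()
```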
In an iterative manner, the SSL pre-trained model is fine-tuned with a percentage of the labels, with the backbone frozen, to compare how well the respective models evaluate against a supervised ResNet18 model. For linear evaluation, early stopping was enabled when training failed to decrease the loss for 10 epochs.

### Experimental Setup

In assessing the efficacy of the representations generated by various SSL frameworks, linear evaluation is used to assess the quality of the learned representations. The linear evaluation method involves training a supervised linear classifier on the SSL pre-trained models, with the model weights kept constant. The classification score derived from this process provides insight into the discriminative capacity of the pre-trained representations and serves as an indirect measure of the model's performance in subsequent tasks [28].

### Dataset

Labeled multi-band SAS data is hard to come by and quite limited. Due to the high resolution nature of SAS data, it is often too large for modern GPUs, requiring the imagery to be broken up into tiles/snippets/chips. To generate a dataset that consists of snippets, a Reed-Xiaoli [29] anomaly detector is used to detect potential objects of interest by extracting snippets from high-resolution SAS imagery. The low and high frequency snippets, call them LF and HF respectively, are first resized to 224x224 each and stacked, forming a 2x224x224 multi-band SAS image. Previous works have applied the multi-band approach to success [30, 31, 32]. Different beamformers have been used to generate the SAS imagery, providing scenes that are semantically the same to the human eye yet statistically different. The collective snippets make up the four datasets used in experiments: pre-train, train, validation, and test. For simplicity, the labeled datasets (train, validation, and test) used in linear probing experiments are balanced for positive and negative instances.

### SSL Models and Hyperparameters

This work leverages two different types of SSL model architectures: MoCov2 [1] and BYOL [2]. The two SSL methods can be categorized as contrastive and non-contrastive.

* **MoCov2:** displays strength in learning meaningful representations by contrasting positive and negative samples, which matters where distinguishing between different classes is important. Despite its contrastive strength, it requires careful selection of negative samples, and the size of the queue can significantly affect the performance.
* **BYOL:** a popular non-contrastive SSL method, as it avoids the need for negative samples, which simplifies the training process and reduces the computational requirements that contrastive loss functions impose, such as large batch sizes. The non-contrastive method does come at a cost where distinguishing between different classes is crucial.

### Data Augmentations

Data augmentation techniques, which generate diverse and challenging examples, are heavily relied upon in SSL. Ensuring that the augmentations are diverse and cover a wide range of transformations can help prevent overfitting; thus a moderate number of augmentations are lightly applied to drive the feature learning process during pre-training, as shown in Figure 2. Speckle noise is artificially introduced by multiplying a constant noise factor across the imagery. During training, only horizontal flip augmentations were applied.
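The snippet below sketches the multi-band snippet preparation and these augmentations in PyTorch; the resize mode and the noise level are assumptions, as the exact values are not spelled out here.

```python
import torch
import torch.nn.functional as F

def make_multiband(lf_snippet, hf_snippet, size=224):
    """Resize the LF and HF snippets and stack them into a (2, 224, 224) tensor."""
    def resize(img):
        t = torch.as_tensor(img, dtype=torch.float32)[None, None]  # (1, 1, H, W)
        return F.interpolate(t, size=(size, size), mode="bilinear",
                             align_corners=False)[0, 0]
    return torch.stack([resize(lf_snippet), resize(hf_snippet)])

def add_speckle(image, noise_level=0.1):
    """Multiplicative speckle: scale each pixel by 1 + noise_level * N(0, 1)."""
    return image * (1.0 + noise_level * torch.randn_like(image))

def horizontal_flip(image):
    """Flip along the width axis; the only augmentation applied during training."""
    return torch.flip(image, dims=[-1])
```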
### Performance Metrics

In the context of image classification with SAS, the evaluation metrics and benchmarks used for binary image classification tasks need to effectively measure the ability of the model to accurately distinguish between objects of interest and objects not of interest (typically representing seafloor clutter or other underwater objects). The following performance metrics were used to evaluate the models:

* **Contrastive loss:** During pre-training, the contrastive loss is tracked on a validation dataset, providing insight into how well the model is learning to distinguish between similar and dissimilar samples.
* **Recall (Sensitivity or True Positive Rate):** This measures the proportion of actual positives (objects of interest) that were identified correctly. A high recall is crucial in image classification because failing to identify an object (a false negative) could be disastrous in contested military waters.
* **Precision (Positive Predictive Value):** This measures the proportion of positive identifications (identified objects) that were actually correct. A high precision means a low false positive rate, which is desirable in image classification tasks to avoid wasting resources on false detections.
* **Area Under the Receiver Operating Characteristic (ROC) Curve (AUC-ROC):** This metric provides a comprehensive measure of performance across all possible classification thresholds, summarizing the trade-off between the true positive rate and false positive rate.
* **Accuracy:** This is the simplest metric, representing the proportion of total predictions that were correct. However, accuracy can be misleading if the classes are imbalanced (e.g., if objects are much less common than distractor objects).

## 4 Results

SSL pre-training significantly enhances the performance of the SSL models, specifically MoCov2 and BYOL, when only 1% and 5% of the labels are utilized during training. However, when compared to the ResNet18 model, which had access to 100% of the labels, the SSL models fell short, as shown in Figures 3, 4, and 5. The SSL algorithms were able to effectively extract high-level features from the SAS data, resulting in enhanced performance in downstream tasks for limited label scenarios.

Figure 2: Visualization of the SAS data augmentation pipeline for a box-like object during pre-training to create a drastic contrastive image.

## 5 Discussion

The results suggest that SSL can be effectively applied to SAS, similar to its successful application in SAR and other computer vision tasks. The improved performance in image classification tasks indicates the potential of SSL in enhancing SAS target recognition in low labeled regimes. However, when abundant data labels exist, supervised learning outperforms in all aspects. In order to better understand the feature representations learned by the models, t-SNE [33], a popular technique for visualizing high-dimensional data, was deployed. Figure 6 shows the t-SNE visualizations of the feature representations learned by MoCov2 and BYOL. As can be seen from the visualizations, both the SSL models and the supervised model have learned to cluster the sonar images in a meaningful way, with images of the same class clustering together. This suggests that the models have learned to extract features that are relevant for the task of sonar object classification.
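A brief sketch of such a t-SNE projection is shown below; `features` stands for an (N, 512) NumPy array of frozen-backbone embeddings and `labels` for the binary classes, both hypothetical names.

```python
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_tsne(features, labels, perplexity=30):
    """Project (N, 512) embeddings to 2D and color by binary class."""
    coords = TSNE(n_components=2, perplexity=perplexity,
                  init="pca", random_state=0).fit_transform(features)
    for cls, name in [(0, "not of interest"), (1, "object of interest")]:
        mask = labels == cls
        plt.scatter(coords[mask, 0], coords[mask, 1], s=5, label=name)
    plt.legend()
    plt.show()
```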
More compact and well-separated clusters indicate the SSL models learned robust and discriminative features, which could potentially lead to better performance in downstream tasks other than image classification, such as object detection, segmentation, and change detection.

Figure 3: Precision-Recall curves demonstrate SSL model trade-offs for varying labeled scenarios.

### Implication of Results

Based on the findings, the application of SSL to SAS significantly improves the performance of target recognition tasks in low labeled regimes. This has several important implications. Firstly, it suggests that SSL can effectively leverage the abundance of unlabeled SAS data, which has traditionally been a challenge in this field. This could potentially revolutionize the way we process and analyze SAS data, leading to more efficient and cost-effective methods. Secondly, the improved performance in downstream tasks such as image classification indicates that SSL can enhance the practical utility of SAS in various applications, such as underwater exploration, marine archaeology, and other naval applications. Finally, the results contribute to the growing body of evidence supporting the use of SSL in remote sensing and could stimulate further research in this area.

## 6 Conclusion

The potential of self-supervised learning to improve the classification of SAS images is underscored in this study. Given their success in various computer vision tasks, future research could explore the use of Vision Transformers (ViTs) as backbones for SSL with SAS data. Additionally, a multi-modal SSL approach that leverages all available data collected by autonomous underwater vehicles, such as bathymetric data or other sonar modalities, could potentially provide richer representations and improve performance. While the application of SSL to SAS tasks is promising, it is still in its infancy. Further exploration could significantly advance automated underwater computer vision tasks.
2302.08171
Spontaneous fission half-lives of actinides and super-heavy elements
Spontaneous fission half-lives of actinide and super-heavy nuclei are calculated, using the least-action integral, through the WKB tunneling probability of the barrier that appears in the deformation landscape obtained in the macroscopic-microscopic potential-energy surface. This deformation-energy landscape is obtained using a Fourier shape parametrization with 4 deformation parameters, taking into account the nuclear elongation, left-right asymmetry, neck formation and non-axiality degrees of freedom. The collective inertia tensor entering the WKB half-life expression is given through the so-called irrotational flow approach, successfully used in nuclear fission to reproduce observables that characterize the nuclear system in the vicinity of the scission configurations, such as fragment mass or charge distributions. For a comparison, we have also used the so-called phenomenological mass parameter depending only on the center-of-mass distance of the forming fission fragments. Our approach is shown to be able to reproduce empirical fission half-lives of all here considered nuclei to within 3 orders of magnitude.
J. Marin Blanco, A. Dobrowolski, A. Zdeb, J. Bartel
2023-02-16T09:38:02Z
http://arxiv.org/abs/2302.08171v3
# Spontaneous fission half-lives of actinides and super-heavy elements ###### Abstract Spontaneous fission half-lives of actinide and super-heavy nuclei are calculated, using the least-action integral, through the WKB tunneling probability of the barrier that appears in the deformation landscape obtained in the macroscopic-microscopic potential-energy surface. This deformation-energy landscape is obtained using a Fourier shape parametrization with 4 deformation parameters, taking into account the nuclear elongation, left-right asymmetry, neck formation and non-axiality degrees of freedom. The collective inertia tensor entering the WKB half-life expression is given through the so-called irrotational flow approach, successfully used in nuclear fission to reproduce observables that characterize the nuclear system in the vicinity of the scission configurations, such as fragment mass or charge distributions. For a comparison, we have also used the so-called phenomenological mass parameter depending only on the center-of-mass distance of the forming fission fragments. Our approach is shown to be able to reproduce empirical fission half-lives of all here considered nuclei to within 3 orders of magnitude. spontaneous fission, potential energy surface, macroscopic-microscopic model, actinides, SHE, half-lives, pairing correlations, least-action path ## I Introduction Nuclear fission, as a decay mode competitive with the emission of light particles such as neutrons or protons, light clusters like \(\alpha\) particles or \(\gamma\) quanta, plays an essential role in determining the stability of heavy and super-heavy nuclei. The nuclear fission process, induced by the absorption of neutrons, has been observed for the first time in 1938 by Hahn and Strassmann [1]. The theoretical explanation of this new phenomenon was given within a few weeks by Meitner and Frisch [2]. The authors established the basic features of the low-energy fission process, such as the energy released in this process being equal to almost 200 MeV, as well as the fact that it results from the Coulomb repulsion of the fission fragments. In addition, it has been estimated that the number of neutrons emitted in each such fission event is larger than one, and that a chain reaction is thus possible. The spontaneous fission of uranium was discovered one and a half years later by Flerov and Petrzak [3]. Since these _early days_, a continuous interest in the theoretical description of the fission process has been observed. Based on the first theoretical model of a nucleus as a charged drop of liquid, fission was described as a collective motion of nucleons in which the nuclear deformation evolves from a form close to a sphere to an elongated shape [4]. Such a shape evolution is associated with the change of the nuclear deformation energy which grows with increasing deformation. When the elongation exceeds a certain critical value, the energy decreases again up to the point where the nuclear system splits into two separated fragments. In a quantum mechanical description the fission process can be understood as tunneling through the potential energy barrier. The tunneling probability and, as a consequence, the spontaneous fission half-life strongly depend on the shape of the fission barrier, in particular its height and width. Over the last decades, there have been numerous attempts to present a reliable model of the fission process which allows one to reproduce, in particular, the measured spontaneous fission half-lives.
As far as the quality of this reproduction is concerned, one has to keep in mind, however, that already a very small change of the barrier, in particular its height, will lead to a substantial change in the fission half-life. Among the best known of these early attempts, within which the global systematics of spontaneous fission half-lives has been reproduced, the semi-empirical formula proposed in 1955 by W.J. Swiatecki comes immediately to mind [5]. The main idea of this approach comes from the observation of the strong correlation between the logarithm of the spontaneous fission half-lives and the ground state microscopic corrections due to shell effects and pairing correlations. Later on, this concept has been applied to up-to-date experimental data [6; 7] within a modern version of the liquid-drop model, which is now known as the Lublin-Strasbourg Drop (LSD) [8]. There are also various theoretical approaches often based on mathematically quite advanced methods [9; 10; 11; 12; 13; 14; 15; 16; 17]. There have also been several attempts to apply fully microscopic, self-consistent methods in order to reproduce spontaneous fission observables [18; 19], yet the accuracy in the reproduction of the experimental data can still not be considered as being fully satisfactory. To obtain a better agreement with the experiment, one may consider pairing as a dynamical degree of freedom (see Refs. [20; 21]) and/or use improved approaches for the collective inertia [22]. Such approaches turn out, however, to be numerically very costly. In general, spontaneous fission half-life calculations require not only an assessment of the collective potential energy surface (evaluated in a purely microscopical approach or, as will be done in what follows, within the macroscopic-microscopic model), but also of the collective inertia tensor. Commonly, the latter is obtained within the cranking approximation [23; 24] or the Generator Coordinate Method (GCM) with the generalized Gaussian Overlap Approximation (GOA) [25; 26]. In this work the irrotational-flow approach of Ref. [27] (see also [28]) will be used to evaluate the inertia tensor. The present manuscript will be entirely devoted to presenting spontaneous fission half-lives, obtained within that approach, and their comparison with experimental data. Using the tunneling model of the WKB approximation for a multidimensional potential-energy barrier [29; 30; 31], we will analyze the half-lives for this process for selected even-even actinide and super-heavy nuclei from \(Z\) = 90 to 110. The calculations will in particular concern the isotopes of the following actinide isotopic chains: Th, U, Pu, Cm, Cf, Fm, No, as well as the super-heavy elements Rf, Sg, Hs, and Ds. The obtained results are compared with the available experimental data. Since this comparison turns out to be quite satisfactory, we will also make predictions for half-lives of nuclei where the measurements have not yet been performed. In section II the theoretical framework of our approach will be presented with the main ingredients which are the parametrization of the nuclear shape on one side and, on the other side, the model used to describe the energy of the nuclear system as function of the chosen deformation, which in our present study is the macroscopic-microscopic approach together with the Lublin-Strasbourg Drop model.
Section III will explain how the spontaneous-fission half-life can be evaluated in a WKB-type model, based on the least-action path, before we present in section IV our results for such half-lives for some actinide and super-heavy nuclei. Section V finally draws some conclusions and gives an outlook on further studies which can be carried out in our approach. ## II Theoretical Framework To be able to describe very heavy nuclei and their de-excitation through fission or particle emission, a study of the evolution of their energy with deformation is mandatory. We will therefore investigate in what follows the two main ingredients required for such an investigation of what is commonly called the deformation energy of the nucleus, namely the parametrization of the nuclear shape up to very large deformations as they may occur in the fission process, and a model capable to give a reliable description of the nuclear energy at a given deformation. ### Nuclear shape parametrization The description of the huge variety of shapes encountered all across the nuclear chart when going from oblate deformations as they appear in the transition region, generated by the progressive filling of the \(pf\) shell, to prolate shapes as found in the rare-earth region and in actinide nuclei necessitates a sufficiently rich and flexible nuclear shape parametrization. This demand becomes even more stringent if one wants to describe the typically very elongated and often necked-in shapes as they are encountered in the fission process. To model the physical reality (as far as that could be identified) as faithfully as possible, it is obviously required to involve a large number of deformation parameters, depicting the involved degrees of freedom, characterized e.g. by the multipole moments of the nuclear shape. For a numerical treatment, on the other hand, a very large number of deformation parameters would be prohibitive. It is thus the demanding task for the nuclear physicist to identify the essential degrees of freedom of a nuclear shape and to bring these into an analytical form. A very large number of shape parametrizations have been proposed (see Ref. [32] for an extensive review) and are currently used to investigate all kinds of nuclear properties. One of the most widely used (see e.g. [33; 34; 35; 36]) such parametrizations is the one Lord Rayleigh proposed already towards the end of the 19th century [37]. Among other more recent shape parametrizations which have been used to describe the fission process, one should mention the _quadratic surfaces of revolution_ (QSR) [38] of Ray Nix, the Cassini ovals [39; 40] of Pashkevich, the famous Funny-Hills parametrization [41] of the Copenhagen group and its improved version [42], as well as the expansion of the nuclear surface in Legendre polynomials [43] of Trentalange, Koonin and Sierk. While the Rayleigh shapes were defined through the radius vector of any surface point in spherical coordinates \(r_{s}(\theta,\varphi)\), an approach which is certainly well adapted to the description of nuclear shapes reasonably close to a sphere, it became rapidly clear that for the description of rather elongated shapes, as they are encountered in the fission process, a parametrization that defines a surface point in cylindrical coordinates in the form \(\rho_{s}(z,\varphi)\), as this has been done in the Funny-Hills parametrization [41], is much better suited. This is e.g.
demonstrated by the fact that if one is interested in the description of the fission process and in particular in fission barrier heights, the Rayleigh parametrization fails, or rather converges very slowly as has been demonstrated in Ref. [44]. As it has already been mentioned above, the description of nuclear shapes as they appear along the way from the nuclear ground state to the pre-scission configurations is obviously not a trivial task. For practical reasons, that description should contain as few deformation parameters of the nucleus as possible and, at the same time, reproduce at least the major classes of its shapes occurring on its path to fission. Such shapes should comprise, among others, axially-symmetric and asymmetric deformations, elongated forms, characterised in addition by a left-right symmetry or asymmetry, and the possible presence of a neck forming between the two nascent fission fragments. An idea that seems quite straightforward, but had - to our knowledge - never been proposed before (see Ref. [45]), is to make an expansion, in cylindrical coordinates \((\rho,z,\varphi)\), of the square distance \(\rho_{s}^{2}\) of any surface point from the symmetry z-axis in the form of a Fourier expansion: \[\frac{\rho_{s}^{2}(u)}{R_{0}^{2}}=\sum_{n=1}^{\infty}\left[a_{2n}\cos\left(\frac{2n-1}{2}\pi u\right)+a_{2n+1}\sin\left(\frac{2n}{2}\pi u\right)\right], \tag{1}\] where \(R_{0}\) is the radius of the spherical nucleus having the same volume, while \(2\,z_{0}\) is the length of the nuclear shape along the symmetry \(z\)-axis. The dimensionless variable \(u=(z-z_{sh})/z_{0}\) that appears in (1) contains a parameter \(z_{sh}\) given by: \[z_{sh}=\frac{3z_{0}^{2}}{2\pi R_{0}}\sum_{n}(-1)^{n}\,\frac{a_{2n+1}}{n}, \tag{2}\] which ensures that the center-of-mass of the shape is always located at the origin of the coordinate system. It turns out that with a rather limited number of Fourier coefficients (of the order of 2-3), which are going to be our deformation parameters, one is able to describe the nuclear energy within an accuracy of the order of half an MeV, which seems quite acceptable. This accuracy can of course be improved by taking higher-order Fourier coefficients into account. In this way the convergence of our Fourier expansion can be tested, which was e.g. not possible for the Funny-Hills parametrization. The above shape parametrization (1) is obviously limited to axially symmetric shapes. Shapes breaking axial symmetry can, however, be easily taken into account by assuming that the cross section perpendicular to the symmetry \(z\)-axis is always of the form of an ellipse with half axes \(a\) and \(b\) (see Fig. 1), such that \(a\,b=\rho_{s}^{2}(z)\), which ensures volume conservation. One then defines a non-axiality parameter: \[\eta=\frac{b-a}{a+b}, \tag{3}\] which is the relative difference of the half axes \(a\) and \(b\). Assuming that this parameter stays the same all across the nuclear shape, the profile function of the nucleus can then be written in the general case of an axially-asymmetric shape as [46; 47]: \[\rho_{s}^{2}(z,\varphi)=\rho_{s}^{2}(z)\,f_{\eta}(\varphi), \tag{4}\] where \[f_{\eta}(\varphi)=\frac{1-\eta^{2}}{1+\eta^{2}+2\,\eta\cos(2\varphi)}\,.
\tag{5}\] In order to relate the Fourier coefficients \(a_{\nu}\), which are our original deformation coordinates, to some more physical deformation parameters and make them vanish, at the same time, for a spherical shape we have defined new collective coordinates \(q_{\nu}\) through: \[\begin{split} q_{2}&=a_{2}^{(0)}/a_{2}-a_{2}/a_{2}^{(0)},\\ q_{3}&=a_{3},\\ q_{4}&=a_{4}+\sqrt{(q_{2}/9)^{2}+(a_{4}^{(0)})^{2}},\\ q_{5}&=a_{5}-(q_{2}-2)\,\frac{a_{3}}{10},\\ q_{6}&=a_{6}-\sqrt{(q_{2}/100)^{2}+(a_{6}^{(0)})^{2}},\end{split} \tag{6}\] where the \(a_{2n}^{(0)}\), defined by: \[a_{2n}^{(0)}=(-1)^{n-1}32/[\pi(2n-1)]^{3} \tag{7}\] correspond to the values of the \(a_{2n}\) for a sphere. In what follows, we will limit ourselves to only four deformation parameters \((q_{2},q_{3},q_{4},\eta)\), where the parameter \(q_{2}\) determines the elongation of the shape and therefore stands for the quadrupole degree of freedom, \(q_{3}\) for the octupole deformation and thus for the left-right asymmetry and \(q_{4}\) for the hexadecapole deformation and would be responsible for the possible formation of a neck region. Higher order terms would then define higher-order multipole moments. Figure 1: (Color online) Schematic visualization of the parameters entering the definition of the profile function defined through Eqs. (1)-(5). ### The macroscopic-microscopic approach Having defined above an analytical form of the parametrization of the nuclear shape that is rapidly convergent, as already mentioned and as this has been shown e.g. in Ref. [47], we shall now present the macroscopic-microscopic approach which will allow us to evaluate the nuclear energy for any deformation that can possibly be defined through the above shape parametrization. This macroscopic-microscopic approach relies on a parametrization of the average, liquid-drop type energy in the spirit of the Bethe-Weizsacker mass formula [48; 49]. The liquid-drop type approach that we are using in what follows is what is known as the Lublin-Strasbourg Drop (LSD) [8], which has the particularity to contain in the leptodermous expansion a curvature \(A^{1/3}\) term and a congruence energy term [50; 51]. The total nuclear energy is then given by: \[\begin{split} E_{LSD}=&\ b_{vol}(1-k_{vol}I^{2})A\\ &-b_{surf}(1-k_{surf}I^{2})A^{2/3}B_{surf}(\{q_{i}\})\\ &-b_{cur}(1-k_{cur}I^{2})A^{1/3}B_{cur}(\{q_{i}\})\\ &-\frac{3}{5}e^{2}\frac{Z^{2}}{r_{0}^{ch}A^{1/3}}B_{Coul}(\{q_{i}\})+C_{4}\frac{Z^{2}}{A}\\ &-10\,\exp(-4.2|I|),\end{split} \tag{8}\] where \(I=(N-Z)/A\) is the reduced isospin and the shape functions \(B_{surf}\), \(B_{cur}\) and \(B_{Coul}\), all normalized to 1 for a spherical shape, describe the deformation dependence of the surface, curvature and Coulomb terms, respectively. Taking in addition microscopic energy corrections, taken from Ref. [51], into account, the thus obtained total nuclear energy, which had been fitted to reproduce in the best possible way the ground-state masses of the 2766 isotopes with \(N\geq 8\) and \(Z\geq 8\) known at that time (2003), has been quite successful. It has, indeed, been shown that it does not only yield an excellent description of nuclear masses (with an r.m.s. deviation of 0.70 MeV from the experimental data), but that it is also able to reproduce experimentally determined fission-barrier heights with a very good accuracy. The coefficients of this leptodermous expansion are given in Table 1.
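As a quick numerical illustration of Eq. (8), the following minimal Python sketch evaluates the macroscopic LSD energy for a spherical shape, i.e. with all shape functions \(B_{X}(\{q_{i}\})\) set to 1, using the Table 1 parameters. The function name and the value \(e^{2}\simeq 1.44\) MeV fm are our own choices, and the microscopic corrections of Eq. (25) below are of course not included.

```python
import math

# Lublin-Strasbourg Drop parameters from Table 1 (MeV, fm).
B_VOL, K_VOL = 15.4920, 1.8601
B_SURF, K_SURF = 16.9707, 2.2038
B_CUR, K_CUR = 3.8602, -2.3764
C4, R0 = 0.9181, 1.21725
E2 = 1.43996  # e^2 in MeV*fm (assumed conversion constant)

def lsd_energy_sphere(Z, N):
    """Macroscopic LSD energy of Eq. (8) for a spherical shape,
    i.e. with all shape functions B_X({q_i}) set to 1."""
    A = Z + N
    I = (N - Z) / A  # reduced isospin
    return (B_VOL * (1 - K_VOL * I**2) * A
            - B_SURF * (1 - K_SURF * I**2) * A**(2 / 3)
            - B_CUR * (1 - K_CUR * I**2) * A**(1 / 3)
            - 0.6 * E2 * Z**2 / (R0 * A**(1 / 3))
            + C4 * Z**2 / A
            - 10.0 * math.exp(-4.2 * abs(I)))

print(f"E_LSD(236U, sphere) = {lsd_energy_sphere(92, 144):.1f} MeV")
```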
### Shell and pairing corrections Observing that the average nuclear energy can be approximated to some reasonable extent by a macroscopic mass formula, like the one of Weizsacker and Bethe [48; 49], but that there exist quantum effects in such a microscopic system, which are often responsible for the essential physical phenomena, like the structure of the nuclear ground state, a description of the influence of these quantum effects, associated with the existence of the shell structure in nuclei, was introduced by Myers and Swiatecki in 1966 in terms of energy corrections to the smooth liquid-drop energy [52] given in our case by Eq. (8). An important contribution was then made by Strutinsky in 1968 [53; 54; 55] who proposed an efficient and fast method for evaluating the total energy of a nucleus by a smoothing procedure of the single-particle spectrum which at the same time takes into account in some approximate way the influence of the energy levels lying in the continuum. The average nuclear energy obtained in such a way can then be subtracted from the sum of the single-particle levels, to yield the so-called Strutinsky shell-correction energy \[\delta E_{shell}=\sum_{\nu}\left[n_{\nu}-\tilde{n}_{\nu}\right]e_{\nu} \tag{9}\] where \(n_{\nu}\) is a Heaviside step function, with values 0 or 1 depending on whether \(e_{\nu}\) is located above or below the Fermi energy, and \(\tilde{n}_{\nu}\) is obtained by a Strutinsky smoothing procedure [55]. The main advantage of the Strutinsky method is that it can be applied to an arbitrary spectrum of single-particle states. Another microscopic energy correction to the total energy of the nucleus has its origin in the pairing correlations which act, in a BCS-type approach and for the heavy nuclei in our study, only between nucleons of the same type (protons or neutrons). These pairing correlations cause nuclei having an even number of protons or neutrons to be more bound. This pairing interaction is described here by means of the superconducting approach proposed initially for the correlations between electrons by Bardeen, Cooper and Schrieffer [56] in the framework of solid state physics. In order to obtain a many-body solution which is an eigenstate of the particle number operator, an approximate projection of the BCS wave functions onto good particle number is carried out in our approach using the Generator Coordinate Method (GCM) with the Gaussian Overlap Approximation (GOA), as presented e.g. in Ref. [57]. Let us recall that, in general, the \(n^{th}\) GCM many-body state \(|\Psi_{n}(X)\rangle\) is constructed as a function of the single-particle variables \(X\) as \[|\Psi_{n}(X)\rangle=\int dq\,f_{n}(q)|X;q\rangle, \tag{10}\] where \(f_{n}(q)\) is called a weight function and \(|X;q\rangle\) is a generator function (of HF or HFB eigensolutions or BCS many-body solutions) which depends on the single-particle variables \(X\) and parametrically on a certain set of collective variables \(\{q\}\) which can be taken simply as the nuclear deformation parameters or other degrees of freedom describing nuclear collective motions. To determine the weights \(f_{n}(q)\), one assumes the existence of stationary solutions \(\varepsilon_{n}\) of a many-body Hamiltonian \(\hat{H}_{mb}\), with respect to variations \(\delta f_{n}(q)\) \[\langle\Psi_{n}(X)|\hat{H}_{mb}|\Psi_{n}(X)\rangle\approx\langle\Phi_{n}(q)|\hat{\mathcal{H}}_{coll}|\Phi_{n}(q)\rangle=\varepsilon_{n}.
\tag{11}\] Such a prescription represents an approximate way of mapping the single-particle fermionic space onto a collective one, spanned by collective wave functions \(|\Phi_{n}(q)\rangle\). For this purpose, one assumes that the generator coordinates are continuous and the overlap of generating functions \(|X;q\rangle\) has the form of a multidimensional Gauss function or may be transformed into a Gaussian shape. \begin{table} \begin{tabular}{|c|c||c|c||c|c|} \hline \(b_{vol}\) & 15.4920 MeV & \(b_{surf}\) & 16.9707 MeV & \(b_{cur}\) & 3.8602 MeV \\ \hline \(k_{vol}\) & 1.8601 & \(k_{surf}\) & 2.2038 & \(k_{cur}\) & -2.3764 \\ \hline \end{tabular} \begin{tabular}{|c|c||c|c|} \hline \(C_{4}\) & 0.9181 MeV & \(r_{0}\) & 1.21725 fm \\ \hline \end{tabular} \end{table} Table 1: Values of the parameters of the Lublin-Strasbourg Drop model. Let us now choose a generator function \(|X;\varphi,q\rangle\) of the form \[|X;\varphi,q\rangle=e^{i\varphi\hat{\mathcal{N}}}|X;q\rangle_{BCS}, \tag{12}\] where \(\varphi\) is the so-called gauge angle and \(\{q\}\) is the set of our collective deformation parameters \((\eta,q_{2},q_{3},q_{4})\). The hermitian operator \(\hat{\mathcal{N}}\) describes the fluctuations of the particle number \[\hat{\mathcal{N}}=-i\frac{\partial}{\partial\varphi}\equiv\hat{N}-{}_{BCS}\langle X;q|\hat{N}|X;q\rangle_{BCS}. \tag{13}\] With the above assumptions, the generator function entering Eq. (10) may be rewritten to the form \[|X;q\rangle_{m}=\int\limits_{0}^{2\pi}d\varphi e^{i(N+m)\varphi}[e^{-i\hat{N}\varphi}|X;q\rangle_{BCS}] \tag{14}\] with \(m=N-\langle\hat{N}\rangle=0,\pm 2,\pm 4,...\) corresponding to a quantum number of rotation in the gauge space. For \(m=0\) we get the prescription for the typical particle-number projected generator function of the ground state, where no quasi-particle pair is excited. The only effect of the particle number projection is then given by the appearance of a zero-point energy correction \(\epsilon_{0}\) given as \[\epsilon_{0}=\frac{\sum\limits_{\nu>0}\left[\left((e_{\nu}-\lambda)(u_{\nu}^{2}-v_{\nu}^{2})+2\Delta u_{\nu}v_{\nu}+G\,v_{\nu}^{4}\right)/E_{\nu}^{2}\right]}{\sum\limits_{\nu>0}E_{\nu}^{-2}}, \tag{15}\] which, subtracted from the BCS ground-state energy without the projection effects, leads to a deeper ground-state energy at a slightly higher value of the pairing gap \(\Delta\) as compared to the corresponding value in the original BCS approach. In Eq. (15), \(E_{\nu}=\sqrt{(e_{\nu}-\lambda)^{2}+\Delta^{2}}\) is the quasi-particle energy while \(\lambda\) and \(G\) are respectively the BCS Fermi energy and the constant pairing strength. Let us recall that these equations need to be defined independently for protons and neutrons. The summations in the above equations run over the single-particle states inside what is called a pairing window of energy width \(2\Omega\) around the Fermi energy (\(\lambda-\Omega<e_{\nu}<\lambda+\Omega\)). Since the pairing interaction takes place between a pair of particles in time-reversed states and since these have precisely the same energy, this summation runs only over states with one fixed orientation of the total angular momentum (let us call these \(k\!>\!0\)), excluding their time-reversed (\(k\!<\!0\)) partners. In the above equations \(v_{\nu}^{2}\) is the occupation probability of the single-particle state of energy \(e_{\nu}\) while \(u_{\nu}^{2}\) denotes the probability that this state is unoccupied. Obviously, \(v_{\nu}^{2}+u_{\nu}^{2}=1\).
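To make the BCS ingredients entering the above relations concrete, here is a minimal sketch that solves the standard gap and particle-number equations for a toy, equidistant single-particle spectrum; the spectrum, the strength \(G\) and the particle number are illustrative placeholders, and the particle-number projection discussed above is omitted.

```python
import numpy as np
from scipy.optimize import fsolve

# Toy single-particle spectrum inside the pairing window (MeV); in the real
# calculation these are folded-Yukawa eigenvalues.  Each level represents a
# pair of time-reversed states, so sums run over k > 0 states only.
e = np.linspace(-10.0, 10.0, 20)
G = 0.4          # constant pairing strength (assumed value)
N_pairs = 10     # number of pairs to place, i.e. 2*N_pairs particles

def bcs_equations(x):
    delta, lam = x
    E = np.sqrt((e - lam)**2 + delta**2)        # quasi-particle energies
    v2 = 0.5 * (1.0 - (e - lam) / E)            # occupation probabilities
    gap_eq = 2.0 / G - np.sum(1.0 / E)          # BCS gap equation
    num_eq = 2.0 * np.sum(v2) - 2.0 * N_pairs   # particle-number condition
    return gap_eq, num_eq

delta, lam = fsolve(bcs_equations, x0=(1.0, 0.0))
print(f"Delta = {delta:.3f} MeV, lambda = {lam:.3f} MeV")
```

For a half-filled, symmetric spectrum the Fermi energy \(\lambda\) comes out close to zero, while the gap \(\Delta\) grows with the pairing strength \(G\), as expected.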
The single-particle energies \(e_{\nu}\) are the eigenvalues of a mean-field Hamiltonian with a mean-field potential chosen in a well adapted way to describe the nucleus under study at the chosen deformation. In this work this is generally done by folding the deformed shape, generated in our case by the Fourier expansion described in section II.A, with a Yukawa-type folding function as explained e.g. in Refs. [58; 59]. The energy correction generated by the pairing correlations is in general defined as the difference between the nuclear energy, obtained in the above projected \(BCS\) approach and the sum of the single particle energies up to the last occupied level \[\delta E_{pair}=E_{BCS}-\sum e_{\nu}-\tilde{E}_{pair}, \tag{16}\] where \(\tilde{E}_{pair}\) is the so-called average pairing energy which is not included in the liquid drop formula. The ground state energy of the nucleus in such an approximation can then be written as \[E_{BCS}=2\sum\limits_{\nu>0}e_{\nu}v_{\nu}^{2}-G(\sum\limits_{\nu>0}u_{\nu}v_{ \nu})^{2}-G\sum\limits_{\nu>0}v_{\nu}^{4}-\epsilon_{0} \tag{17}\] The average pairing energy, projected onto good particle number, then writes as \[\begin{split}\tilde{E}_{pair}=&-\frac{1}{2}\tilde{g }\,\tilde{\Delta}^{2}+\frac{1}{2}\tilde{g}\,\tilde{\Delta}\,\arctan\left( \frac{\Omega}{\tilde{\Delta}}\right)-\log\left(\frac{\Omega}{\tilde{\Delta}} \right)\,\tilde{\Delta}\\ &+\frac{3}{4}G\,\frac{\Omega/\tilde{\Delta}}{1+\left(\frac{ \Omega}{\tilde{\Delta}}\right)^{2}}/\arctan\left(\frac{\Omega}{\tilde{\Delta }}\right)-\frac{1}{4}G,\end{split} \tag{18}\] where \(\tilde{g}\) is the average density of single-particle levels in the \(2\Omega\) energy window whereas \(\tilde{\Delta}\) denotes the average pairing gap corresponding to a given strength \(G\) of pairing interaction \[\tilde{\Delta}=2\Omega\,e^{-1/G\tilde{g}}. \tag{19}\] In all above considerations one admits a pairing energy window of width \(2\Omega\), containing \(2\sqrt{15N_{q}}\) (\(N_{q}\!=\mathrm{Z}\,\mathrm{or}\,\mathrm{N}\)) single-particle levels around the Fermi level. ### Fitting the pairing strength To be able to carry out the calculations in the above described model with pairing correlations acting inside a pairing window of width \(2\Omega\) around the Fermi energy, one has to adjust the pairing strengths \(G\) and through that, the pairing gaps \(\Delta(G)\) for protons and neutrons. The latter have to reproduce as accurately as possible the experimental proton and neutron pairing gaps \(\Delta_{q}^{(exp)}\) calculated out of measured mass excesses of neighbouring odd-even heavy and super-heavy nuclei, as tabulated e.g. in Ref. [60]. The energy gap \(\Delta_{q}\), (\(q\!=\!n\) or \(p\)) for neutrons or protons produced by the pairing interaction can be expressed as \(\Delta_{q}\!=\!E_{int}^{(q)}/2\), with \(E_{int}^{(q)}\) the interaction energy between two nucleons of type \(q\). For a given nucleus with particle numbers \(N_{q}=N\) or \(Z\), and the corresponding separation energies \(S(N_{q})\) \[E_{int}^{(q)}=S(N_{q})-\frac{1}{2}[S(N_{q}+1)+S(N_{q}-1)], \tag{20}\] the pairing gaps are easily expressed in terms of the empirical mass excesses \(B(N_{q})\) in the following way: \[\Delta_{q}^{(exp)}=\frac{1}{4}\left[2B(N_{q})-B(N_{q}+1)-B(N_{q}-1)\right]. 
\tag{21}\] The pairing strengths \(G\) can therefore be deduced, for all heavy nuclei with \(Z=90-100\) for which the ground-state masses are known, by requiring that the pairing gaps \(\Delta_{q}(G)\) obtained in the BCS approach (including the particle-number projection) are found as close as possible to their empirical values calculated through Eq. (21). One then requires the following expression \[\sum|\Delta_{q}^{(exp)}-\Delta_{q}(G)| \tag{22}\] to be minimal, where the sum runs over the set of the considered nuclei. In order to facilitate the above discussed calculation, one usually tries to find, in practice, a simple analytical expression, depending on \(N,Z\), which is able to reproduce the pairing strength \(G\) for both protons and neutrons in the best possible way. Among many such phenomenological expressions, one which has proven quite successful may be written in the following form: \[G\,A=g_{0}+g_{1}\,(N-Z). \tag{23}\] This expression depends on only two free parameters \(g_{0}\) and \(g_{1}\) which are fitted to the value of \(G\) that renders the expression of Eq. (22) minimal. The optimal values for these two parameters have been found for our sample of nuclei to be \(g_{0}=18.35\) and \(g_{1}=0.103\) for protons and \(g_{0}=24.1\) and \(g_{1}=-0.135\) for neutrons. The quality of this fit is visualized in Fig. 2 (a) and (b) where the values of pairing gaps calculated using Eq. (23) are compared with the empirical ones obtained within Eq. (21). To make this comparison more transparent, we chose to present for each isobaric chain only the one isotope for which the discrepancy between the theoretical and experimental values is the largest. One finds that the largest deviation for neutrons does not exceed 0.35 MeV (\({}^{236}\)Th and \({}^{250}\)Cf) and for protons is always lower than 0.2 MeV. It is worth pointing out that on average, the largest deviations from the empirical pairing gaps for both types of nucleons reach around 0.12 MeV. Figure 2: Comparison between calculated (in black) and empirical (in red) neutron (a) and proton (b) pairing gaps for the discussed isotopic chains from \(Z=90\) up to \(Z=100\). For a better visibility we present for a given value of the mass number \(A\) only the one isotope for which the discrepancy between the theoretical and experimental values is the largest. Panel (c) displays the largest differences between experimental and calculated ground-state masses, evaluated with the pairing strength \(G_{\rm old}\) (red) of Ref. [60] and the one \(G_{\rm fit}\) (black) obtained from Eq. (23). Panel (c) of Fig. 2 presents, for the nuclei of panels (a) and (b), the macroscopic-microscopic ground state energy, relative to the experimental data, with the pairing corrections obtained using the prescription of Eq. (23) (black dots) and a previous fit of the pairing strength (red dots) within the same projected BCS-like formalism as presented above (see [60] and references therein), where the nucleon number dependence of \(G\) is given by: \[G_{q}\cdot N_{q}^{2/3}=g_{q}^{(0)},\quad q=\{n,p\}. \tag{24}\] The only parameter \(g_{q}^{(0)}\) in this parametrisation of the pairing strength is chosen as \(g_{q}^{(0)}\!=\!0.28\hbar\omega_{0}\) with a value of \(\hbar\omega_{0}=41/A^{1/3}\) MeV, widely used in macroscopic-microscopic calculations, and common for both protons and neutrons. Our new fit, Eq.
(23), yields, however, slightly higher ground-state masses and thereby leads to a better reproduction of the experimental data as compared to the ones obtained with the fit of Eq. (24), as shown in Fig. 2. Only very few isotopes (\({}^{228,238}\)U, \({}^{252}\)Cf and \({}^{256}\)Fm) now fail, in our new pairing-strength parametrization, to reproduce the data within a 1.2 MeV margin. Taking into account the above considerations, one can now write down the total energy of the nuclear system in the macroscopic-microscopic approach simply as \[E(N,Z,def)=E_{LSD}+\sum_{q}\left(\delta E_{shell}^{(q)}+\delta E_{pair}^{(q)}\right) \tag{25}\] with the shell and pairing corrections \(\delta E_{shell}^{(q)}\) and \(\delta E_{pair}^{(q)}\) (\(q=n,p\)) being given by Eqs. (9) and (16), respectively. Using the above prescription, we determine the nuclear energy as function of the deformation parameters \(\eta\), \(q_{2}\), \(q_{3}\), \(q_{4}\) introduced in Section II.A, which stand for the non-axiality, elongation, left-right asymmetry and neck formation of the nuclear shape, respectively. The collection of all these energy points constitutes what we call the deformation energy or potential energy surface (PES) of a given nucleus on a discrete four-dimensional mesh. We have chosen a step length of \(\Delta q_{2}=0.05\) for the elongation parameter \(q_{2}\) and a step length of \(\Delta q_{j}=0.03\) for the other 3 deformation parameters with a total mesh size of \(n_{2}\times n_{3}\times n_{4}\times n_{\eta}=60\times 8\times 15\times 8=57600\) nodes. We have verified that within such a mesh size we are able to describe with a good enough accuracy all physically relevant effects, like local minima, saddle points, the formation of valleys and ridges. ## III Multidimensional WKB method The WKB method is a semi-classical approximation which is widely used in quantum mechanical problems to find an approximate solution of the Schrodinger equation involving a potential barrier that a particle has to overcome. The main assumption is that, under the influence of the potential, the particle wave function can still be expressed in terms of a plane wave, but with a momentum \(k(x)\) which is position-dependent and slowly varying with \(x\). ### Lifetimes for spontaneous fission In our approach we have used a multidimensional version of the above characterized WKB approximation to calculate the lifetime of a nucleus undergoing spontaneous fission. This approach has been widely used in nuclear physics for fission and particle or cluster emission to determine the penetrability of a potential-energy barrier defined in a multidimensional deformation space. In the following, the standard one-dimensional WKB method will be generalised to the case of a four-dimensional deformation space, where the deformation variables are the Fourier parameters \(q_{i}\) introduced in Eq. (6). The first step to obtain a good-quality estimate of the lifetime of a system undergoing spontaneous fission is to search for the so-called "_least-action path_" (LAP) leading to fission in our 4-dimensional PES that a nucleus would have to follow on its way to a splitting into fission fragments. Such an approach treats a fission event as a dynamical process, characterized by the collective motion of a large number of nucleons tending to elongate the nuclear shape starting from some initial state, like the nuclear ground state, until the scission configuration is reached.
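Purely as a bookkeeping illustration of the mesh just described, and of the elimination of the non-axiality degree of freedom discussed in the next paragraph, one may picture the stored PES as a four-dimensional array that is minimized over its \(\eta\)-axis; in the sketch below the energies are random placeholders for the actual macroscopic-microscopic values of Eq. (25).

```python
import numpy as np

# Deformation grids with the step sizes and mesh dimensions quoted above.
q2 = 0.05 * np.arange(60)
q3 = 0.03 * np.arange(8)
q4 = 0.03 * np.arange(15)
eta = 0.03 * np.arange(8)

# Placeholder for the stored PES E(q2, q3, q4, eta) of Eq. (25).
E4d = np.random.rand(len(q2), len(q3), len(q4), len(eta))

# Eliminate the non-axiality degree of freedom by minimizing over eta,
# yielding the 3D landscape E(q2, q3, q4) used in the least-action search.
E3d = E4d.min(axis=3)
eta0_idx = E4d.argmin(axis=3)  # index of the minimizing eta at each node
print(E3d.shape)               # (60, 8, 15)
```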
Please note that the collective space in which the fission process is simulated can generally be multidimensional, curvilinear and non-Euclidean. In the framework of our present approach the dynamical path to fission actually proceeds in a three-dimensional space defined by the \((q_{2},q_{3},q_{4})\) deformation parameters, whereas the fourth parameter \(\eta\), defining the non-axiality of the shape, is effectively eliminated by minimization of the full 4D potential energy with respect to that deformation coordinate \(\eta\). One then eventually obtains the 3D total nuclear energy function \(E(q_{2},q_{3},q_{4})=E(\eta^{0},q_{2},q_{3},q_{4})\), Eq. (25), where \(\eta^{0}=\eta^{0}(q_{2},q_{3},q_{4})\) minimizes the 4D potential energy \(E(\eta,q_{2},q_{3},q_{4})\) for given \(\{q_{2},q_{3},q_{4}\}\). One should keep in mind that in the here presented approach the energy \(E(\eta,q_{2},q_{3},q_{4})\) is obtained in the macroscopic-microscopic model, where the shell corrections in (25) are determined in the Strutinsky method, and the correction for the residual pairing interaction in the BCS approximation with projection onto good particle number obtained in the GCM approach (see Ref. [57]). In both these methods, single-particle states of a folded-Yukawa mean-field potential [61] are used. ### Least-action fission path The action in the above introduced 3(+1)-dimensional deformation space \(\{q_{2},q_{3},q_{4};\eta_{0}\}\) can be represented through the following integral: \[S(L)=\int_{s_{1}}^{s_{2}}\sqrt{\frac{2}{\hbar^{2}}\,B_{eff}(s)\,\big{[}E(s)-E_{0}\big{]}}\;ds, \tag{26}\] where the integration runs along a path \(L\) in the deformation space, parametrized by its arc length \(s\), between the classical turning points \(s_{1}\) and \(s_{2}\), \(E(s)\) is the potential energy (25) along the path and \(E_{0}\) the energy of the fissioning system, while \(B_{eff}(s)=\beta\sum_{i,j}B_{ij}(\{q\})\,\frac{dq_{i}}{ds}\frac{dq_{j}}{ds}\) is the effective inertia associated with the motion along the path, built from the components \(B_{ij}\) of the irrotational-flow inertia tensor and scaled by a common multiplier \(\beta\) that will be adjusted to the measured actinide half-lives (see Sect. IV). The minimization of the action integral (26) with respect to all possible paths connecting the two turning points then determines the least-action trajectory itself, where the corresponding fission path tends to omit states associated with a sudden increase of the potential energy or the inertia. For a comparison, we have applied, in addition, another effective prescription of the collective inertia effects simulated by the so-called phenomenological mass parameter \(B(R_{12})\), expressed in units of the reduced mass \(A_{L}A_{R}/(A_{L}+A_{R})\), with \(A_{L}\) and \(A_{R}\) being respectively the mass number of the left and right fragment (see e.g. Ref. [62]) \[B(R_{12})=1+k\frac{17}{15}\exp\bigg{[}\lambda\,(R_{12}^{\rm(sph)}-R_{12})\bigg{]}.
\tag{27}\] The above phenomenological mass depends on a single parameter \(R_{12}\!=\!R_{12}(q_{2},q_{3},q_{4})\) (in units of the radius \(R_{0}\) of the spherical shape) describing the evolution towards fission and which is given by the centers-of-mass distance of the nascent fission fragments. The parameter \(\lambda\!=\!0.408\) describes the "descent rate" of the exponential function, and the centers-of-mass distance for a spherical shape is given by \(R_{12}^{\rm(sph)}\!=\!0.75\). Please notice that the magnitude of the center-of-mass distance \(R_{12}\) depends essentially on the elongation \(q_{2}\) and only weakly on the left-right asymmetry and the neck formation parameters, \(q_{3}\) and \(q_{4}\), respectively. For that reason the least-action fission path obtained in this way cannot be called fully dynamical. According to the main concept, the parameter \(k\) in (27) is chosen, first of all, to ensure that the value of \(B(R_{12})\) for deformations in the vicinity of the barrier is close to the value of the hydrodynamical diagonal mass tensor component projected onto the one-dimensional fission path parameterized by the \(R_{12}\) distance. At the same time, it should reproduce the asymptotic behaviour of the rigid-body inertia when a nucleus splits into two fragments, in which case the inertia of the strongly elongated nucleus, close to the scission configuration, should smoothly merge into the reduced mass of the two fragments. It turned out that the optimal value of this parameter is simply \(k=1\). Let us now explain an effective method to determine the least-action path in our 3D deformation space, which will then be used to calculate the tunneling probability through the fission barrier to determine the spontaneous fission half-lives. In order to define any path in this deformation space, one first of all notices that any continuous and bounded function over a given finite interval of its arguments can always be approximated by a Fourier type expansion, involving only \(\sin\) functions on top of an _average path_ whenever the endpoints of that path are fixed, like in our case, at the ground state and a given exit point. Defining that average path under the barrier by a straight line connecting the ground state and the chosen exit point and considering the elongation parameter \(q_{2}\) as the essential variable responsible for the fission process, one can always approximate the deformation parameters \(q_{3}\) and \(q_{4}\) along the least-action path as functions of \(q_{2}\) in the following way: \[q_{\nu}^{\rm(LAP)}(q_{2})=\bigg{[}q_{\nu_{g.s.}}+\frac{(q_{\nu_{exit}}-q_{\nu_{g.s.}})(q_{2}-q_{2_{g.s.}})}{q_{2_{exit}}-q_{2_{g.s.}}}+\sum_{\ell=1}^{N_{F}}a_{\ell}\,\sin\left(\ell\pi\frac{q_{2}-q_{2_{g.s.}}}{q_{2_{exit}}-q_{2_{g.s.}}}\right)\bigg{]},\ \ \nu=3,4 \tag{28}\] where the amplitudes \(a_{\ell}\) of the series expansion are treated as variational parameters relative to which the minimum of the action integral (26) is being searched. The upper limit \(N_{F}\) of the Fourier series expansion in (28) has to be chosen such that the final result for the tunneling probability becomes essentially independent of \(N_{F}\). We have found that a value of \(N_{F}=14\) turns out to be sufficient to obtain a very good convergence of the Fourier series and thus a well converged tunneling probability.
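A minimal numerical sketch of this variational procedure, for a toy two-dimensional landscape \((q_{2},q_{3})\) and a scalar stand-in for the projected inertia, might look as follows; the potential, inertia and endpoint values are purely illustrative, and the discretized integrand only mimics the structure of the action integral (26).

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-ins for the potential relative to the ground state and for
# the inertia projected onto the path (the real inputs are the
# interpolated PES and the irrotational-flow mass tensor).
V = lambda q2, q3: np.maximum(np.sin(np.pi * q2) - 2.0 * q3**2, 0.0)
B = lambda q2, q3: 1.0 + 0.5 * q3**2

q2_gs, q3_gs, q2_exit, q3_exit = 0.0, 0.0, 1.0, 0.1
NF = 6                                 # number of sine amplitudes, cf. Eq. (28)
q2_grid = np.linspace(q2_gs, q2_exit, 200)
u = (q2_grid - q2_gs) / (q2_exit - q2_gs)

def action(a):
    # Path of Eq. (28): straight average path plus a sine series in q2.
    q3_path = q3_gs + (q3_exit - q3_gs) * u
    for l, al in enumerate(a, start=1):
        q3_path = q3_path + al * np.sin(l * np.pi * u)
    ds = np.hypot(np.diff(q2_grid), np.diff(q3_path))   # arc-length elements
    mid_q2 = 0.5 * (q2_grid[:-1] + q2_grid[1:])
    mid_q3 = 0.5 * (q3_path[:-1] + q3_path[1:])
    integrand = np.sqrt(2.0 * B(mid_q2, mid_q3) * V(mid_q2, mid_q3))
    return np.sum(integrand * ds)

res = minimize(action, x0=np.zeros(NF), method="Nelder-Mead")
print("least action:", res.fun, "amplitudes:", np.round(res.x, 3))
```

Since the toy barrier is lowered by the asymmetry coordinate, the optimized amplitudes push the path towards finite \(q_{3}\), illustrating how the least-action path can deviate from the straight average path.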
Having found the least-action integral value (with respect to the \(a_{\ell}\) amplitudes), one thus obtains the evolution of this path for a given nucleus in the considered 3D deformation space. In Fig. 4 we present, for the \({}^{230}\)U, \({}^{234}\)U and \({}^{252}\)No isotopes, the projections of the full 3D LAP onto the 2D sub-spaces \((q_{2},q_{3})\) and \((q_{2},q_{4})\), indicated by the thick red line. These isotopes have been chosen to cover the region from light to heavy actinides. As can be seen, the PES and the associated LAP in these extreme cases have different characteristics. In the lighter actinide nuclei, due to the importance of the shell effects, the PES is showing a stronger deformation dependence than in the heavy No isotope. Consequently, the fission barrier in uranium, unlike in nobelium, is higher and shows two minima and two saddle points before reaching the scission configuration. Already from this immediate, qualitative analysis, one can expect a shorter half-life for nobelium than for uranium, an analysis which is consistent with the experiment. As can already be deduced from Eq. (26), the final course of the LAP in the multi-dimensional deformation space is dictated by the interplay between the deformation dependent PES and the inertia tensor. This is also the reason why the LAP is always shorter than the least-energy path (LEP) and passes generally through higher energy configurations (sometimes by as much as 2 MeV) as compared to the corresponding LEP. The actions along both these trajectories can therefore differ significantly, thus sometimes causing a difference of several orders of magnitude in the estimates of the fission half-lives. In order to keep the computation time within reasonable limits, without making compromises on the precision of our results, we are able to consider up to the first \(N_{F}=24\) harmonic components of the Fourier series. Clearly, in such a large number of dimensions, one may encounter a problem of distinguishing between some local and the global minimum of the action integral. To avoid this problem, we start the calculations for each nucleus with a low value of \(N_{F}\), e.g. \(N_{F}=6\) (only the \(N_{F}/2=3\) first Fourier components in the series (28) for each of the two functions \(q_{3}(q_{2})\) and \(q_{4}(q_{2})\), on top of the average path, are taken into account) and gradually increase that value, checking after each step whether convergence is obtained. It turns out, however, that restricting ourselves to the first few components of these series, like e.g. \(N_{F}=8-10\), leads to LAPs in Fig. 4 that visually cannot be distinguished from the ones obtained with larger values of \(N_{F}\). Figure 4: Potential energy surfaces for \({}^{230}\)U (a,b), \({}^{234}\)U (c,d) and \({}^{252}\)No (e,f) isotopes projected onto the \((q_{2},q_{3})\) deformation subspace with minimization with respect to \(\eta\) and \(q_{4}\) (left column), and onto the \((q_{2},q_{4})\) subspace, with minimization with respect to \(\eta\) and \(q_{3}\) (right column), with the least-action paths (LAP) represented by the red line and the least-energy path (LEP) by the black line. Let us mention at this point that the here presented method for determining the least-action path works very well in the considered nuclear deformation space restricted to 3 dimensions \((q_{2},q_{3},q_{4})\), but could also be easily extended to a 4D deformation space, where, in addition, the non-axiality deformation would be treated dynamically.
Having calculated the values \(S\) of the action, one can finally determine the spontaneous fission lifetime using the standard WKB relations [63]: \[T_{1/2}^{sf}=\frac{2\pi\ln(2)}{\omega_{0}}\left(1+e^{2S}\right), \tag{29}\] where \(E_{\mathrm{ZPE}}\approx\frac{1}{2}\hbar\omega_{0}\) stands for the zero-point vibration energy which is usually taken to be in the range of \(0.5-1\) MeV. In the present work we have taken a value of \(E_{\mathrm{ZPE}}=0.5\) MeV. ## IV Results The spontaneous fission half-lives are obtained for selected isotopes of actinide nuclei, namely thorium (Th), uranium (U), plutonium (Pu), curium (Cm), californium (Cf), and fermium (Fm), and for super-heavy isotopes of nobelium (No), rutherfordium (Rf), seaborgium (Sg), hassium (Hs) and darmstadtium (Ds) for which experimental data are available [64]. The results of the calculations together with the measured values are presented in Fig. 5. The data for the individual isotopes calculated within the above presented approach are given as open blue circles, while the experimental data are in red. In order to obtain some systematics for the spontaneous fission half-lives displayed in Fig. 5 for all isotopic chains of actinides and super-heavy elements up to \(Z\!=\!110\), we have, first of all, adjusted the parameter \(\beta\) in Eq. (26) to all the measured half-lives of actinide nuclei only, and then, using the thus obtained fixed value, we performed the spontaneous fission half-life calculations for the super-heavy elements. The latter are then compared with the experimental data. Let us mention in this connection that the hydrodynamical inertia tensor has been successfully used in the calculations of the fission properties determined by the configurations lying close to the scission point, such as fragment mass or charge distributions, whereas the barrier penetration occurs at significantly lower elongations around the fission barrier (see e.g. Ref. [66]). As seen in Fig. 3, the inertia components \(B_{22}\) and \(B_{33}\) are much lower in the vicinity of the barrier region than the ones close to scission. On the other hand, the pure hydrodynamical approach is, in its original form, not well adapted for a reliable description of the effective inertia near the barrier. The phenomenological mass parameter (27) contains therefore, as the essential contribution, the rigid-body inertia, together with a term, controlled by the parameter \(k\), determined by the difference between the rigid body and the irrotational flow inertia. Figure 5: Half-lives of actinide and superheavy nuclei obtained in the here discussed 3D WKB approach with the irrotational flow hydrodynamical mass tensor (open blue circles) as compared with the corresponding experimental data (full red circles). However, from the exact fit of the fission half-lives of nuclei in the range \(Z=90\) to \(Z=104\) to the corresponding experimental values, we obtain a value of \(\beta=5\). Let us notice at this point that the hydrodynamical inertia used in our approach differs from the commonly used microscopic mass tensor obtained within the cranking model by almost a factor of 5. This value of \(\beta=5\) ensures that the logarithm of the evaluated half-lives in super-heavy nuclei stays within reasonable limits of approximately 2 [\(1/s\)], which is comparable with other recent evaluations [17; 67].
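For the reader's convenience, the conversion from the least-action integral \(S\) to a half-life in seconds via Eq. (29) can be sketched as follows, with \(\hbar\omega_{0}=2E_{\mathrm{ZPE}}=1\) MeV as adopted here; the sample values of \(S\) are illustrative only.

```python
import math

HBAR_MEV_S = 6.582e-22  # hbar in MeV*s

def sf_half_life(S, E_zpe=0.5):
    """Spontaneous-fission half-life of Eq. (29), in seconds, from the
    least-action integral S and the zero-point energy E_ZPE = hbar*omega0/2
    (both in MeV units)."""
    hbar_omega0 = 2.0 * E_zpe               # hbar*omega0 in MeV
    omega0 = hbar_omega0 / HBAR_MEV_S       # omega0 in 1/s
    return 2.0 * math.pi * math.log(2) / omega0 * (1.0 + math.exp(2.0 * S))

# Each unit of S changes log10(T) by 2/ln(10), i.e. roughly 0.87 orders
# of magnitude, which illustrates the sensitivity discussed in the text.
for S in (20, 25, 30):
    print(S, f"{sf_half_life(S):.2e} s")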
Nevertheless, the evaluated half-lives for \({}^{230}\)Th and some curium isotopes stick out from their isotopic systematics by several (3-4) orders of magnitude, as one can see in Fig. 5. To explain these discrepancies one may refer to Ref. [7], where it is shown, within a simple analytical effective 1D WKB approach, that the main quantity determining the fission half-lives is the fission barrier height \(E_{B}\). Its dependence on the barrier width is already absorbed in the adjustable, second order polynomial of \(E_{B}\), common for all heavy and super-heavy elements. At this point it may also be worth to recall that even a small change in the fission barrier height leads to a substantial decrease/increase of the tunneling probability and as a consequence produces longer/shorter half-lives. Since our macroscopic-microscopic model underestimates the fission barrier heights in \({}^{232-234}\)U by about \(1-2\) MeV (see Ref. [65]), the resulting half-lives are underestimated by as much as some 4 orders of magnitude. A similar effect can be observed for the \({}^{242-246}\)Cm isotopes, where the discrepancies between the experimental and theoretical first and second barriers are the largest throughout the whole isotopic chain. The reason for the considerable overestimation of the half-lives of the super-heavy nuclei in the isotopes \({}^{258-262}\)No and \({}^{256-260}\)Rf is completely different. Due to the fact that the resulting exit point from under the barrier is located at a much more elongated and substantially more mass asymmetric shape, as compared to more symmetric and less elongated exit configurations (higher by only some 0.5 MeV than the equilibrium point), as becomes apparent from Fig. 6, the action integral and consequently the fission lifetimes can easily be significantly overestimated. Comparing the PES obtained within the above presented pairing treatment with the one resulting from the pairing of Ref. [60], one can observe in Fig. 6 that for \({}^{260}\)No the configurations for \(q_{2}\approx 0.8\), which potentially might be treated as candidates for the exit point, are in our present pairing treatment energetically slightly too high, by about \(0.2-0.5\) MeV, with respect to the ground state minimum. That is why the exit point in this pairing treatment had to be found at larger elongations (\(q_{2}\approx 1.1\)) and an asymmetry parameter of \(q_{3}\approx 0.08\), which then leads to an increase of the action integral and the resulting half-life. By performing the half-life calculation with an exit point imposed at \(q_{2}\)=0.8 and \(q_{3}=0\) (see Fig. 6, upper panel), we have confirmed that the obtained half-life value then deviates from the corresponding empirical one by less than 2 orders of magnitude and thus stays within the average discrepancy obtained for the other considered actinides. Figure 6: Two dimensional \((q_{2},q_{3})\) PES for \({}^{260}\)No obtained with the pairing strength \(G\) given in Eq. (23) (top) and with the pairing prescription (24) of Ref. [60] (bottom). Figure 7: Half-lives of actinide nuclei estimated with a phenomenological inertia parameter given by Eq. (27) and the pairing-strength, Eq. (24), of Ref. [60]. We would also like to present in Fig. 7 the results of half-life calculations for actinide nuclei obtained with the phenomenological inertia of Eq. (27).
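As a small numerical aside, Eq. (27) with the quoted constants \(k=1\), \(\lambda=0.408\) and \(R_{12}^{\rm(sph)}=0.75\) can be evaluated directly; the sample distances below are illustrative.

```python
import math

def b_phen(r12, k=1.0, lam=0.408, r12_sph=0.75):
    """Phenomenological inertia of Eq. (27), in units of the reduced
    mass A_L*A_R/(A_L+A_R); r12 is the fragment centers-of-mass
    distance in units of R0."""
    return 1.0 + k * (17.0 / 15.0) * math.exp(lam * (r12_sph - r12))

# B equals 1 + 17/15 at the spherical shape and relaxes towards the
# reduced mass (B -> 1) with growing elongation.
for r12 in (0.75, 1.5, 2.5):
    print(r12, round(b_phen(r12), 3))
```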
Comparing the results obtained with both here presented approximations for the inertia, one observes that the half-lives with the pure hydrodynamical inertia tensor with a multiplier \(\beta=5.0\) (common for all mass-tensor components) are much closer to the experimental results than those with the phenomenological mass of Eq. (27). Please keep in mind in this comparison that the potential energy surfaces and the procedure of searching for the least-action path are, obviously, identical in both these calculations. ## V Conclusions Spontaneous fission half-lives for nuclei with \(90\leq Z\leq 104\) have been described in the macroscopic-microscopic approach together with the Lublin-Strasbourg Drop model, the mean-field generated by a folded-Yukawa potential and updated pairing interaction strengths \(G\) in the BCS approach with a GCM+GOA particle-number projection. The dynamics of the fission process has been simulated by the semiclassical WKB method with the least-action integral describing the evolution of the nucleus in a deformation space given by the expansion coefficients of a Fourier shape parametrization, which stand for the elongation, mass asymmetry, non-axiality and neck degrees of freedom. In order to take into account the variation of the collective inertia along the fission path, we have inserted into the action-integral expression the irrotational flow mass tensor. Since the resulting least-action path to fission tends, to some extent, to omit states where the inertia changes dramatically due to the presence of shell effects, the usage of this effective macroscopic model of collective inertia seems to be well justified. For a comparison, we have also performed similar calculations of fission lifetimes with a collective mass parameter, Eq. (27), which has been introduced already some four decades ago. Quite astonishingly, both these inertia approaches give generally quite close values of the \(T_{1/2}\) half-lives, particularly in uranium, plutonium and curium isotopes, while in thorium, californium and fermium, the use of the collective mass parameter, Eq. (27), leads to a mean deviation reaching several orders of magnitude relative to the experimental results. One may therefore conclude that a simultaneous dynamical treatment of all here discussed degrees of freedom, namely the elongation, mass asymmetry and neck degrees of freedom, is crucial to reproduce the half-life systematics for the spontaneous fission process in heavy nuclei. Let us also keep in mind that spontaneous fission is only one of several possible nuclear decay channels, competing with the emission of light particles (like \(n\) or \(p\)), \(\gamma\) quanta, or the emission of light clusters (like e.g. \(\alpha\) particles). The competition between fission and these other processes is something we are presently working on, and will be the subject of a forthcoming publication. ###### Acknowledgements. This work is supported by the COPIN-IN2P3 agreement (Project No. 08-131) between the Polish and French nuclear laboratories and the Polish National Science Center (Project No. 2018/30/Q/ST2/00185). The work of A.Z. is supported by the Polish National Science Centre Grant No. 2021/43/P/ST2/03036.
2304.05903
Reheating constraints in Instant Preheating
We use Instant Preheating as a mechanism to reheat the universe when its evolution is modeled by a non-oscillating background. Once we obtain the reheating temperature, we calculate the number of e-folds using two different methods, which allows us to establish a relationship between the reheating temperature and the spectral index of scalar perturbations. We explore this connection to constrain the spectral index for different Quintessential Inflation models.
Jaume de Haro
2023-04-12T15:18:57Z
http://arxiv.org/abs/2304.05903v2
# Reheating constraints in Instant Preheating ###### Abstract We use Instant Preheating as a mechanism to reheat the universe when its evolution is modeled by a non-oscillating background. Once we obtain the reheating temperature, we calculate the number of e-folds using two different methods, which allows us to establish a relationship between the reheating temperature and the spectral index of scalar perturbations. We explore this connection to constrain the spectral index for different Quintessential Inflation models. Inflation; Quintessence; Instant Preheating pacs: 04.20.-q, 98.80.Jk, 98.80.Bp ## I Introduction The reheating temperature and the spectral index of scalar perturbations are closely linked in inflationary cosmologies. Therefore, by establishing the relationship between them and determining the range of viable reheating temperatures, we can calculate the possible values of the spectral index and compare them with the observational data provided by Planck's team. With this idea in mind, we calculated the number of e-folds for non-oscillating inflationary models in two different ways. First, we used observational data from the present to the end of inflation, which allowed us to determine the number of e-folds as a function of the reheating temperature and the spectral index. Second, we used the inflationary potential, which states that the number of e-folds depends solely on the spectral index. So, by equating both expressions, we established the relationship between the reheating temperature and the spectral index. The next step is to investigate the relationship between the reheating temperature and the spectral index when the reheating mechanism is the well-known _Instant Preheating_[2; 3]. We apply our results to various Quintessential Inflation (QI) scenarios, such as the Peebles-Vilenkin model [4], exponential \(\alpha\)-attractors [5], and double exponential models [6]. One of our main findings concerning Instant Preheating is that the coupling constant, denoted by \(\tilde{g}\), between the inflaton field and the quantum field responsible for particle production is highly constrained. We found that its value must lie between \(10^{-6}\) and \(10^{-5}\). The lower limit is necessary to prevent vacuum polarization effects during the last e-folds of inflation from affecting the evolution of the inflaton field. The upper limit is due to the requirement that the reheating temperature is below \(10^{9}\) GeV to avoid interference with the success of Big Bang Nucleosynthesis (BBN) [1], caused by the late decay of gravitationally interacting particles, such as the gravitino or the moduli fields. For a coupling value around \(\tilde{g}\cong 5\times 10^{-6}\), shortly after the start of kination, the created particles become non-relativistic, and during the kination phase, they decay into lighter particles, which reheats the universe to a temperature restricted to the range of \(10^{-12}M_{pl}\) to \(10^{-10}M_{pl}\), where \(M_{pl}\) is the reduced Planck mass. After obtaining the maximum and minimum values of the reheating temperature, we use the link between the reheating temperature and the spectral index of the scalar perturbations for a given Quintessential Inflation (QI) model to constrain it. This results in a narrow range of viable values, which falls within the \(2\sigma\) Confidence Level of the observable values obtained by the Planck's team. 
Finally, we also investigate the implications of this relationship when reheating occurs via gravitational particle production. Specifically, we consider an exponential \(\alpha\)-attractor model and show the interconnection between the spectral index and the mass of the produced particles. We find that for a spectral index close to \(n_{s}\cong 0.97\), there are heavy as well as light masses that can give rise to viable reheating temperatures ranging from 1 MeV to \(10^{7}\) GeV. The present work is organized as follows: In Section II, we investigate the Instant Preheating mechanism and obtain the range of viable reheating temperatures. In Section III, we present the two different methods used for calculating the number of last e-folds and the relationship between the spectral index and the reheating temperature. In Section IV, we apply this relationship to different QI models. In Section V, we consider reheating via gravitational particle production and apply it to \(\alpha\)-attractors to compute feasible reheating temperatures. Finally, in the concluding Section, we summarize our findings and present our conclusions. ## II Instant Preheating In this section, we will review one of the most commonly used reheating mechanisms for non-oscillating models, known as Instant Preheating, which was introduced by Felder, Kofman, and Linde in [2; 3]. The basic concept is that the inflaton field, denoted as \(\varphi\), is coupled to a scalar quantum field \(\phi\), and this coupling is responsible for particle production. The Lagrangian density of the quantum field \(\phi\) is given by \[\mathcal{L}=\frac{1}{2}\sqrt{|g|}(g^{\mu\nu}\partial_{\mu}\phi\partial_{\nu}\phi-(m^{2}+\tilde{g}^{2}(\varphi-\varphi_{kin})^{2})\phi^{2}-\xi R\phi^{2}), \tag{1}\] where \(m\) is the bare mass of the field, \(R\) is the Ricci scalar, \(\varphi_{kin}\) is the value of the inflaton at the beginning of kination, and \(\tilde{g}\) is the dimensionless coupling constant between the inflaton field and the quantum field. Considering conformally coupled particles, i.e., choosing \(\xi=1/6\), the frequency of the modes will be given by: \[\omega_{k}^{2}(\eta)=k^{2}+m_{eff}^{2}(\eta)a^{2}(\eta), \tag{2}\] where \(m_{eff}(\eta)=\sqrt{m^{2}+\tilde{g}^{2}(\varphi(\eta)-\varphi_{kin})^{2}}\) is the effective mass of the produced particles. The analytic computation of the Bogoliubov coefficients is based on the linear approximation \(\varphi(\eta)-\varphi_{kin}\cong\varphi^{\prime}_{kin}(\eta-\eta_{kin})\) and the assumption that the universe is static with \(a(\eta)=a_{kin}\). Then, the frequency becomes \[\omega_{k}^{2}(\eta)=k^{2}+(m^{2}+\tilde{g}^{2}(\varphi^{\prime}_{kin})^{2}(\eta-\eta_{kin})^{2})a_{kin}^{2} \tag{3}\] and, thus, the analytic value of the \(\beta\)-Bogoliubov coefficients is given by [3]: \[|\beta_{k}|^{2}\cong\exp\left(-\frac{\pi(k^{2}+m^{2}a_{kin}^{2})}{\tilde{g}a_{kin}\varphi^{\prime}_{kin}}\right)=\exp\left(-\frac{\pi(k^{2}+m^{2}a_{kin}^{2})}{\sqrt{6}\tilde{g}a_{kin}^{2}H_{kin}M_{pl}}\right). \tag{4}\] This last formula was tested numerically in [7] for the original Peebles-Vilenkin model [4], and also in [8] for the non-oscillating background \[a^{2}(\eta)=\frac{1}{2}\left[(1-\tanh(\eta/\Delta\eta))\,\frac{1}{1+H_{inf}^{2}\eta^{2}}+(1+\tanh(\eta/\Delta\eta))(3+2H_{inf}\eta)\right], \tag{5}\] where the scale of inflation, denoted by \(H_{inf}\), is typically of the order \(10^{-6}M_{pl}\) in the majority of inflationary models. 
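As a quick numerical illustration of Eq. (4) (a sketch with representative values, not the paper's fitted parameters; we use \(\varphi^{\prime}_{kin}=\sqrt{6}a_{kin}H_{kin}M_{pl}\)), one can evaluate the \(\beta\)-Bogoliubov coefficient at a typical momentum and see how the bare mass exponentially suppresses particle production:

```python
import numpy as np

# Illustrative values only (not fitted parameters of the paper); reduced
# Planck units, with a_kin normalized to 1.
Mpl, H_kin, a_kin, g = 1.0, 1e-7, 1.0, 6e-6

def beta2(k, m):
    """|beta_k|^2 of Eq. (4) for a comoving mode k and bare mass m."""
    return np.exp(-np.pi * (k**2 + (m * a_kin)**2)
                  / (np.sqrt(6.0) * g * a_kin**2 * H_kin * Mpl))

k_typ = np.sqrt(g * H_kin * Mpl)        # typical momentum scale, cf. Eq. (6)
for m in (0.0, k_typ, 10 * k_typ):      # production is suppressed as m grows
    print(f"m = {m:.1e} Mpl : |beta_k|^2 = {beta2(k_typ, m):.2e}")
```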
The time scale of the phase transition from the end of inflation to the beginning of kination is represented by \(\Delta\eta\). Note that the production of particles is exponentially suppressed for large values of the bare mass, due to the form of the \(\beta\)-Bogoliubov coefficient. Therefore, we will set \(m=0\), which yields \(m_{eff}=\tilde{g}(\varphi-\varphi_{kin})\). Additionally, the modes that contribute to the particle production are those satisfying \[\frac{k^{2}}{a_{kin}^{2}}<\tilde{g}H_{kin}M_{pl}. \tag{6}\] Then, in order to have non-relativistic particles during kination, for which \(\omega_{k}(\eta)\cong a(\eta)m_{eff}(\eta)\), we need to demand \[\tilde{g}H_{kin}M_{pl}<\tilde{g}^{2}(\varphi(\eta)-\varphi_{kin})^{2}\cong\tilde{g}^{2}M_{pl}^{2}\ln^{2}\left(\frac{H_{kin}}{H(\eta)}\right), \tag{7}\] where we have used that during kination, the inflaton field evolves according to: \[\varphi(\eta)=\varphi_{kin}+\sqrt{\frac{2}{3}}M_{pl}\ln\left(\frac{H_{kin}}{H(\eta)}\right). \tag{8}\] Therefore, if we consider \(H(\eta)<H_{kin}/3\), then the quantity \(\ln\left(\frac{H_{kin}}{H(\eta)}\right)\) is greater than 1. Additionally, in the majority of inflationary models, we have \(H_{kin}\cong 10^{-7}M_{pl}\). So, by imposing the following condition: \[\tilde{g}H_{kin}M_{pl}<\tilde{g}^{2}M_{pl}^{2}\Longrightarrow\tilde{g}>H_{kin}/M_{pl}\cong 10^{-7}, \tag{9}\] we can ensure that the particles become non-relativistic shortly after the start of kination. After the beginning of kination, when the non-relativistic particles have already been created, the inflaton field evolves according to: \[\ddot{\varphi}+3H\dot{\varphi}=-\tilde{g}\langle\hat{\phi}^{2}\rangle m_{eff}, \tag{10}\] where \(\langle\hat{\phi}^{2}\rangle\) is the renormalized vacuum average of the quantum operator \(\hat{\phi}^{2}\). To prevent an undesirable second inflationary period, we need to demand that the right-hand side of Eq. (10) is subdominant before the decay of these non-relativistic particles. Therefore, before the decay, we need to impose the following condition: \[H\dot{\varphi}\gg\tilde{g}\langle\hat{\phi}^{2}\rangle m_{eff}. \tag{11}\] Effectively, if the right-hand side of Eq. (10) ceases to be negligible, the inflaton field would be under the action of the quadratic potential given by: \[V(\varphi)=\frac{m_{eff}^{2}}{2}\langle\hat{\phi}^{2}\rangle=\frac{1}{2}\tilde{g}^{2}(\varphi-\varphi_{kin})^{2}\langle\hat{\phi}^{2}\rangle. \tag{12}\] As a result, the field will roll down to \(\varphi_{kin}\), which could potentially initiate a new inflationary phase that we do not desire. 
Therefore, taking into account that for non-relativistic particles the evolution of \(\phi\) is approximately that of a harmonic oscillator: \[(a\phi)^{\prime\prime}+a^{2}(\eta)m_{eff}^{2}(\eta)(a\phi)=0, \tag{13}\] because the term \(\Delta(a\phi)\) is negligible for non-relativistic particles, we find that it evolves as \[a\phi\propto e^{-i\int am_{eff}}\Longrightarrow(a\phi)^{\prime}\sim a(\eta)m_{eff}(\eta)(a\phi), \tag{14}\] meaning that the re-normalized vacuum energy density, which is the effective mass multiplied by the number density of produced particles, is like that of a harmonic oscillator, i.e., \[\langle\hat{\rho}(\eta)\rangle=m_{eff}(\eta)\langle\hat{N}(\eta)\rangle\cong\frac{1}{2a^{4}(\eta)}\langle((a\hat{\phi})^{\prime})^{2}+a^{2}(\eta)m_{eff}^{2}(\eta)(a\hat{\phi})^{2}\rangle\cong m_{eff}^{2}(\eta)\langle\hat{\phi}^{2}(\eta)\rangle, \tag{15}\] leading to [3] (see also the Appendix of this work, where a more rigorous derivation is given): \[\langle\hat{\phi}^{2}(t)\rangle\cong\frac{\langle\hat{N}(t)\rangle}{m_{eff}(t)}, \tag{16}\] and recalling that during kination \(H\sim\dot{\varphi}/M_{pl}\), the condition (11) becomes: \[\dot{\varphi}^{2}(t)\gg\tilde{g}M_{pl}\langle\hat{N}(t)\rangle\Longrightarrow\rho_{B}(t)\gg\tilde{g}M_{pl}\langle\hat{N}(t)\rangle, \tag{17}\] where \(\rho_{B}(t)=\frac{\dot{\varphi}^{2}(t)}{2}\) is the energy density of the background in the kination phase. Shortly after the beginning of kination, as we have already shown, the effective mass of the produced particles becomes greater than \(\tilde{g}M_{pl}\), meaning that if they decay into lighter ones to reheat the universe before the end of kination, i.e., if \(\rho_{B}(t)\gg\langle\hat{\rho}(t)\rangle\cong m_{eff}(t)\langle\hat{N}(t)\rangle\) before their decay, the bound (11) will be automatically satisfied. Then, to ensure that the inflaton field rolls towards infinity, as in all QI models, we will assume that the decay of the produced particles into lighter ones occurs before the end of kination. Now, we calculate the energy densities at the time of decay, which occurs when \(H\sim\Gamma\), where \(\Gamma\) is the decay rate. The corresponding energy densities are \[\rho_{B,dec}=3\Gamma^{2}M_{pl}^{2}\qquad\text{and}\qquad\langle\hat{\rho}_{dec}\rangle\cong m_{dec}\frac{\Gamma}{H_{kin}}\langle\hat{N}_{kin}\rangle, \tag{18}\] where \(m_{dec}\equiv m_{eff}(t_{dec})\) and we have used that during kination the Hubble rate scales as \(a^{-3}\), which implies \(\left(\frac{a_{kin}}{a_{dec}}\right)^{3}=\frac{\Gamma}{H_{kin}}\). 
After the decay, the energy densities evolve as \[\rho_{B}(t)=3\Gamma^{2}M_{pl}^{2}\left(\frac{a_{dec}}{a(t)}\right)^{6}\qquad\text{and}\qquad\langle\hat{\rho}(t)\rangle\cong m_{dec}\frac{\Gamma}{H_{kin}}\langle\hat{N}_{kin}\rangle\left(\frac{a_{dec}}{a(t)}\right)^{4}, \tag{19}\] and since the reheating occurs at the end of kination, i.e., when \(\rho_{B}(t)\sim\langle\hat{\rho}(t)\rangle\), we have \[\left(\frac{a_{dec}}{a_{reh}}\right)^{2}=\frac{m_{dec}\langle\hat{N}_{kin}\rangle}{3\Gamma H_{kin}M_{pl}^{2}}, \tag{20}\] and thus, using the Stefan-Boltzmann law, the reheating temperature is given by: \[T_{reh}=\left(\frac{30}{\pi^{2}g_{reh}}\right)^{1/4}\langle\hat{\rho}_{reh}\rangle^{1/4}=\left(\frac{10}{3\pi^{2}g_{reh}}\right)^{1/4}\left(\frac{m_{dec}\langle\hat{N}_{kin}\rangle}{\Gamma^{1/3}H_{kin}M_{pl}^{8/3}}\right)^{3/4}M_{pl}\] \[=\left(\frac{5\sqrt{3}}{\pi^{11}g_{reh}}\right)^{1/4}\left(\tilde{g}^{3/2}\frac{m_{dec}\rho_{B,kin}^{1/4}}{\Gamma^{1/3}M_{pl}^{5/3}}\right)^{3/4}M_{pl}, \tag{21}\] where \(g_{reh}=106.75\) is the effective number of degrees of freedom for the Standard Model, and we have taken into account that: \[\langle\hat{N}_{kin}\rangle=\frac{1}{2\pi^{2}a_{kin}^{3}}\int_{0}^{\infty}k^{2}|\beta_{k}|^{2}dk=\frac{1}{8\pi^{3}}(\tilde{g}\sqrt{2\rho_{B,kin}})^{3/2}. \tag{22}\] After some algebra, one has \[T_{reh}\cong 2\times 10^{-2}\tilde{g}^{15/8}\left(\frac{\sqrt{H_{kin}}}{\Gamma^{1/3}M_{pl}^{1/6}}\right)^{3/4}\ln^{3/4}\left(\frac{H_{kin}}{\Gamma}\right)M_{pl}, \tag{23}\] which for \(H_{kin}\cong 10^{-7}M_{pl}\) becomes \[T_{reh}\cong 3\times 10^{-3}\tilde{g}^{15/8}\left(\frac{M_{pl}}{\bar{\Gamma}}\right)^{1/4}\ln^{3/4}\left(\frac{M_{pl}}{\bar{\Gamma}}\right)M_{pl}, \tag{24}\] where we have introduced the notation \(\bar{\Gamma}\equiv 10^{7}\Gamma\). On the other hand, the condition that the decay occurs during kination leads to the constraint: \[\frac{\sqrt{2}}{3\sqrt{3}}10^{14}\ln\left(\frac{M_{pl}}{\bar{\Gamma}}\right)\frac{\tilde{g}\langle\hat{N}_{kin}\rangle}{M_{pl}^{2}}\leq\bar{\Gamma}<M_{pl}/3\Longleftrightarrow 10\tilde{g}^{5/2}\ln\left(\frac{M_{pl}}{\bar{\Gamma}}\right)\leq\frac{\bar{\Gamma}}{M_{pl}}<\frac{1}{3}, \tag{25}\] where the condition \(\bar{\Gamma}<M_{pl}/3\) comes from the fact that by imposing it, we have \(m_{dec}\geq\tilde{g}M_{pl}\). For \(\tilde{g}>10^{-7}\), as we have already shown, this ensures that the decaying particles are non-relativistic. Noticing that a viable reheating temperature should be above 1 MeV, as this is the temperature at which Big Bang nucleosynthesis (BBN) occurs, we have: \[5\times 10^{-22}M_{pl}\leq T_{reh}<\sqrt{M_{pl}\Gamma}\Longrightarrow\frac{\bar{\Gamma}}{M_{pl}}\geq 10^{-36}\Longrightarrow\ln\left(\frac{M_{pl}}{\bar{\Gamma}}\right)\leq 10^{2}, \tag{26}\] where we have used the fact that the decay occurs before reheating. This last restriction implies that the reheating temperature is bounded by \[T_{reh}\leq 10^{-1}\tilde{g}^{15/8}\left(\frac{M_{pl}}{\bar{\Gamma}}\right)^{1/4}M_{pl}, \tag{27}\] which, in order to ensure that the reheating temperature is below \(5\times 10^{-10}M_{pl}\cong 10^{9}\) GeV, leads to \[\frac{\bar{\Gamma}}{M_{pl}}\geq 2\times 10^{33}\tilde{g}^{15/2}. \tag{28}\] The condition \(2\times 10^{33}\tilde{g}^{15/2}\geq 10^{3}\tilde{g}^{5/2}\) implies \(\tilde{g}\geq 10^{-6}\). Consequently, by choosing \(10^{-6}\leq\tilde{g}\leq 3\times 10^{-5}\), we ensure that \(m_{dec}<M_{pl}\), avoiding problems during BBN. 
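As a numerical sanity check of Eq. (24), the following minimal sketch (in reduced Planck units; the value \(\tilde{g}=6\times 10^{-6}\) anticipates the choice made below) evaluates the reheating temperature at the two ends of the allowed window for \(\bar{\Gamma}\):

```python
import numpy as np

def T_reh(g, Gb):
    """Eq. (24) in reduced Planck units; Gb = bar(Gamma)/Mpl = 1e7*Gamma/Mpl."""
    return 3e-3 * g**(15/8) * Gb**(-0.25) * np.log(1.0 / Gb)**0.75

g = 6e-6                       # a coupling inside the window 1e-6 <= g <= 3e-5
Gb_min = 2e33 * g**(15/2)      # gravitino bound, Eq. (28)
Gb_max = 1.0 / 3.0             # decay during kination, Eq. (25)
print(T_reh(g, Gb_min))        # ~1e-10 (maximum temperature, in units of Mpl)
print(T_reh(g, Gb_max))        # ~7e-13 (minimum temperature)
```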
If the effective mass becomes greater than the Planck mass, each particle would become a Planck-sized black hole, which would immediately evaporate and produce gravitinos or moduli fields. Thus, a late decay could potentially jeopardize the success of BBN. With this condition satisfied, the constraint (25) becomes: \[2\times 10^{33}\tilde{g}^{15/2}\leq\frac{\bar{\Gamma}}{M_{pl}}<1/3. \tag{29}\] Finally, it is also important to ensure that the vacuum fluctuations do not disturb the evolution of the inflaton during the last stages of inflation, which is accomplished by imposing that \(m_{eff}(t)\geq H(t)\). Noticing that during the last stage of inflation the effective mass is of the order \(\tilde{g}M_{pl}\) and assuming, as in most inflationary models, that the scale of inflation is of the order \(10^{-6}M_{pl}\), one has to impose \(\tilde{g}\geq 10^{-6}\). We can show this in the case of a quadratic potential \(V(\varphi)=\frac{M^{2}}{2}(\varphi-\varphi_{kin})^{2}\) with mass \(M\), where the power spectrum of scalar perturbations \[\mathcal{P}_{\zeta}=\frac{H_{*}^{2}}{8\pi^{2}M_{pl}^{2}\epsilon_{*}}\cong 2\times 10^{-9}, \tag{30}\] (the "star" means that the quantities are evaluated at the horizon crossing), together with the slow roll parameters \(\epsilon_{*}\) and \(\eta_{*}\), and the well-known relation \(1-n_{s}=6\epsilon_{*}-2\eta_{*}\), where \(n_{s}\) is the spectral index, tells us that the value of the mass is \(M\sim 16\pi\sqrt{0.3}(1-n_{s})10^{-4}M_{pl}\). Then, the condition \(m_{eff}(t)\geq H(t)\) becomes: \[\tilde{g}^{2}(\varphi-\varphi_{kin})^{2}\geq H^{2}\cong\frac{V(\varphi)}{3M_{pl}^{2}}=\frac{M^{2}}{6M_{pl}^{2}}(\varphi-\varphi_{kin})^{2}\Longrightarrow\tilde{g}\geq\frac{3}{4}(1-n_{s})10^{-4}\cong 10^{-6}, \tag{31}\] where we have taken the conservative value \(1-n_{s}\cong 10^{-2}\). Thus, a successful reheating of the universe is achieved through the Instant Preheating mechanism when the value of the coupling constant satisfies \(10^{-6}\leq\tilde{g}\leq 3\times 10^{-5}\), which improves the result \(10^{-6}\leq\tilde{g}\ll 1\) obtained in [3]. For example, taking \(\tilde{g}=6\times 10^{-6}\), we obtain \[T_{reh}\cong 5\times 10^{-13}\left(\frac{M_{pl}}{\bar{\Gamma}}\right)^{1/4}\ln^{3/4}\left(\frac{M_{pl}}{\bar{\Gamma}}\right)M_{pl},\qquad\text{with}\qquad 10^{-6}\leq\frac{\bar{\Gamma}}{M_{pl}}<1/3, \tag{32}\] which leads to the following maximum and minimum reheating temperatures \[T_{reh}^{max}\cong 10^{-10}M_{pl}\qquad\text{and}\qquad T_{reh}^{min}\cong 7\times 10^{-13}M_{pl}. \tag{33}\] In summary, choosing \(\tilde{g}\cong 6\times 10^{-6}\) guarantees that the vacuum polarization effects do not disturb the evolution of the inflaton field during the last stages of inflation. Additionally, the particles become non-relativistic and have masses less than \(M_{pl}\) soon after the start of kination, and their decay occurs during this phase. In this situation, a viable reheating temperature below \(5\times 10^{-10}M_{pl}\) is obtained, ensuring the success of BBN. ## III Number of e-folds Let \(N\) be the number of e-folds from horizon crossing to the end of inflation. Then, we have \[a_{*}=e^{-N}a_{END}, \tag{34}\] where "\(END\)" denotes the end of inflation, and once again, the "star" means that the quantities are evaluated at horizon crossing. 
Since the pivot scale \(k_{*}\) is defined as \(k_{*}=a_{*}H_{*}\) (at horizon crossing), we have \[\frac{k_{*}}{a_{0}H_{0}}=e^{-N}\frac{H_{*}}{H_{0}}\frac{a_{END}}{a_{kin}}\frac{a_{kin}}{a_{end}}\frac{a_{end}}{a_{m}}\frac{a_{m}}{a_{0}}=e^{-N}\frac{H_{*}}{H_{0}}\frac{a_{END}}{a_{kin}}\frac{\rho_{end}^{-1/12}\rho_{m}^{1/4}}{\rho_{kin}^{1/6}}\frac{a_{m}}{a_{0}}, \tag{35}\] where the sub-index "\(m\)" denotes the matter-radiation equality, "end" the end of kination, "0" the present time, and we have used the relations \[\rho_{end}=\rho_{kin}\left(\frac{a_{kin}}{a_{end}}\right)^{6},\qquad\rho_{m}=\rho_{end}\left(\frac{a_{end}}{a_{m}}\right)^{4}. \tag{36}\] In dealing with Instant Preheating, we have already shown that the decay of non-relativistic particles must occur prior to the end of kination. Therefore, we will have: \[\rho_{end}=\rho_{reh}=\frac{g_{reh}\pi^{2}}{30}T_{reh}^{4}. \tag{37}\] Next, as a physical scale, we use \(k_{\rm phys,0}\equiv k_{*}/a_{0}=2\times 10^{-2}\;{\rm Mpc}^{-1}\) [9], and for the current Hubble scale, \(H_{0}\cong 2\times 10^{-4}\;{\rm Mpc}^{-1}\cong 6\times 10^{-61}M_{pl}\). In addition, since the evolution is adiabatic after the matter-radiation equality, i.e., entropy is conserved, we have \(a_{m}T_{m}=a_{0}T_{0}\), as well as the relation \(\rho_{m}=\frac{g_{m}\pi^{2}}{30}T_{m}^{4}\), where \(g_{m}=3.36\) is the effective number of degrees of freedom at the matter-radiation equality. Hence, \[N=-4.6+\ln\left(\frac{H_{*}}{H_{0}}\right)+\ln\left(\frac{a_{END}}{a_{kin}}\right)+\frac{1}{6}\ln\left(\frac{\rho_{reh}}{\rho_{kin}}\right)+\frac{1}{4}\ln\left(\frac{g_{m}}{g_{reh}}\right)+\ln\left(\frac{T_{0}}{T_{reh}}\right). \tag{38}\] Now, considering the formula for the power spectrum of scalar perturbations (30), we can infer that \(H_{*}\approx 4\times 10^{-4}\sqrt{\epsilon_{*}}M_{pl}\). By using the present values of the Hubble rate and temperature \(T_{0}\approx 2.73\;{\rm K}\approx 2\times 10^{-13}\;{\rm GeV}\approx 8\times 10^{-32}M_{pl}\), we can calculate the number of e-folds as a function of the reheating temperature and \(\epsilon_{*}\). \[N(T_{reh},\epsilon_{*})\cong 54.47+\frac{1}{2}\ln\epsilon_{*}+\frac{1}{3}\ln\left(\frac{M_{pl}^{2}}{T_{reh}H_{END}}\right), \tag{39}\] where we have neglected the model-dependent term \(\ln\left(\frac{a_{END}}{a_{kin}}\right)\) since it is close to zero, and we have assumed that there is no significant drop in energy during the phase transition from the end of inflation to the beginning of kination. It is important to note that for a given potential \(V\), one can calculate \[\epsilon_{*}=\frac{M_{pl}^{2}}{2}\left(\frac{V_{*}^{\prime}}{V_{*}}\right)^{2}\qquad{\rm and}\qquad H_{END}^{2}=\frac{V_{END}}{2M_{pl}^{2}}. \tag{40}\] On the other hand, the number of e-folds can also be calculated from the formula \[N\cong\frac{1}{M_{pl}^{2}}\int_{\varphi_{*}}^{\varphi_{END}}\left|\frac{V(\varphi)}{V^{\prime}(\varphi)}\right|d\varphi=\frac{1}{M_{pl}}\int_{\varphi_{*}}^{\varphi_{END}}\frac{1}{\sqrt{2\epsilon}}d\varphi. \tag{41}\] As we will see, \(N\) is a function of \(\epsilon_{*}\), which is also related to the spectral index \(n_{s}\). By equating both expressions, we obtain a relationship between the reheating temperature and the spectral index, given by: \(N(T_{reh},\epsilon_{*}(n_{s}))=N(n_{s})\). We will explore this relationship in the next section. 
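The matching \(N(T_{reh},\epsilon_{*}(n_{s}))=N(n_{s})\) is convenient to carry out numerically. A minimal sketch, inverting Eq. (39) for \(T_{reh}\) in reduced Planck units (the sample values of \(N\), \(\epsilon_{*}\) and \(H_{END}\) are purely illustrative):

```python
import numpy as np

def T_reh_from_N(N, eps_star, H_end, Mpl=1.0):
    """Invert Eq. (39): the T_reh for which N(T_reh, eps_star) equals N."""
    return (Mpl**2 / H_end) * np.exp(3.0 * (54.47 + 0.5 * np.log(eps_star) - N))

# purely illustrative inputs: N = 65, eps_* = 1e-2, H_END = 1e-6 Mpl
print(T_reh_from_N(65.0, 1e-2, 1e-6))   # ~2e-11 Mpl, i.e. ~5e7 GeV
```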
## IV Reheating constraints In this section, we will use the results obtained in the previous section to analyze the feasibility of three important Quintessential Inflation models. Specifically, we will study the relationship between the reheating temperature and the spectral index for each of these models. By analyzing these relationships, we can determine if these models are consistent with observational data and if they are viable candidates for explaining the evolution of the Universe. ### The Peebles-Vilenkin model The first Quintessential Inflation scenario was proposed by Peebles and Vilenkin in their seminal paper [4] at the end of the 20th century, shortly after the discovery of cosmic acceleration. The corresponding potential is given by \[V(\varphi)=\left\{\begin{array}{ccc}\lambda(\varphi^{4}+M^{4})&\mbox{for}&\varphi\leq 0\\ \lambda\frac{M^{8}}{\varphi^{4}+M^{4}}&\mbox{for}&\varphi\geq 0.\end{array}\right. \tag{42}\] Here, \(\lambda\sim 10^{-14}\) is a dimensionless parameter and \(M\) is a very small mass compared to the Planck mass \(M_{pl}\). It is important to note that the quartic potential is responsible for inflation, while the inverse power law leads to dark energy (in this case, quintessence) at later times. Since for this model \(\epsilon=\frac{8M_{pl}^{2}}{\varphi^{2}}\), we have \(\varphi_{END}=-2\sqrt{2}M_{pl}\), and taking into account that for a quartic potential \(3\epsilon_{*}=1-n_{s}\), we get \(\varphi_{*}=-\frac{2\sqrt{6}}{\sqrt{1-n_{s}}}M_{pl}\). So, the number of e-folds will be \[N=\frac{1}{4M_{pl}^{2}}(\varphi_{*}^{2}-\varphi_{END}^{2})=\frac{6}{1-n_{s}}-2. \tag{43}\] On the other hand, using that inflation ends when \(\epsilon_{END}=1\), i.e., when \(w_{eff}=-1/3\), one has \(\dot{\varphi}_{END}^{2}=V(\varphi_{END})\), and thus, \[\rho_{END}=\frac{3V(\varphi_{END})}{2}=96\lambda M_{pl}^{4}\Longrightarrow H_{END}=4\sqrt{2\lambda}M_{pl}. \tag{44}\] Then, from Eqs. (43) and (39) we get \[\frac{6}{1-n_{s}}-\frac{1}{2}\ln\left(\frac{1-n_{s}}{3}\right)\cong 56.47+\frac{1}{3}\ln\left(\frac{M_{pl}}{4\sqrt{2\lambda}T_{reh}}\right), \tag{45}\] which, for \(\lambda\cong 10^{-14}\), leads to \[T_{reh}\cong(1-n_{s})^{3/2}\exp\left(182-\frac{18}{1-n_{s}}\right)M_{pl}. \tag{46}\] Finally, expressing the reheating temperature \(T_{reh}\) as a function of the spectral index \(n_{s}\), we can use observational data to constrain the parameter space. According to Planck 2018 data, the spectral index is measured to be \(n_{s}=0.9649\pm 0.0042\) [9]. At the \(2\sigma\) Confidence Level, the minimum value of \(n_{s}\) that leads to the maximum reheating temperature is \(n_{s}=0.9565\). However, for this value of \(n_{s}\), the reheating temperature is found to be abnormally small: \[T_{reh}\sim 10^{-2}e^{-234}M_{pl}. \tag{47}\] This demonstrates that the Peebles-Vilenkin model is not viable, as it predicts an unreasonably low reheating temperature for the observed spectral index. Equivalently, the non-feasibility of the Peebles-Vilenkin model can also be seen by calculating the number of e-folds using Eq. (43). At the \(2\sigma\) confidence level, this leads to the bound \(136\leq N\leq 223\), which is in contradiction with the number of e-folds calculated from Eq. (39). Using Eq. (39) and requiring a reheating temperature above \(1\) MeV and below \(10^{9}\) GeV, the number of e-folds is constrained to satisfy \(63\leq N\leq 74\). Therefore, the Peebles-Vilenkin model is not viable because it predicts a number of e-folds that is outside of the observational constraints. 
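These two viability checks can be reproduced in a few lines (a sketch in reduced Planck units, using Eq. (43) for the e-folds and Eq. (46) for the temperature):

```python
import numpy as np

def N_pv(ns):                      # number of e-folds, Eq. (43)
    return 6.0 / (1.0 - ns) - 2.0

def T_reh_pv(ns):                  # reheating temperature, Eq. (46), units of Mpl
    return (1.0 - ns)**1.5 * np.exp(182.0 - 18.0 / (1.0 - ns))

for ns in (0.9565, 0.9733):        # 2-sigma bounds of Planck 2018
    print(f"ns = {ns}: N = {N_pv(ns):.0f}, T_reh = {T_reh_pv(ns):.1e} Mpl")
# N spans ~136-223, far outside the window 63 <= N <= 74, and T_reh is
# absurdly small even at the most favourable value of ns.
```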
A final remark is in order: The latest observational data constrain the tensor-to-scalar ratio \(r\) to be less than \(0.1\). For the Peebles-Vilenkin model, one has \(r=\frac{16}{3}(1-n_{s})\), and taking into account that \(n_{s}=0.9649\pm 0.0042\) at \(2\sigma\) C.L., one has the constraint \(0.1424\leq r\leq 0.232\), which is incompatible with the observational bound \(r\leq 0.1\). This provides another way to show that this model is not viable. The difference with our methodology is that we do not need a precise bound on the tensor-to-scalar ratio to disregard this model. ### Exponential \(\alpha\)-attractor We consider a Quintessential Inflation \(\alpha\)-attractor model, whose potential is given by [10] \[V(\varphi)=\lambda M_{pl}^{4}e^{-n\tanh\left(\frac{\varphi}{\sqrt{6\alpha}M_{pl}}\right)}, \tag{48}\] where \(\lambda\), \(\alpha\) and \(n\) are some dimensionless parameters. The value of the slow roll parameter \(\epsilon\) is \[\epsilon=\frac{n^{2}}{12\alpha}\frac{1}{\cosh^{4}\left(\frac{\varphi}{\sqrt{6\alpha}M_{pl}}\right)}, \tag{49}\] and the other slow-roll parameter is given by \[\eta=\frac{n}{3\alpha}\left[\frac{\tanh\left(\frac{\varphi}{\sqrt{6\alpha}M_{pl}}\right)}{\cosh^{2}\left(\frac{\varphi}{\sqrt{6\alpha}M_{pl}}\right)}+\frac{n/2}{\cosh^{4}\left(\frac{\varphi}{\sqrt{6\alpha}M_{pl}}\right)}\right]. \tag{50}\] Both slow-roll parameters must be evaluated at the horizon crossing, which occurs for large values of \(\cosh\left(\frac{\varphi}{\sqrt{6\alpha}M_{pl}}\right)\), obtaining \[\epsilon_{*}=\frac{n^{2}}{12\alpha}\frac{1}{\cosh^{4}\left(\frac{\varphi_{*}}{\sqrt{6\alpha}M_{pl}}\right)}\qquad\text{and}\qquad\eta_{*}\cong-\frac{n}{3\alpha}\frac{1}{\cosh^{2}\left(\frac{\varphi_{*}}{\sqrt{6\alpha}M_{pl}}\right)}, \tag{51}\] with \(\varphi_{*}<0\). Therefore, the number of e-folds is \[N\cong\frac{6\alpha}{n}\cosh^{2}\left(\frac{\varphi_{*}}{\sqrt{6\alpha}M_{pl}}\right)\cong\sqrt{\frac{3\alpha}{4\epsilon_{*}}}, \tag{52}\] which is related to the spectral index of the scalar perturbations via the relation \[n_{s}-1\cong-6\epsilon_{*}+2\eta_{*}\cong 2\eta_{*}=-\frac{4\sqrt{\epsilon_{*}}}{\sqrt{3\alpha}}\cong-\frac{2}{N}, \tag{53}\] obtaining \[N(n_{s})\cong\frac{2}{1-n_{s}}\qquad\text{and}\qquad\epsilon_{*}(n_{s})\cong\frac{3\alpha}{16}(1-n_{s})^{2}. \tag{54}\] From Eq. (54), we can also calculate the relationship between the parameters of the model. Specifically, since \(V_{*}\cong\lambda M_{pl}^{4}e^{n}\), we have \(H_{*}^{2}\cong\frac{\lambda M_{pl}^{2}}{3}e^{n}\). Thus, using the formula for the power spectrum of scalar perturbations, we obtain the constraint: \[\frac{\lambda}{\alpha\pi^{2}(1-n_{s})^{2}}e^{n}\cong 9\times 10^{-9}. \tag{55}\] We also need to calculate \(H_{END}\), which can be done by noting that \(\epsilon_{END}=1\) and using \(\text{arccosh}(x)=\ln(x+\sqrt{x^{2}-1})\), choosing the negative branch \(\ln(x-\sqrt{x^{2}-1})\) since \(\varphi_{END}<0\), to obtain \[\varphi_{END}=\sqrt{6\alpha}\ln\left(\frac{\sqrt{n}}{(12\alpha)^{1/4}}-\sqrt{\frac{n}{\sqrt{12\alpha}}-1}\right)M_{pl}. \tag{56}\] Inserting it in (48) and using the constraint (55), one has \[V(\varphi_{END})=\lambda M_{pl}^{4}e^{n\sqrt{1-\frac{\sqrt{12\alpha}}{n}}}\cong\lambda M_{pl}^{4}e^{n\left(1-\frac{\sqrt{3\alpha}}{n}\right)}\cong 9\pi^{2}\alpha(1-n_{s})^{2}e^{-\sqrt{3\alpha}}10^{-9}M_{pl}^{4}, \tag{57}\] and thus, \[\rho_{END}=\frac{3V(\varphi_{END})}{2}\Longrightarrow H_{END}\cong 3\sqrt{\frac{\alpha}{2}}(1-n_{s})e^{-\sqrt{3\alpha}/2}10^{-4}M_{pl}. 
\tag{58}\] Therefore, by equating both expressions for the number of e-folds, we obtain an expression for the reheating temperature as a function of the spectral index \[T_{reh}\cong\alpha(1-n_{s})^{2}\exp\left(169+\frac{\sqrt{3\alpha}}{2}-\frac{6}{1-n_{s}}\right)M_{pl}. \tag{59}\] To be more precise, we will choose \(\alpha=10^{-2}\), obtaining \[T_{reh}\cong(1-n_{s})^{2}\exp\left(169+\frac{\sqrt{3}}{20}-\frac{6}{1-n_{s}}\right)10^{-2}M_{pl}. \tag{60}\] We can check that the allowed values of the spectral index, which lead to a reheating temperature compatible with the one obtained in Eq. (33) using Instant Preheating, are in the range of \((0.9667,0.9677)\). Thus, the value of the spectral index is approximately \(n_{s}\cong 0.967\). Additionally, the ratio of tensor to scalar perturbations is given by \(r=16\epsilon_{*}=3\alpha(1-n_{s})^{2}\), and we can conclude that its value is \(r\cong 3\times 10^{-5}\). #### 4.2.1 Comparison with other works In [5], the authors study an exponential \(\alpha\)-attractor with a cosmological constant given by \[V(\varphi)=\lambda M_{pl}^{4}\left(e^{-n\tanh\left(\frac{\varphi}{\sqrt{\alpha}M_{pl}}\right)}-e^{-n}\right), \tag{61}\] which at early times coincides with our potential (48), but at late times, it will become \[V(\varphi)=2n\lambda e^{-n}M_{pl}^{4}e^{-\sqrt{\frac{2}{\alpha}}\varphi/M_{pl}}. \tag{62}\] This constrains the value of the parameter \(\alpha\) to match with the current Planck data of the effective equation of state (EoS) parameter for dark energy. As shown in the Appendix of [5], the parameter \(\alpha\) must satisfy \(\alpha\geq 3/2\) (although it has been shown in [11] that the correct bound to match the observational data is \(0.5\leq\alpha\leq 3.3\)). Fortunately, this is not a problem for our model, as demonstrated in [10] and in [11], where the authors found that \(\alpha\) only has to satisfy the upper bound \(\alpha<3.5\). For the value chosen in this work, \(\alpha=10^{-2}\), the present value of the effective EoS parameter is approximately \(-0.68\), which is compatible with the Planck data. Additionally, from the observational data \(\Omega_{\varphi,0}=\frac{V(\varphi_{0})}{3H_{0}^{2}M_{pl}^{2}}\cong 0.7\) (where the subscript "0" denotes present time), [5] obtains a relationship between the parameters \(\alpha\), \(n\), and \(\tilde{g}\). This is because the present value of the scalar field depends on the reheating temperature, which in turn depends on \(\tilde{g}\). Looking at equation (62), we can see that \(V(\varphi_{0})\) depends on all three parameters. However, this is not the case for our potential (48). At the present time, \(V(\varphi_{0})\sim\lambda M_{pl}^{4}e^{-n}\). Thus, \(\Omega_{\varphi,0}\cong 0.7\) leads to \(\lambda e^{-n}\cong 10^{-120}\). Combining this with (55), we obtain the relationship \[e^{2n}\cong 9\alpha\pi^{2}(1-n_{s})^{2}\times 10^{111}, \tag{63}\] which is independent of \(\tilde{g}\). In fact, since we have obtained \(n_{s}\cong 0.967\) for \(\alpha=10^{-2}\), we find \(n\cong 124\) and \(\lambda\cong 10^{-66}\). On the other hand, in contrast to our realistic assumption that the particles produced during kination decay when they are non-relativistic, in [5], it is assumed that the decay of these particles occurs immediately after their creation. We find it difficult to justify this assumption because, at the onset of kination, the effective mass of the produced particles vanishes. 
Additionally, it is assumed that the produced particles decay into light fermions with a decay rate given by \(\Gamma=\frac{h^{2}m_{eff}}{8\pi}\) (as argued in [3], where the authors suggest that the decay should occur when the particles are non-relativistic). However, the decay that occurs when \(H\sim\Gamma\) cannot happen at the beginning of kination because, at that time, \(\Gamma\cong 0\) and \(H_{kin}\sim 10^{-7}M_{pl}\). Nonetheless, it is possible that another kind of decay may occur. Assuming that the decay occurs immediately after the beginning of kination, the reheating temperature can be calculated using the simple formula: \[T_{reh}=\left(\frac{270}{\pi^{11}g_{reh}}\right)^{1/4}\tilde{g}^{3/2}\sqrt{H_{kin}M_{pl}}\cong 2\times 10^{-5}\tilde{g}^{3/2}M_{pl}, \tag{64}\] where \(\langle\hat{\rho}_{kin}\rangle\) is given by: \[\langle\hat{\rho}_{kin}\rangle=\frac{1}{2\pi^{2}a_{kin}^{4}}\int_{0}^{\infty}k^{3}|\beta_{k}|^{2}dk=\frac{\tilde{g}^{2}\dot{\varphi}_{kin}^{2}}{8\pi^{3}}, \tag{65}\] with \(\dot{\varphi}_{kin}^{2}=6H_{kin}^{2}M_{pl}^{2}\), and we have taken \(H_{kin}\cong 10^{-7}M_{pl}\). It is worth noting that the formula (64) depends solely on \(\tilde{g}\), in contrast to the formula (32), which depends on both \(\tilde{g}\) and the decay rate \(\Gamma\). Additionally, the bounds arising from the gravitino constraint (\(T_{reh}\leq 10^{9}\) GeV) and the lower bound \(T_{reh}\geq 1\) MeV constrain the parameter \(\tilde{g}\) in different ways. Effectively, when the decay is at the onset of kination one has \(10^{-4}\leq\tilde{g}\leq 10^{-2}\), but if it occurs when the particles are non-relativistic, as we have already shown, this parameter has to satisfy \(10^{-6}\leq\tilde{g}\leq 3\times 10^{-5}\). Therefore, in accordance with [5], one must select, in the corresponding allowed range, values for the parameters \(\alpha\) and \(n\) that result in a value of \(\tilde{g}\) (since these three parameters are related for the model (61)) that is consistent with the constraints arising from the gravitino constraint and the lower bound of the reheating temperature. Once this value of \(\tilde{g}\) is determined, the reheating temperature can be calculated using formula (64), and the spectral index can be calculated using the relationship between the reheating temperature and the spectral index. In summary, for the set of allowed parameters \(\alpha\) and \(n\), it is possible to calculate the reheating temperature and the spectral index by choosing appropriate values for the coupling constant \(\tilde{g}\). This is quite different from our method. In our approach, we use instant preheating with the realistic assumption that the created particles can only decay when they become non-relativistic, which results in a range of reheating temperatures for the potential (48). Therefore, after fixing the model and setting \(\alpha=10^{-2}\) to satisfy the gravitino constraint and ensure that the vacuum fluctuations do not disturb the evolution of the scalar field during the last stages of inflation, the value of \(\tilde{g}\) has to belong to a very narrow range. Once we have fixed the value of \(\tilde{g}\), we use formula (32) to obtain the range of viable values of the reheating temperature. Finally, by using the relationship between the reheating temperature and the spectral index, we can determine the range of viable values of the spectral index for the fixed model. 
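This last step can be made concrete with a short numerical sketch: solving Eq. (60) for the spectral-index values whose reheating temperature hits the bounds of Eq. (33) (the root brackets below are illustrative):

```python
import numpy as np
from scipy.optimize import brentq

T_max, T_min = 1e-10, 7e-13        # Instant Preheating window of Eq. (33), in Mpl

def T_reh_alpha(ns, alpha=1e-2):   # Eq. (59) with alpha = 1e-2, cf. Eq. (60)
    return alpha * (1 - ns)**2 * np.exp(169.0 + np.sqrt(3 * alpha) / 2 - 6.0 / (1 - ns))

# T_reh decreases with ns, so the window maps to an interval of spectral indices
ns_lo = brentq(lambda ns: T_reh_alpha(ns) - T_max, 0.95, 0.98)
ns_hi = brentq(lambda ns: T_reh_alpha(ns) - T_min, 0.95, 0.98)
print(ns_lo, ns_hi)                # roughly the interval (0.9667, 0.9677)
```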
Another paper that deals with \(\alpha\)-attractors is [11], where the authors study several potentials and use observational data to constrain the parameters of each model. The work does not deal with any preferred reheating mechanism, but it compares instant preheating with reheating via gravitational particle production of light particles, showing that instant preheating is more efficient because the gravitational production of light particles leads to a low reheating temperature of the order of \(10^{5}\) GeV (see, for instance, [4]). Furthermore, it is pointed out that due to the kination phase, the number of last e-folds is greater in Quintessential Inflation than in standard inflation. As a consequence, the value of the spectral index is greater in Quintessential Inflation than in standard models. Thus, future improvements in the accuracy of the measurement of the spectral index may distinguish between conventional inflationary models with a cosmological constant and Quintessential Inflation scenarios. Finally, in [11], the parameters of the models (48) and (61) are compared with observational data, taking into account that for lower reheating temperatures the inflaton field freezes later during the radiation era than for higher reheating temperatures. This means that when reheating is via gravitational production of light particles, the inflaton field freezes later than in the case when the reheating mechanism is Instant Preheating. Since the models have completely different tails, this leads to different equations of state (EoS) parameters at late times, depending on the freeze value of the inflaton field, and thus on the value of the reheating temperature. This constrains the values of \(\alpha\). Specifically, for the model described by equation (48), the value of \(\alpha\) is bounded by \(\alpha<3.5\), and therefore our choice of \(\alpha=10^{-2}\) is entirely acceptable. In contrast, for the model described by equation (61), the observational data only allow values within the range of \(0.5\leq\alpha\leq 3.3\). ### The double exponential model Next, we consider a combination of two exponential potentials to depict inflation and quintessence, respectively: \[V(\varphi)=V_{0}e^{-\bar{\gamma}\varphi^{n}/M_{pl}^{n}}+M^{4}e^{-\gamma\varphi/M_{pl}}, \tag{66}\] where we must choose \(0<\gamma<\sqrt{2}\) to model the current cosmic acceleration. This is because, at late times, the effective equation of state parameter is \(w_{eff}=\frac{\gamma^{2}}{3}-1<-1/3\). The first part of the potential is a phenomenological term responsible for inflation, which has been studied in detail in [6; 12], where it is obtained that \[\epsilon=\frac{\bar{\gamma}^{2}n^{2}}{2}\left(\frac{\varphi}{M_{pl}}\right)^{2n-2}. \tag{67}\] So, at the end of inflation one has \(\varphi_{END}=\left(\frac{2}{n^{2}\bar{\gamma}^{2}}\right)^{\frac{1}{2n-2}}M_{pl}\), and thus, \[\rho_{\varphi,END}=\frac{3}{2}V(\varphi_{END})\cong 9\pi^{2}e^{-\bar{\gamma}\left(\frac{2}{n^{2}\bar{\gamma}^{2}}\right)^{\frac{n}{2n-2}}}\times 10^{-11}M_{pl}^{4}\Longrightarrow H_{END}\cong\sqrt{\frac{3}{10}}e^{-\frac{\bar{\gamma}}{2}\left(\frac{2}{n^{2}\bar{\gamma}^{2}}\right)^{\frac{n}{2n-2}}}\times 10^{-5}M_{pl}, \tag{68}\] which will constrain the values of the parameter \(\bar{\gamma}\) significantly, because in all viable inflationary models, the value of the Hubble rate at the end of inflation is of the order of \(10^{-6}M_{pl}\). 
In fact, when (68) is of the order of \(10^{-6}M_{pl}\), we get: \[\bar{\gamma}n=\sqrt{2}\left(\frac{\sqrt{2}}{3n}\right)^{n-1}. \tag{69}\] Next, we calculate the other slow-roll parameter \[\eta=-\frac{2(n-1)}{3n}\epsilon^{\frac{n-2}{2n-2}}+2\epsilon, \tag{70}\] leading to the following spectral index \[1-n_{s}=2\epsilon_{*}+\frac{4(n-1)}{3n}\epsilon_{*}^{\frac{n-2}{2n-2}}. \tag{71}\] On the other hand, for \(n>2\), the number of e-folds is given by \[N=\frac{1}{n\bar{\gamma}(n-2)}\left[\left(\frac{\varphi_{*}}{M_{pl}}\right)^{2-n}-\left(\frac{\varphi_{END}}{M_{pl}}\right)^{2-n}\right]=\frac{3n}{2(n-2)}\left[\epsilon_{*}^{\frac{2-n}{2n-2}}-1\right]\cong\frac{3n}{2(n-2)}\epsilon_{*}^{\frac{2-n}{2n-2}}\cong\frac{2(n-1)}{n-2}\frac{1}{1-n_{s}}, \tag{72}\] where we have used the approximation \(1-n_{s}\cong\frac{4(n-1)}{3n}\epsilon_{*}^{\frac{n-2}{2n-2}}\). Therefore, from the equations (39), (71), and (72), we obtain the reheating temperature as a function of the spectral index: \[T_{reh}=\left(\frac{3n}{4n-4}\right)^{3/2}(1-n_{s})^{\frac{3n-3}{n-2}}\exp\left(177.21-\frac{6n-6}{(n-2)(1-n_{s})}\right)M_{pl}. \tag{73}\] Note that the maximum reheating temperature is obtained from the minimum observable value of the spectral index, which at \(2\sigma\) C.L. is \(n_{s}=0.9565\). Consequently, 1. For \(n=3\), the maximum reheating temperature is of the order of \(10^{-51}M_{pl}\). 2. For \(n=4\), the maximum reheating temperature is of the order of \(9\times 10^{-20}M_{pl}\). 3. For \(n=5\), the maximum reheating temperature is of the order of \(4\times 10^{-9}M_{pl}\). 4. For \(n\gg 1\), the maximum reheating temperature is of the order of \(6\times 10^{12}M_{pl}\). In the same way, the maximum observable value of the spectral index, which at \(2\sigma\) C.L. is \(n_{s}=0.9733\), leads to the minimum reheating temperature. In the double exponential model, it is below \(10^{9}\) GeV when \(n>2\). Thus, since a viable model has to satisfy that the maximum temperature is above 1 MeV and the minimum one below \(10^{9}\) GeV, we can conclude that the viable double exponential models, those that have a range of values of the spectral index leading to a reheating temperature between 1 MeV and \(10^{9}\) GeV, are the ones satisfying \(n>3\). To end the Section, when \(n\gg 1\), the reheating temperature is approximately \[T_{reh}\cong\left(\frac{3}{4}\right)^{3/2}(1-n_{s})^{3}\exp\left(177.21-\frac{6}{(1-n_{s})}\right)M_{pl}, \tag{74}\] and the viable values of the spectral index compatible with the reheating temperature via Instant Preheating (33), are in the range \(0.9683\leq n_{s}\leq 0.9688\), i.e., \(n_{s}\cong 0.9685\) and \(r=9(1-n_{s})^{2}\cong 9\times 10^{-3}\). ## V Reheating via gravitational particle production When reheating is produced via gravitational particle production of heavy particles whose decay is before the end of kination, the reheating temperature is given by [8]: \[T_{reh}=\left(\frac{10}{3\pi^{2}g_{reh}}\right)^{1/4}\left(\frac{\langle\hat{\rho}_{kin}\rangle^{3}}{H_{kin}^{3}\Gamma M_{pl}^{8}}\right)^{1/4}M_{pl}, \tag{75}\] where the decay rate \(\Gamma\) has to be within the following range: \[\frac{\langle\hat{\rho}_{kin}\rangle}{3H_{kin}M_{pl}^{2}}\leq\Gamma\leq H_{kin}. \tag{76}\] The maximum reheating temperature is reached at the end of kination, i.e., when \(\frac{\langle\hat{\rho}_{kin}\rangle}{3H_{kin}M_{pl}^{2}}=\Gamma\). 
Therefore, \[T_{reh}^{max}=\left(\frac{10}{\pi^{2}g_{reh}}\right)^{1/4}\sqrt{\frac{\langle\hat{\rho}_{kin}\rangle}{H_{kin}M_{pl}^{3}}}M_{pl}. \tag{77}\] According to [8], it has been demonstrated that the energy density of conformally coupled particles created at the onset of kination can be approximated by the analytical formula \[\langle\hat{\rho}_{kin}\rangle\cong\frac{1}{4\pi^{3}}e^{-\frac{\pi m_{\chi}}{2\sqrt{2}H_{END}}}\sqrt{\frac{m_{\chi}}{\sqrt{2}H_{END}}}H_{END}^{2}m_{\chi}^{2}, \tag{78}\] where \(m_{\chi}\) is the mass of the produced particles. Inserting this expression in (77) we get \[T_{reh}^{max}(m_{\chi})\cong 2\times 10^{-2}e^{-\frac{\pi m_{\chi}}{4\sqrt{2}H_{END}}}\left(\frac{m_{\chi}H_{END}}{M_{pl}^{2}}\right)^{1/4}m_{\chi}. \tag{79}\] By applying the previous result to the exponential \(\alpha\)-attractor model with \(\alpha=10^{-2}\), we can insert equation (79) into equation (60) to establish a relationship between the spectral index and the mass of the produced particles \[\frac{2.6057}{1-n_{s}}-1.75\log(1-n_{s})=79.301+\frac{1.2398}{1-n_{s}}X-1.25\log X, \tag{80}\] where we have introduced the notation \(X\equiv 10^{4}\frac{m_{\chi}}{M_{pl}}\). It is important to note that the equation (80) has a solution for a minimum value of \(n_{s}\), which is obtained at the minimum of the function \(f(X)=\frac{1.2398}{1-n_{s}}X-1.25\log X\). By inserting \(X_{min}=\frac{1.25(1-n_{s})}{1.2398}\) into (80), we obtain: \[\frac{2.6057}{1-n_{s}}-0.5\log(1-n_{s})=80.5465. \tag{81}\] The only solution to this equation is \(\bar{n}_{s}\cong 0.9673\) because the function \[\frac{2.6057}{1-n_{s}}-0.5\log(1-n_{s}), \tag{82}\] is increasing. This implies that Eq. (81) has only one solution. For this minimum value of the spectral index, the equation (80) also has a unique solution: \(m_{\chi}\cong 10^{-6}M_{pl}\), which leads to a maximum reheating temperature of around \(10^{7}\) GeV. For values of the spectral index in the range \((0.9673,0.9709)\) (where \(n_{s}=0.9709\) is the maximum value leading to a reheating temperature above 1 MeV), equation (80) has two solutions. For example, when \(n_{s}=0.9709\), there are two compatible masses: \(m_{\chi}\cong 3\times 10^{-5}M_{pl}\) and \(m_{\chi}\cong 5\times 10^{-15}M_{pl}\), with a reheating temperature of 1 MeV. In other words, assuming that the decay occurs at the end of kination (which leads to the maximum reheating temperature), the allowed values of the spectral index are in the range \((0.9673,0.9709)\), and for each of these values, there are two compatible masses. Alternatively, for masses satisfying the inequality \(5\times 10^{-15}\leq m_{\chi}/M_{pl}\leq 3\times 10^{-5}\), there is a value of the spectral index in the range \(0.9673<n_{s}<0.9709\), which leads to a viable maximum reheating temperature. ## VI Conclusions Throughout this work, we have emphasized the close relationship between the reheating temperature and the spectral index of scalar perturbations. We have explored this connection in the context of non-oscillating cosmologies, where the reheating mechanism is the well-known Instant Preheating. Our analysis has shown that the viable range of reheating temperatures falls between \(10^{5}\) GeV and \(10^{8}\) GeV, resulting in a narrow range of viable values for the spectral index. Specifically, for an exponential \(\alpha\)-attractor with \(\alpha=10^{-2}\), we find that the spectral index is close to \(n_{s}\cong 0.9670\), while for a double exponential model, we obtain \(n_{s}\cong 0.9685\). 
We have also compared the methodology used in this work with that of [5]. We pointed out that the main difference between the two approaches is that, in our work, we make the realistic assumption (following the spirit of [3]) that the created particles must decay when their effective mass is large enough for them to be considered non-relativistic. This assumption does not hold in [5], where the authors assume that the particles decay immediately after their creation, resulting in a vanishing effective mass. This leads to differences in the results obtained and in the parameter constraints. Finally, we have also considered an alternative to Instant Preheating: reheating via gravitational particle production. In this case, we have established a connection between the spectral index and the masses of the produced particles. Our analysis shows that the production of heavy particles with masses less than \(10^{-5}M_{pl}\) can lead to viable reheating temperatures, and values of the spectral index that fall within the observational domain provided by the Planck team, at \(2\sigma\) C.L. ###### Acknowledgements. This work is supported by the Spanish grant PID2021-123903NB-I00 funded by MCIN/AEI/10.13039/501100011033 and by "ERDF A way of making Europe". ## Appendix: The diagonalization method Expanding a quantum field conformally coupled with gravity in terms of the creation and annihilation operators \[\hat{\phi}(\eta,\mathbf{x})=\frac{1}{(2\pi)^{3/2}a(\eta)}\int_{\mathbb{R}^{3}}(\hat{a}_{\mathbf{k}}\chi_{k}(\eta)e^{i\mathbf{kx}}+\hat{a}_{\mathbf{k}}^{\dagger}\bar{\chi}_{k}(\eta)e^{-i\mathbf{kx}})d^{3}\mathbf{k}, \tag{83}\] where \(\chi_{k}\) and its conjugate \(\bar{\chi}_{k}\) are the mode solution of the Klein-Gordon equation \[\chi_{k}^{\prime\prime}+\omega_{k}^{2}(\eta)\chi_{k}=0, \tag{84}\] with initial conditions, at some early time \(\eta_{i}\), \[\chi_{k}(\eta_{i})=\frac{1}{\sqrt{2\omega_{k}(\eta_{i})}}\qquad\mbox{and}\qquad\chi_{k}^{\prime}(\eta_{i})=-i\sqrt{\frac{\omega_{k}(\eta_{i})}{2}}. \tag{85}\] The renormalized vacuum energy density is given by \[\langle\hat{\rho}(\eta)\rangle=\frac{1}{4\pi^{2}a^{4}(\eta)}\int_{0}^{\infty}k^{2}dk\left(|\chi_{k}^{\prime}(\eta)|^{2}+\omega_{k}^{2}(\eta)|\chi_{k}(\eta)|^{2}-\omega_{k}(\eta)\right), \tag{86}\] where we have subtracted the zero point oscillations of the vacuum. To express the vacuum energy density in a simple form, we can use the diagonalization method, which involves expanding the modes as follows [13] (see also Section 9.2 of [14]): \[\chi_{k}(\eta)=\alpha_{k}(\eta)\phi_{k,+}(\eta)+\beta_{k}(\eta)\phi_{k,-}(\eta), \tag{87}\] where \(\alpha_{k}(\eta)\) and \(\beta_{k}(\eta)\) are the time-dependent Bogoliubov coefficients. Here, we have introduced the positive (\(+\)) and negative (\(-\)) frequency modes \[\phi_{k,\pm}(\eta)=\frac{e^{\mp i\int_{\eta_{i}}^{\eta}\omega_{k}(\tau)d\tau}}{\sqrt{2\omega_{k}(\eta)}}. \tag{88}\] Now, imposing that the modes satisfy the condition \[\chi_{k}^{\prime}(\eta)=-i\omega_{k}(\eta)\left(\alpha_{k}(\eta)\phi_{k,+}(\eta)-\beta_{k}(\eta)\phi_{k,-}(\eta)\right), \tag{89}\] one can show that the Bogoliubov coefficients must satisfy the system \[\left\{\begin{array}{rcl}\alpha_{k}^{\prime}(\eta)&=&\omega_{k}^{\prime}(\eta)\phi_{k,-}^{2}(\eta)\beta_{k}(\eta)\\ \beta_{k}^{\prime}(\eta)&=&\omega_{k}^{\prime}(\eta)\phi_{k,+}^{2}(\eta)\alpha_{k}(\eta),\end{array}\right. \tag{90}\] in order for the expression (87) to be a solution of the equation (84). 
Finally, inserting (87) into the expression for the vacuum energy (86), and taking into account that the Bogoliubov coefficients satisfy the equation \(|\alpha_{k}(\eta)|^{2}-|\beta_{k}(\eta)|^{2}=1\), one finds the following diagonalized form of the energy density [13]: \[\langle\hat{\rho}(\eta)\rangle=\frac{1}{2\pi^{2}a^{4}(\eta)}\int_{0}^{\infty}k^{2}\omega_{k}(\eta)|\beta_{k}(\eta)|^{2}dk, \tag{91}\] where it is important to notice that \(|\beta_{k}(\eta)|^{2}\) encodes the vacuum polarization effects and also the production of real particles, which are only produced when the adiabatic evolution breaks. In non-oscillating models this happens during the phase transition from the end of inflation to the beginning of kination, and fortunately the polarization effects disappear shortly after the beginning of kination, when the value of \(|\beta_{k}(\eta)|\) stabilizes to a value which we will denote by \(|\beta_{k}|\). Thus, it only encodes the production of real particles. It is not difficult to show that the Bogoliubov coefficients stabilize, taking into account that during kination one has \(a(t)\propto t^{1/3}\) and \(H(t)\propto 1/t\), where \(t\) is the cosmic time. Effectively, taking into account that the modes that contribute to the particle production are those which satisfy \(m_{eff}(\eta)a(\eta)\gg k\), we will have \[\frac{\omega_{k}^{\prime}(\eta)}{\omega_{k}(\eta)}=\frac{a^{\prime}(\eta)a(\eta)m_{eff}^{2}(\eta)+a^{2}(\eta)m_{eff}^{\prime}(\eta)m_{eff}(\eta)}{k^{2}+a^{2}(\eta)m_{eff}^{2}(\eta)}\sim\frac{a^{\prime}(\eta)}{a(\eta)}=a(t)H(t)\propto t^{-2/3}. \tag{92}\] Therefore, we conclude that the derivative of the Bogoliubov coefficients goes to zero, meaning that they stabilize. In fact, when the rate of expansion of the universe slows down, the Bogoliubov coefficients always stabilize, because in that case, \(a(t)\propto t^{\frac{2}{3(1+w_{eff})}}\) where \(w_{eff}\) denotes the effective Equation of State parameter, and thus \[a(t)H(t)\propto t^{-\frac{1+3w_{eff}}{3(1+w_{eff})}}, \tag{93}\] which converges to zero when \(w_{eff}>-1/3\), i.e., for a decelerating expansion. Finally, we want to calculate \[\langle\hat{\phi}^{2}(\eta)\rangle=\frac{1}{4\pi^{2}a^{2}(\eta)}\int_{0}^{\infty}k^{2}\left(|\chi_{k}(\eta)|^{2}-\frac{1}{2\omega_{k}(\eta)}\right)dk, \tag{94}\] where, as we have done with the energy density, we have re-normalized it by subtracting the quantity \(\frac{1}{2\omega_{k}(\eta)}\). From the diagonalization method and using the system (90), we can see that \[|\chi_{k}(\eta)|^{2}=\frac{|\beta_{k}(\eta)|^{2}}{\omega_{k}(\eta)}+\frac{(|\beta_{k}(\eta)|^{2})^{\prime}}{\omega_{k}^{\prime}(\eta)}+\frac{1}{2\omega_{k}(\eta)}\cong\frac{|\beta_{k}|^{2}}{a(\eta)m_{eff}(\eta)}+\frac{1}{2\omega_{k}(\eta)}, \tag{95}\] because the Bogoliubov coefficients stabilize during kination. Then, inserting it in (94) we get \[\langle\hat{\phi}^{2}(t)\rangle\cong\frac{1}{4\pi^{2}a^{3}(t)m_{eff}(t)}\int_{0}^{\infty}k^{2}|\beta_{k}|^{2}dk=\frac{\langle N(t)\rangle}{2m_{eff}(t)}. \tag{96}\]
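As an illustration of this machinery (a toy numerical check, not part of the original derivation), the system (90) can be integrated for the parabolic frequency of Eq. (3); in units where \(a_{kin}=1\), \(m=0\) and \(\tilde{g}\varphi^{\prime}_{kin}=1\), so that \(\omega_{k}^{2}=k^{2}+\eta^{2}\), the late-time \(|\beta_{k}|^{2}\) should stabilize at the analytic value \(e^{-\pi k^{2}}\) of Eq. (4):

```python
import numpy as np
from scipy.integrate import solve_ivp

k = 0.8
w  = lambda e: np.sqrt(k**2 + e**2)            # omega_k(eta)
wp = lambda e: e / np.sqrt(k**2 + e**2)        # omega_k'(eta)

def rhs(e, y):
    """System (90), augmented with the accumulated phase Theta = int omega."""
    a, b, th = y
    f = wp(e) / (2.0 * w(e))
    return [f * np.exp(2j * th) * b, f * np.exp(-2j * th) * a, w(e)]

# start deep in the adiabatic regime with a pure positive-frequency mode
sol = solve_ivp(rhs, [-50.0, 50.0], [1.0 + 0j, 0.0j, 0.0j], rtol=1e-8, atol=1e-10)
a, b = sol.y[0, -1], sol.y[1, -1]
print(abs(b)**2, np.exp(-np.pi * k**2))        # stabilized |beta|^2 vs Eq. (4)
print(abs(a)**2 - abs(b)**2)                   # normalization |alpha|^2 - |beta|^2 = 1
```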
2306.13918
Multi-task multi-station earthquake monitoring: An all-in-one seismic Phase picking, Location, and Association Network (PLAN)
Earthquake monitoring is vital for understanding the physics of earthquakes and assessing seismic hazards. A standard monitoring workflow includes the interrelated and interdependent tasks of phase picking, association, and location. Although deep learning methods have been successfully applied to earthquake monitoring, they mostly address the tasks separately and ignore the geographic relationships among stations. Here, we propose a graph neural network that operates directly on multi-station seismic data and achieves simultaneous phase picking, association, and location. Particularly, the inter-station and inter-task physical relationships are informed in the network architecture to promote accuracy, interpretability, and physical consistency among cross-station and cross-task predictions. When applied to data from the Ridgecrest and Japan regions, this method showed superior performance over previous deep learning-based phase-picking and localization methods. Overall, our study provides for the first time a prototype self-consistent all-in-one system of simultaneous seismic phase picking, association, and location, which has the potential for next-generation autonomous earthquake monitoring.
Xu Si, Xinming Wu, Zefeng Li, Shenghou Wang, Jun Zhu
2023-06-24T09:46:18Z
http://arxiv.org/abs/2306.13918v1
Multi-task multi-station earthquake monitoring: An all-in-one seismic Phase picking, Location, and Association Network (PLAN) ###### Abstract Earthquake monitoring is vital for understanding the physics of earthquakes and assessing seismic hazards. A standard monitoring workflow includes the interrelated and interdependent tasks of phase picking, association, and location. Although deep learning methods have been successfully applied to earthquake monitoring, they mostly address the tasks separately and ignore the geographic relationships among stations. Here, we propose a graph neural network that operates directly on multi-station seismic data and achieves simultaneous phase picking, association, and location. Particularly, the inter-station and inter-task physical relationships are informed in the network architecture to promote accuracy, interpretability, and physical consistency among cross-station and cross-task predictions. When applied to data from the Ridgecrest and Japan regions, this method showed superior performance over previous deep learning-based phase-picking and localization methods. Overall, our study provides for the first time a prototype self-consistent all-in-one system of simultaneous seismic phase picking, association, and location, which has the potential for next-generation autonomous earthquake monitoring. ## Introduction Earthquake monitoring is one of the most fundamental operations in seismology. A standard earthquake monitoring workflow involves a series of steps to detect and characterize earthquakes, including phase picking, association, and event location [Beroza et al., 2021, Mousavi et al., 2020, Mousavi and Beroza, 2022, Zhu et al., 2022c]. Phase picking, a conceptually simple task which is akin to detection problems in computer vision, has recently been improved through deep learning [Zhu et al., 2022c, Ross et al., 2018b, a, Zhu and Beroza, 2018, Mousavi et al., 2019, Zhu et al., 2019, Pardo et al., 2019, Wang et al., 2019, Liu et al., 2020, Mousavi et al., 2020, Yang et al., 2021, Yano et al., 2021, Zhu et al., 2022a, Bilal et al., 2022, Feng et al., 2022, Munchmeyer et al., 2022], where convolutional neural networks (CNNs) [Krizhevsky et al., 2017] are typically used. After the phase picking, traditional [Arora et al., 2013, Gibbons et al., 2016, Zhang et al., 2019, Zhu et al., 2022b] and deep-learning-based [Ross et al., 2019, McBrearty et al., 2019a, Yu and Wang, 2022] phase association algorithms have been used to link seismic phases at multiple stations from the same events. Finally, location algorithms [Bakun and Wentworth, 1997] utilize the associated phases to obtain the earthquake hypocenters, although some deep-learning-based methods directly process raw data to locate earthquakes [Zhang et al., 2014, DeVries et al., 2018, Perol et al., 2018, Lomax et al., 2019, Mousavi and Beroza, 2019, Zhang et al., 2020, Munchmeyer et al., 2021]. These three tasks (phase picking, association, and location) are closely interdependent. The accuracy of multi-station phase picking affects the accuracy of association and location. Conversely, association and location impose constraints on multi-station phase picking. Additionally, phase picking with multi-station data can further utilize the geographic relationships and waveform similarities among multiple stations. 
To achieve more efficient and accurate earthquake monitoring, a suitable earthquake monitoring workflow should impose inter-task and inter-station constraints and preferably perform all three tasks simultaneously at all stations. However, most existing earthquake monitoring methods perform phase picking, association, and earthquake location separately. In addition, most of the current phase-picking methods process seismic data on a station-by-station basis. While some recent graph-based approaches [McBrearty et al., 2019b, van den Ende and Ampuero, 2020, McBrearty and Beroza, 2022b, 2023, Zhang et al., 2022] have demonstrated the ability to handle irregularly spaced stations for phase association and event location, it remains a challenging task to develop a method that effectively leverages inter-task and inter-station constraints, and ideally performs all three tasks simultaneously. Here, we propose an all-in-one earthquake monitoring system called seismic Phase picking, Location, and Association Network (PLAN) that achieves for the first time the simultaneous implementation of the three tasks with multi-station data and inter-task constraints. PLAN consists of four interdependent neural network modules. Specifically, the first module of waveform feature extraction utilizes an encoder-decoder architecture to extract relevant features from multi-station seismic data. The second module of earthquake location encodes station locations (i.e., longitude, latitude, and elevation) and merges them with waveform features from the first module to predict the earthquake depth and epicentral distance for each station. The third module of phase association utilizes the predicted earthquake location information to estimate the time shifts required to align multi-station waveform features. Finally, the fourth module of phase picking aggregates the aligned features for simultaneous multi-station phase picking. We applied PLAN in the Ridgecrest and Japan regions and compared its efficiency and accuracy with that of state-of-the-art phase-picking and event location methods, demonstrating the merits of inter-station and inter-task constraints for accurate earthquake monitoring. ## Results ### Network architecture The proposed multi-station multi-task PLAN (Fig. 1) employs a Graph neural network (GNN) [11] as the backbone to integrate the four functional modules of waveform feature extraction, earthquake location, multi-station association, and physics-informed multi-station phase picking (further details are provided in the Methods section). Compared with CNNs, GNNs are naturally suited for handling seismic data acquired from irregularly spaced stations whose quantity and location may vary (McBrearty and Beroza, 2023). In constructing our GNN, we define the graph nodes with seismic stations and define the feature vector for each node with the location and recorded seismic signal of the corresponding station. Figure 1: The flowchart of the proposed multi-task and multi-station PLAN for earthquake monitoring. The input data for the model comprise seismic waveforms recorded by multiple stations and the locations of these stations. PLAN consists of four sub-modules: waveform feature extraction encoder and decoder, an earthquake location module, a multi-station association module and a physics-informed multi-station phase-picking module. All of these sub-modules are optimized simultaneously and constrained by each other during training to improve performance in earthquake detection, association, and location. 
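A minimal sketch of this graph construction (illustrative shapes and random tensors, not the released PLAN implementation), using PyTorch; the attention-based aggregation layers applied on top of this graph are described next:

```python
import torch

# One node per station; node features = extracted waveform features
# concatenated with normalized station coordinates. Shapes are illustrative.
n_stations, wav_dim = 16, 125
wave_feats = torch.randn(n_stations, wav_dim)     # from the waveform encoder
coords = torch.randn(n_stations, 3)               # longitude, latitude, elevation
x = torch.cat([wave_feats, coords], dim=1)        # (16, 128) node feature matrix

# fully-connected directed edge list (self-loops excluded); how strongly each
# station pair is linked is left to the learned attention weights
idx = torch.arange(n_stations)
src, dst = torch.meshgrid(idx, idx, indexing="ij")
mask = src != dst
edge_index = torch.stack([src[mask], dst[mask]])  # shape (2, n*(n-1))
```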
All nodes (corresponding to the seismic stations) are linked together, and the linking weights are learned during training to infer the relationships among the stations. We construct the GNN layers with TransformGConvs (Shi et al., 2020), which are designed based on an attention mechanism (Vaswani et al., 2017) to learn dynamic linking weights among different stations (details about TransformGConvs are provided in the Methods section). In addition, the graph nodes are not fixed, so the GNN can adapt to variations in the station number and location. Raw three-component seismic signals are input into the graph nodes. The front-end waveform feature extraction module, constructed as an encoder-decoder CNN and shared among the nodes, extracts their corresponding key features. The station feature extraction block, constructed as two MLPs and shared among the nodes, extracts geographic features from the normalized input longitudes, latitudes, and elevations of the stations. The earthquake location module then concatenates the extracted waveform and geographic features and employs multiple TransformGConvs to aggregate these features from multiple nodes to predict the event depth and station-event offset. The predicted offsets and depth are further used to determine the event location by a triangulation algorithm (Yu and Segall, 1996). In the earthquake location module, we do not directly predict the event location but rather the station-event offsets and depth, because we use them as inputs to a subsequent multi-station association module to estimate the time shifts needed to align the P and S arrivals. This multi-station association module plays a key role in bridging the tasks of earthquake location and multi-station phase picking and introduces physical constraints between the two tasks. Prior to aggregating the waveform features from different stations for multi-station phase picking, the features corresponding to the same earthquake must first be aligned or associated; otherwise, aggregating unaligned features could cause mutual interference and ultimately degrade the picking results. The multi-station phase-picking module includes a non-trainable physical layer, implemented with the PyTorch (Paszke et al., 2019) roll function, to shift and align the waveform features (from the decoder of the waveform feature extraction module) using the time shifts. Subsequently, multiple TransformGConvs in the phase-picking module aggregate the aligned waveform features to enhance the phase-picking features in the aligned space. Eventually, another physical layer shifts the aggregated features back to the original space, followed by two convolutional layers to obtain the P/S-wave picks at all the stations. Three regression loss functions are defined for the three modules of phase picking, association, and earthquake location and are combined to jointly train the entire network. Because all the modules are interconnected within the entire network, the training process finds an optimal network that performs all the tasks both accurately and consistently. Moreover, after training, the multi-station association module can be detached from the network and utilized to calculate the S-P differential travel time with inputs of offsets and event depth. Further details on this module are provided in the section titled "Multi-station association module".

### Data preparation

We tested the proposed PLAN in two regions, Ridgecrest and Japan. For the Ridgecrest region (Fig. 2A),
seismic recordings from 16 California Integrated Seismic Network stations within an epicentral distance of < 80 km were collected from July 4, 2019, to October 4, 2019, for a total of more than 71,000 M > 0 earthquakes. The data for Japan (Fig. 3A) included M > 2 earthquakes that occurred between January 1, 2011, and December 31, 2011, including the Mw 9.1 Tohoku sequence. We collected the 3-component High Sensitivity Seismograph Network (NIED Hi-net) [Obara et al., 2005, Aoi et al., 2020] data from over 35,000 events. Subsequently, the data were randomly divided into training, validation, and test sets (85%, 5%, and 10%, respectively) in both regions. The number of stations corresponding to each event in the training samples varied, so the trained network can flexibly handle situations where the number of stations changes in actual data. Further, the distributions of the number of stations per event in the training and test sets were balanced. The results for the test sets in the two regions are presented in Figs. 5 and 6, respectively. To accommodate the different range scales in the two study regions, we used different window lengths (30.72 s for Ridgecrest and 61.44 s for Japan) with the same sampling frequency (100 Hz). To ensure a fair comparison with the existing phase-picking methods, we followed the same data preprocessing procedures used in previous studies [Zhu and Beroza, 2018, Mousavi et al., 2020]. This involved normalizing the data by removing the mean and dividing by the standard deviation, and using a Gaussian-shaped target function as training labels for the P/S-phase arrival times. The use of Gaussian-shaped targets is effective in phase picking [Zhu and Beroza, 2018, Mousavi et al., 2020]; in our study, the probabilities of the P-wave and S-wave arrival times were set to 1 at the first arrival time and decreased to 0 within a 20-sample window before and after each phase arrival.

### Application to the 2019 Ridgecrest sequence

We compared the performance of PLAN with that of other established deep learning methods for earthquake phase picking (PhaseNet [Zhu and Beroza, 2018] and EQTransformer [Mousavi et al., 2020]) and location (Aggregated-GNN [van den Ende and Ampuero, 2020]). All of the methods were retrained on the same training set and evaluated on a common test set. As shown in the Ridgecrest application in Fig. 2B-2C, the performance of PLAN in phase picking was superior to that of the other two deep-learning-based methods. Specifically, the residual distribution of the P-wave picks for PLAN was more concentrated than that of the other methods, indicating a higher overall accuracy. For S-wave picks, PLAN performed significantly better than EQTransformer, as the distribution of PLAN was narrower, whereas the difference in performance between PLAN and PhaseNet was relatively minor. In terms of localization, our method (PLAN) outperformed the other approaches (Fig. 2D-2E; Table 1). The distribution of PLAN was notably more concentrated than that of Aggregated-GNN, particularly in terms of offset prediction. To further demonstrate the effectiveness of TransformGConv, we replaced the TransformGConv layers (Fig. S1) with GCN (Kipf and Welling, 2016), SAGE (Hamilton et al., 2017), and GATv2 (Brody et al., 2021), respectively. Among the various methods compared, our method yielded the best results in terms of offset, with an average error of 1.09 km and a standard deviation of 1.41 km.
Furthermore, PLAN also outperformed Aggregated-GNN in terms of depth localization, regardless of whether it was based on GCN, GATv2, or TransformGConv. These results demonstrate the superiority of the proposed PLAN in location estimation.

Figure 2: Distributions of phase picking and location residuals in the Ridgecrest region. (**A**) Distribution of 16 stations (black triangles) and training event locations (red circles) used in our study, where the events occurred between July 4, 2019 and October 4, 2019. (**B** and **C**) The results of P-wave and S-wave arrival time residuals, respectively. The blue, green, and orange lines in (B and C) represent the arrival time residuals for PLAN, PhaseNet, and EQTransformer, respectively. The proposed method yields the most accurate results in P/S-wave picking. (**D** and **E**) The offset and depth residuals between model predictions and the Southern California Seismic Network (SCSN) catalog of the located events. Regardless of the offset or depth, the residual distribution of PLAN (blue line) is more concentrated at zero than that of Aggregated-GNN (orange line).

Furthermore, we used three metrics, mPrecision, mRecall, and mF1 (described in the Methods section), to quantitatively evaluate the performance of the five methods (Table 2). In five of the six metric scores for the P-wave and S-wave picking results, our attention-mechanism-based GNN method outperformed the other methods. The only exception was the mPrecision metric of P-wave picking, where EQTransformer showed slightly higher scores than PLAN. Notably, even a simplified version of the multi-station phase-picking method, such as the SAGE-based PLAN, outperformed both the single-station-based picking methods of EQTransformer and PhaseNet in mF1 scores for S-wave picking. This indicates that the phase-picking accuracy is significantly improved by multi-station picking, which effectively utilizes inter-station contextual information. In addition to adjusting the time threshold while maintaining a constant picking probability, we also fixed the time threshold and varied the picking probability to calculate and plot precision-recall curves for four models (Fig. S2). Consistent with the aforementioned results, the PLAN model based on TransformGConv demonstrated superior performance in terms of F1 for both P- and S-waves.

### Application to seismicity in Japan

We retrained all the methods on the Japan training set for the evaluation. Compared to its performance in the Ridgecrest region, PLAN exhibited an even better performance in Japan (Fig. 3B-3C). Further, PLAN demonstrated a remarkably better performance than PhaseNet and EQTransformer for both P- and S-wave picks. The offset predicted by PLAN was notably more accurate than that predicted by Aggregated-GNN, with a higher and narrower residual distribution (Fig. 3D-3E). In terms of depth estimation, although PLAN maintained a narrower residual distribution, the peak of the distribution was systematically shifted compared with that of the Aggregated-GNN method.
Table 1 presents a comprehensive quantitative comparison of the location results.

Table 1: Location performance in the Ridgecrest and Japan regions. Bold values represent the best performance.

| Region | Method | Offset MAE mean (km) | Offset MAE std (km) | Depth MAE mean (km) | Depth MAE std (km) |
| --- | --- | --- | --- | --- | --- |
| Ridgecrest | Aggregated-GNN | 2.30 | 2.30 | 1.98 | 1.55 |
| Ridgecrest | PLAN-GCN | 8.96 | 6.94 | 1.43 | 1.34 |
| Ridgecrest | PLAN-GATv2 | 8.95 | 6.90 | **1.42** | 1.33 |
| Ridgecrest | PLAN-SAGE | 1.28 | 2.09 | 1.68 | 1.42 |
| Ridgecrest | PLAN-Trans | **1.09** | **1.41** | 1.43 | **1.33** |
| Japan (Hi-net) | Aggregated-GNN | 27.92 | 26.78 | 10.30 | 8.95 |
| Japan (Hi-net) | PLAN-GCN | 21.30 | 16.47 | 13.50 | 9.95 |
| Japan (Hi-net) | PLAN-GATv2 | 10.79 | 10.82 | **5.59** | **6.86** |
| Japan (Hi-net) | PLAN-SAGE | 4.87 | 6.11 | 12.85 | 9.41 |
| Japan (Hi-net) | PLAN-Trans | **4.81** | **5.83** | 10.62 | 8.89 |

Although the TransformGConv-based PLAN did not demonstrate particular superiority in depth estimation, it excelled in offset estimation. Further, the GATv2-based PLAN showed the lowest depth error, indicating potential for further improving the localization capability of the proposed PLAN. Similar to the Ridgecrest example, we assessed the phase-picking performance of the various models applied to the test data from Japan using the mPrecision, mRecall, and mF1 metrics (Table 3) and precision-recall curves (Fig. S3).

Table 2: Detection performance in the Ridgecrest region (picking probability 0.3). Bold values represent the best performance.

| Method | P mPrecision | P mRecall | P mF1 | S mPrecision | S mRecall | S mF1 |
| --- | --- | --- | --- | --- | --- | --- |
| PhaseNet | 94.83 | 92.78 | 93.79 | 84.50 | 80.65 | 82.53 |
| EQTransformer | **95.43** | 91.17 | 93.25 | 86.77 | 78.21 | 82.27 |
| PLAN-GATv2 | 95.05 | 93.02 | 94.03 | 85.55 | 80.49 | 82.95 |
| PLAN-SAGE | 94.99 | 93.07 | 94.02 | 85.65 | 81.48 | 83.51 |
| PLAN-Trans | 94.65 | **94.90** | **94.77** | **86.88** | **84.94** | **85.90** |

Table 3: Detection performance in Japan (picking probability 0.3). Bold values represent the best performance.

| Method | P mPrecision | P mRecall | P mF1 | S mPrecision | S mRecall | S mF1 |
| --- | --- | --- | --- | --- | --- | --- |
| PhaseNet | 94.87 | 94.97 | 94.92 | 88.26 | 84.38 | 86.28 |
| EQTransformer | **95.91** | 94.79 | 95.35 | **89.08** | 84.40 | 86.68 |
| PLAN-GATv2 | 95.14 | 93.81 | 94.47 | 88.35 | 81.87 | 84.98 |
| PLAN-SAGE | 95.65 | 94.90 | 95.27 | 88.45 | 84.19 | 86.27 |
| PLAN-Trans | 95.79 | **95.14** | **95.46** | 88.41 | **85.09** | **86.72** |

The TransformGConv-based PLAN model achieved superior results in terms of mRecall (95.14 for P-waves and 85.09 for S-waves) and mF1 (95.46 for P-waves and 86.72 for S-waves), whereas EQTransformer performed best in terms of mPrecision for both P- and S-waves. The high mRecall scores of TransformGConv-based PLAN indicate that a large proportion of the samples containing P/S-waves were correctly detected. However, this was achieved at the expense of a slightly lower mPrecision compared to that of EQTransformer, with some non-P/S-waves incorrectly classified as P/S-waves.
The mF1 score provides a more comprehensive evaluation of model performance, considering both the reduction in missed detections and the increase in correct detections. In this context, TransformGConv-based PLAN had the highest mF1 score, indicating that it effectively reduced the missed detections of P/S-waves and increased the proportion of correct detections.

Figure 3: Distributions of phase picking and location residuals in Japan. (**A**) Distribution of stations (black triangles) and training event locations (red circles) used in our study, where the events occurred between January 1, 2011 and December 31, 2011. (**B** and **C**) Similar to Fig. 2, (B and C) show the results of P-wave and S-wave arrival time residuals, respectively. (**D** and **E**) The offset and depth residuals between model predictions and the Japan Meteorological Agency (JMA) catalog of located events, respectively.

### Multi-station association module

To simultaneously pick P/S-waves from multiple stations, PLAN utilizes GNNs, which typically aggregate raw signals received at different stations. Feature aggregation across multiple stations introduces inter-station constraints and enhances the features at each station. However, because of the different travel times of the same source to different stations, directly aggregating the signals from multiple stations would deteriorate multi-station picking. To address this issue, the proposed method employs a multi-station association module to estimate the time shifts, as illustrated in Fig. 1. The input to this module is the offset of each station with respect to the event and its depth. The module output is the corrected time shift of the P/S-wave for each station. To align the P/S-wave features, the correction criteria for the P/S-wave features were set at 10 and 15 s in the Ridgecrest region and at 20 and 32 s in Japan. Using these criteria, the multi-station association module was trained to estimate the P/S-wave corrections for each station. These corrections are then used to align the waveform features, enabling the graph convolution to aggregate the features in a temporally aligned space. Consequently, the method can enhance or compensate for the features at each station by fusing the aligned features from other stations, allowing simultaneous and accurate multi-station picking. The multi-station association module can be utilized independently after training. It converts the distance and depth information into arrival-time information and calculates the S-P differential travel time (Crotwell et al., 1999). Figure 4 illustrates the arrival time differences of the P/S-waves at various stations in Japan. Although the training process utilized a maximum of 37 stations for a single event, the module can be adapted to cases with any number of stations (e.g., hundreds of stations, as shown in Fig. 4A-4D) to estimate the P/S-wave arrival time differences for all stations. These results indicate that the module effectively enforces physical constraints based on time shifts within the overall network.

Figure 4: Arrival time residuals of P/S-waves at various stations in the test dataset of Japan. (**A** to **D**) Arrival time differences for each event calculated by the multi-station association module, while (**E** to **H**) show the manually picked arrival time differences for the same events. The numbers at the top of the figures represent the identification numbers of the events in the Japan Meteorological Agency (JMA) catalog. The above results indicate that the S-P differential arrival times predicted by our multi-station association module are consistent with the manually picked differences. They also demonstrate that our module can calculate the arrival time differences for a larger number of stations.

## Discussion

PLAN is scalable for accommodating various numbers of stations per event. As PLAN is a network-level picking and location model, we further investigated the effect of different numbers of stations on the network performance for phase picking and earthquake location with a test set of the Ridgecrest region (Fig. 5).
We calculated the P- and S-wave picking residuals of the three different methods relative to the manual picking results (Fig. 5A-5B). The residuals of the single-station-based picking methods, PhaseNet and EQTransformer, exhibited oscillations for samples with station numbers 3-13 as the number of stations increased. In contrast, the residuals of our simultaneous multi-station picking method, PLAN, exhibited a significant decrease as the number of stations increased. Although the prediction residuals of the single-station-based methods should not be significantly associated with the number of stations, their prediction residuals still decreased when the number of stations was 13-16. This was probably because the events recorded by more stations tended to be larger and easier to pick. A comparison of the distribution of prediction errors for earthquake offsets and depths with respect to the number of stations indicated that the errors of PLAN were significantly smaller than those of the Aggregated-GNN method (Fig. 5C-5D). However, the errors in offset prediction did not exhibit a significant decrease with an increase in the number of stations. This was likely because a large number of stations would include more distant ones, which tend to have large offset prediction errors. As the offset error metric is defined as the average value over multiple stations, an increase in the number of stations can lead to a slightly higher average error for a single event. Furthermore, the statistical results for Japan (Fig. 6) were similar to those for the Ridgecrest region, with the PLAN method exhibiting smaller phase-picking errors than EQTransformer and PhaseNet, especially for S-wave picking. In addition, as the number of stations increased, the offset prediction error of PLAN became significantly smaller than that of the Aggregated-GNN. The ability of our network to handle varying numbers of stations can be attributed to the multi-station association module, which can be separated from the entire network and utilized in a manner similar to the TauP algorithm for estimating the arrival times of earthquakes at stations. Differing from the TauP algorithm, our association module does not depend on an input velocity model. Instead, it empowers the network to comprehend the concept of velocity, enabling the conversion of offsets into relative time shifts. Additionally, unlike the sequential processing of one station at a time in the TauP algorithm, our module simultaneously calculates the time shifts for multiple stations associated with a single event. In essence, our association module can be considered a computationally efficient 3D TauP algorithm that operates without requiring a velocity model.
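As a rough illustration of the shift-align/aggregate/unshift pattern built around the picking GNN, the snippet below sketches the two non-trainable physical layers. Treating the predicted shifts as integer sample counts and using circular rolling are simplifying assumptions of ours:

```python
import torch

def align_and_unalign(features, shifts):
    """Sketch of the physical layers around the picking GNN.

    features: (n_stations, n_channels, n_times) decoder feature maps
    shifts:   (n_stations,) per-station time shifts, in samples, derived
              from the association module's predicted offsets and depth
    """
    # Align: roll each station's features so P/S arrivals line up in time.
    aligned = torch.stack(
        [torch.roll(f, shifts=-int(s), dims=-1) for f, s in zip(features, shifts)]
    )
    # ... aggregate `aligned` across stations with graph convolutions here ...
    # Unalign: roll back so the picks land on the original time axis.
    unaligned = torch.stack(
        [torch.roll(f, shifts=int(s), dims=-1) for f, s in zip(aligned, shifts)]
    )
    return unaligned
```

`torch.roll` has no trainable parameters yet lets gradients flow through the features, which matches the role of a "non-trainable physical layer" described above.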
To evaluate the estimation accuracy of the arrival times computed by this module, we applied the estimated time shifts to align the waveforms of different stations (Fig. S4). Because the multi-station association module can accurately estimate the arrival times, the original waveforms from all stations were aligned accordingly. To further evaluate the estimation accuracy of the arrival times from this module, we employed the TauP algorithm based on the PREM model [Dziewonski and Anderson, 1981] for comparison (Fig. 7). We also calculated the correlation coefficient (R) between the output of each method and the manually picked P/S-wave time differences. During the training process, this module used the offsets and depths obtained from the earthquake location module as inputs. Therefore, inputting the predicted offsets and depths into this module (Fig. 7D) could yield better P/S-wave time differences than inputting the labeled offsets and depths (Fig. 7C). The multi-station association module with manually labeled offsets and depths yielded less consistent results than the TauP algorithm. This discrepancy may not be solely attributable to errors in the deep learning estimation; label inaccuracies may have also contributed to this outcome. This assertion is supported by the observation that using the neural network output as the input for the TauP algorithm resulted in greater correlation coefficients than when the labels were employed (Fig. 7A-7B). Among all the evaluated methods, the estimation results in Fig. 7D show the highest correlation coefficients. Generally, the multi-station association module and the TauP algorithm based on the PREM model have the same level of accuracy in calculating P/S-wave time differences.

Figure 5: Comparison of prediction results using different numbers of stations in the Ridgecrest region. (**A** to **D**) The distributions of prediction errors for P-wave, S-wave, offset, and depth, respectively. The x-axis represents the number of stations, the primary y-axis denotes the number of events recorded by a specific number of stations, and the secondary y-axis represents the prediction errors for phase picking and event localization. Note that for phase picking, the prediction residuals of PLAN (blue curves) decrease evidently as the number of stations increases. Moreover, the location errors of PLAN are significantly smaller than those of the Aggregated-GNN method.

Figure 6: Comparison of prediction results using different numbers of stations in Japan. (**A** to **D**) The four distributions are similar to those described in Fig. 5; the only difference is that we have used logarithmic coordinates for the secondary y-axis in (A) and (B).

In addition, out-of-distribution data were utilized to assess the network's temporal generalization ability and application potential. We applied PLAN and PhaseNet to pick the aftershock sequence of the 2022 M 7.4 Fukushima earthquake in Japan (Fig. S5). Traditional deep-learning-based phase-picking methods, such as PhaseNet, typically focus on picking seismic phases at a single station and perform the picking and association processes step-by-step. In such a sequentially independent processing flow, the accuracy of phase picking determines the precision of the subsequent earthquake association and location. Additionally, the consistency constraints among multiple stations for the same seismic event in the subsequent association process cannot be fed back to the previous step of phase picking to improve its accuracy.
Without consistency constraints among multiple stations, station-by-station phase picking may be sensitive to noise and fail to reasonably pick multiple P/S waves at stations with short epicentral distances, as shown in Fig. S5A-S5B. In contrast, PLAN is a simultaneous multi-station and multi-task processing approach that allows the aggregation of information from all stations and tends to pick P/S waves that can be consistently associated with the same events recorded by all the stations, leading to more reliable picking results. Furthermore, during the network training process, the picking, association, and location modules in the PLAN network mutually constrain and provide feedback to each other. As a result, a coherent and adaptive system can be achieved that accomplishes these three tasks simultaneously in a cohesive and integrated manner.

Figure 7: Comparison of the P/S-wave arrival time estimation using different methods. (**A** and **B**) The crossplots of the S-P differential arrival times computed by the TauP model with the input of offsets and depths from manual labels and network predictions, respectively. The x-axis represents the manually picked S-P differential arrival times. The y-axis represents the S-P differential arrival times obtained by the different methods. (**C** and **D**) The crossplots of the S-P differential arrival times predicted by our multi-station association module, which are consistent with those predicted by the TauP model. This indicates that the multi-station association module, detached from the entire trained PLAN model, behaves in a physically reasonable manner compared to the commonly used TauP model.

Some limitations remain in our method, particularly in handling continuous waveform data. This is mainly related to the construction of our training dataset and the corresponding training strategy. In our training data, although we allow for varying numbers of stations and missing phase-picking labels for some stations, we assume that each training sample contains only one earthquake event. We overlook the scenario in which the sample data do not contain an earthquake event (only noise), and when the data contain multiple events, we only provide labels for the main event. These considerations simplify the process of constructing our training samples but limit the diversity of the samples and the ability of the trained network model to handle continuous waveform data. However, we believe that these limitations can be addressed by using a more diverse and comprehensive training dataset. In summary, we present a novel all-in-one multi-task multi-station system called PLAN for earthquake monitoring, which is capable of simultaneous phase picking, phase association, and earthquake location. Unlike current CNN-based methods that perform phase picking station-by-station, and phase association and location separately, our proposed GNN-based multi-station multi-task system best utilizes the inherent inter-task and inter-station constraints. The multi-station association module estimates the phase shifts and improves the robustness and accuracy of the phase association process. Eventually, the resulting offsets and depth enable accurate event localization. Our method demonstrates the need to factor mutual constraints among tasks and stations into next-generation earthquake monitoring systems.
## Methods

### Graph based neural network

Several studies have shown that GNNs have the potential to deal with irregularly spaced stations for phase association and event localization [McBrearty et al., 2019b, van den Ende and Ampuero, 2020, Yano et al., 2021, McBrearty and Beroza, 2022b, 2023, Zhang et al., 2022, Bilal et al., 2022]. Here, we build a graph-based network (Fig. 1) for multi-station earthquake monitoring. To utilize the GNN, we first need to change the data from the matrix format to the graph format and employ a graph-based representation of the stations, where each station is represented as a node in the graph, and the three-channel data and the station location are used as the features of each node. In contrast to the current single-station processing methods [Ross et al., 2018b, a, Zhu and Beroza, 2018, Mousavi et al., 2019, Zhu et al., 2019, Pardo et al., 2019, Wang et al., 2019, Liu et al., 2020, Mousavi et al., 2020, Zhu et al., 2022a], which treat each three-channel record as an individual input sample, our approach inputs all the three-channel data received from multiple stations per event as a single sample. This allows for efficient aggregation of information from multiple stations during network training. As a result, the features of different stations can be effectively integrated using GNNs during the aggregation process. In this study, we evaluated various graph aggregation methods, including GCN [Kipf and Welling, 2016], GraphSAGE [Hamilton et al., 2017], GAT [Velickovic et al., 2017], GATv2 [Brody et al., 2021], and TransformGConv [Shi et al., 2020]. Through this evaluation, we determined that TransformGConv, which is based on the attention mechanism [Vaswani et al., 2017], is the most suitable module for the proposed PLAN. The message aggregation of TransformGConv can be represented as: \[\mathbf{x}_{i}^{\prime}=\mathbf{W}_{1}\mathbf{x}_{i}+\sum_{j\in\mathcal{N}(i)}\alpha_{i,j}\mathbf{W}_{2}\mathbf{x}_{j}, \tag{1}\] where \(\mathbf{x}_{i}^{\prime}\) represents the aggregated features at the source node, and \(\mathbf{x}_{i}\) and \(\mathbf{x}_{j}\) represent the features of the source and distant nodes before aggregation, respectively. \(\mathbf{W}_{1}\) and \(\mathbf{W}_{2}\) are trainable matrices. In addition, the attention coefficients \(\alpha_{i,j}\) are computed via dot-product attention as follows: \[\alpha_{i,j}=softmax\left(\frac{\left(\mathbf{W}_{3}\mathbf{x}_{i}\right)^{\top}\left(\mathbf{W}_{4}\mathbf{x}_{j}\right)}{\sqrt{d}}\right), \tag{2}\] where \(\mathbf{W}_{3}\) and \(\mathbf{W}_{4}\) are trainable matrices. Similar to the attention mechanism [Vaswani et al., 2017], the source feature \(\mathbf{x}_{i}\) and distant feature \(\mathbf{x}_{j}\) are transformed into a query vector and a key vector, respectively, using \(\mathbf{W}_{3}\) and \(\mathbf{W}_{4}\). Compared to other graph aggregation methods, the use of the attention-based mechanism (equation 1) in TransformGConv allows for a more fine-grained representation of the relationships between different stations, thereby improving the accuracy and efficiency of the proposed method.
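The TransformGConv of Shi et al. (2020) is available in PyTorch Geometric as `TransformerConv`, so the aggregation of eqs. (1)-(2) can be sketched as below; the feature width and number of attention heads are illustrative choices on our part, not the values used in PLAN:

```python
import torch
from torch_geometric.nn import TransformerConv

class StationAggregator(torch.nn.Module):
    """Two attention-based graph layers following eqs. (1)-(2)."""

    def __init__(self, dim=96, heads=4):
        super().__init__()
        # concat=False averages the attention heads, keeping width `dim`.
        self.conv1 = TransformerConv(dim, dim, heads=heads, concat=False)
        self.conv2 = TransformerConv(dim, dim, heads=heads, concat=False)

    def forward(self, x, edge_index):
        # x: (n_stations, dim) node features; edge_index: (2, n_edges)
        x = torch.relu(self.conv1(x, edge_index))
        return self.conv2(x, edge_index)
```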
### Network Architecture

Here, we design a multi-station multi-task network for simultaneous phase picking, association, and location. The network (Fig. S1) comprises four components: a waveform feature extraction module, an earthquake location module, a multi-station association module, and a physics-informed multi-station phase-picking module. Similar to previous deep-learning-based phase-picking approaches [Zhu and Beroza, 2018, Mousavi et al., 2020, Zhu et al., 2022a, c], we design an encoder to extract waveform features and a decoder to produce phase-picking results. However, to address the multi-station phase-picking problem, we introduce the GNN-based TransformGConv for aggregating features from multiple stations. Because graph aggregation for multi-station phase picking is most effective on time-aligned waveform features, we do not employ it in the waveform feature extraction module (Fig. S1A), where the features are still relatively shifted in time. Although we use a U-shaped neural network for feature extraction to solve the phase-picking problem, it could be replaced with other single-station-based phase-picking networks, such as EQTransformer. No matter what type of network architecture is used, the features extracted from the middle of the network are input into the earthquake location module, and the structure of the final few layers of the network is modified for the purpose of multi-station phase picking. Additionally, the kernel size of all convolutional layers in the waveform feature extraction network is set to 7. For the earthquake location module (Fig. S1B), we first extract features from the normalized coordinate information of the stations, scaled to the range [0,1], through two fully connected layers (3-48-96). Simultaneously, the waveform features extracted from each station are further processed through several convolutional layers and then flattened. Subsequently, the position and waveform features are concatenated and passed through two fully connected layers (192-192-96). This fuses the position and waveform features at each station. The fused features are further aggregated among multiple stations by several GNN layers to predict the offset of each station with respect to the event and the event depth. Because there is only one depth parameter for each sample, we add a global average pooling layer before the output. In summary, this module allows the integration of both location and waveform information into the feature extraction process, which is crucial for accurate event localization. Finally, in the physics-informed multi-station phase-picking module (Fig. S1D), we incorporate physics-motivated constraints of time alignment among waveforms corresponding to the same earthquake event. We first utilize the multi-station association module (Fig. S1C) to calculate the relative alignment shifts between stations using the estimated offsets and the depth of the event. We then use the shifts to align the waveforms to a common time standard and aggregate the features across multiple stations in the phase-picking module. Subsequently, the aggregated features at each station are shifted back to their original times and fed to two convolutional layers to yield the final P/S-wave picking results. This process leverages the physical information of the event location to improve the robustness and accuracy of the multi-station phase picking.
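A minimal sketch of the location head described above (station MLP 3-48-96, fusion MLP 192-192-96, graph aggregation, and global average pooling for the depth) could look as follows; for brevity we use a single graph layer, whereas the actual module stacks several:

```python
import torch
from torch_geometric.nn import TransformerConv, global_mean_pool

class LocationModule(torch.nn.Module):
    """Sketch: fuse station and waveform features, aggregate across
    stations, and regress per-station offsets plus one event depth."""

    def __init__(self, wave_dim=96):
        super().__init__()
        self.station_mlp = torch.nn.Sequential(          # 3 -> 48 -> 96
            torch.nn.Linear(3, 48), torch.nn.ReLU(), torch.nn.Linear(48, 96))
        self.fuse_mlp = torch.nn.Sequential(             # 192 -> 192 -> 96
            torch.nn.Linear(96 + wave_dim, 192), torch.nn.ReLU(),
            torch.nn.Linear(192, 96))
        self.gnn = TransformerConv(96, 96, heads=4, concat=False)
        self.offset_head = torch.nn.Linear(96, 1)        # one offset per station
        self.depth_head = torch.nn.Linear(96, 1)         # one depth per event

    def forward(self, coords, wave_feat, edge_index, batch):
        h = torch.cat([self.station_mlp(coords), wave_feat], dim=-1)
        h = torch.relu(self.gnn(self.fuse_mlp(h), edge_index))
        offsets = self.offset_head(h).squeeze(-1)
        # Global average pooling collapses station features to one depth.
        depth = self.depth_head(global_mean_pool(h, batch)).squeeze(-1)
        return offsets, depth
```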
### Loss function and training details

Our multi-task learning network model has three outputs corresponding to phase picking, phase association, and earthquake localization. To train the model, we define three different loss functions for these three tasks. For phase picking, instead of the commonly used cross-entropy, we choose the mean squared error (MSE) as the loss function, which is suitable for training in multi-task problems. To estimate the offsets and depth, which amounts to event localization, we also use the MSE loss, as suggested by previous studies (van den Ende and Ampuero, 2020; Zhang et al., 2022). Finally, to calculate the P/S-wave shifts, we define the loss functions as follows: \[\mathcal{L}_{\Delta p}=\sum_{i=1}^{n}\mid CTime_{p}-(label_{p_{i}}+\Delta t_{p_{i}})\mid, \tag{3}\] \[\mathcal{L}_{\Delta s}=\sum_{i=1}^{n}\mid CTime_{s}-(label_{s_{i}}+\Delta t_{s_{i}})\mid, \tag{4}\] where \(CTime_{p}\) and \(CTime_{s}\) represent the reference times to which the P- and S-wave picks are aligned, respectively. Moreover, \(label_{p_{i}}\) and \(label_{s_{i}}\) represent the manually picked P/S-wave arrival times for station \(i\), and \(\Delta t\) represents the predicted P/S-wave shift. Finally, we combine the three types of loss functions to form the overall objective function: \[L_{total}=\lambda_{1}\mathcal{L}_{picking-p}+\lambda_{1}\mathcal{L}_{picking-s}+\lambda_{2}\mathcal{L}_{\Delta p}+\lambda_{2}\mathcal{L}_{\Delta s}+\lambda_{3}\mathcal{L}_{offset}+\lambda_{3}\mathcal{L}_{depth}. \tag{5}\] Here, we set the coefficients \(\lambda_{1}\), \(\lambda_{2}\), and \(\lambda_{3}\) to 1. During the training process, the model was optimized using the Adam method (Kingma and Ba, 2014) with an initial learning rate of 0.001, which was gradually decreased with a decay rate of 0.9 every 100 epochs. To enhance the training efficiency, we randomly selected 2048 events from the training set for each epoch, rather than using the entire dataset. The model was trained for a total of 2000 epochs with a batch size of 16, and the training process required approximately 24 h on one NVIDIA Tesla A100 GPU.

### Evaluation metrics

In previous studies (Mousavi et al., 2020, Zhu et al., 2022c), true positive phase picks were defined as those within 0.5 s of the labeled pick; the rest were counted as false positives. Nevertheless, owing to potential errors in the labels of the dataset, such statistical results based on a single threshold may not be reliable. Thus, to better evaluate the performance of the algorithms, we introduce new metrics, mPrecision, mRecall, and mF1, which are calculated using multiple thresholds, following previous research (Zheng et al., 2022). The metrics are defined as: \[\mathrm{mPrecision}=(\mathrm{Precision}@11+\mathrm{Precision}@12+\cdots+\mathrm{Precision}@50)/40, \tag{6}\] \[\mathrm{mRecall}=(\mathrm{Recall}@11+\mathrm{Recall}@12+\cdots+\mathrm{Recall}@50)/40, \tag{7}\] \[\mathrm{mF1}=(\mathrm{F1}@11+\mathrm{F1}@12+\cdots+\mathrm{F1}@50)/40, \tag{8}\] where x@11, x@12, \(\cdots\), x@50 are the Precision, Recall, or F1 metrics when the thresholds are 11, 12, \(\cdots\), 50 samples (corresponding to 0.11 s, 0.12 s, \(\cdots\), 0.5 s), respectively. These metrics reward detectors with better picking results and can therefore more reasonably and fairly assess the performance of the different methods.
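For concreteness, the multi-threshold metrics of eqs. (6)-(8) can be computed as sketched below. The nearest-pick matching rule is a simplification of ours (it does not enforce one-to-one assignment between predicted and labeled picks):

```python
import numpy as np

def multi_threshold_metrics(pred_picks, true_picks, thresholds=range(11, 51)):
    """mPrecision/mRecall/mF1 of eqs. (6)-(8): average precision, recall,
    and F1 over tolerance thresholds of 11..50 samples (0.11-0.50 s at
    100 Hz). Inputs are 1-D arrays of pick times in samples for one phase."""
    pred_picks = np.asarray(pred_picks)
    true_picks = np.asarray(true_picks)
    p_list, r_list, f_list = [], [], []
    for th in thresholds:
        # A predicted pick is a true positive if a labeled pick lies within th.
        tp_pred = sum(np.any(np.abs(true_picks - p) <= th) for p in pred_picks)
        # A labeled pick is recovered if some prediction lies within th.
        tp_true = sum(np.any(np.abs(pred_picks - t) <= th) for t in true_picks)
        precision = tp_pred / max(len(pred_picks), 1)
        recall = tp_true / max(len(true_picks), 1)
        f1 = 2 * precision * recall / max(precision + recall, 1e-12)
        p_list.append(precision); r_list.append(recall); f_list.append(f1)
    return np.mean(p_list), np.mean(r_list), np.mean(f_list)
```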
2308.12172
Multiple-scale analysis of the simplest large-delay differential equation
A delayed term in a differential equation reflects the fact that information takes significant time to travel from one place to another within a process being studied. Despite the apparent similarity with ordinary differential equations, delay-differential equations (DDE) are known to be fundamentally different and to require a dedicated mathematical apparatus for their analysis. Indeed, when the delay is large, it was found that they can sometimes be related to spatially extended dynamical systems. The purpose of this paper is to explain this fact in the simplest possible DDE by way of a multiple-scale analysis. We show the asymptotic correspondence of that linear DDE with the diffusion equation. This partial differential equation arises from a solvability condition that differs from the ones usually encountered in textbooks on asymptotics: In the limit of large delays, the leading-order problem is a map and secular divergences at subsequent orders stem from forcing terms in that map.
Gregory Kozyreff
2023-08-23T14:50:32Z
http://arxiv.org/abs/2308.12172v1
# Multiple-scale analysis of the simplest large-delay differential equation

###### Abstract

A delayed term in a differential equation reflects the fact that information takes significant time to travel from one place to another within a process being studied. Despite the apparent similarity with ordinary differential equations, delay-differential equations (DDE) are known to be fundamentally different and to require a dedicated mathematical apparatus for their analysis. Indeed, when the delay is large, it was found that they can sometimes be related to spatially extended dynamical systems. The purpose of this paper is to explain this fact in the simplest possible DDE by way of a multiple-scale analysis. We show the asymptotic correspondence of that linear DDE with the diffusion equation. This partial differential equation arises from a solvability condition that differs from the ones usually encountered in textbooks on asymptotics: In the limit of large delays, the leading-order problem is a map and secular divergences at subsequent orders stem from forcing terms in that map.

## I Introduction

In mathematical modelling, to be able to describe a physical, chemical, or an automated process by a lumped-element model, _i.e._, by a finite set of time-dependent variables, is a promising starting point for fruitful analysis. Occasionally, the interaction between elements of the model takes place with a time delay that cannot be neglected. In such a case, the appropriate mathematical problem to be solved typically consists of one or more Delay Differential Equations (DDEs), also called functional differential equations, as opposed to ordinary ones. The consequences can be dramatic: first-order DDEs can have periodic or even chaotic solutions [1; 2; 3; 4; 5; 6; 7]. Hence, despite their similarity in writing, DDEs are fundamentally different from ODEs and require specific consideration. It is easy to observe, for instance, that linear, constant-coefficient DDEs generally admit an infinity of independent exponential solutions. They may thus be regarded as ODEs of infinite order [8]. In 1996, Giacomelli and Politi went one step further by showing the asymptotic equivalence of some DDEs to partial differential equations when the delay is large [9; 10]. This result followed earlier numerical simulations [11] pointing to the relevance of a 2D representation of the solutions of large-delay DDEs, see Figure 1(a). This "spatio-temporal equivalence" has been confirmed in several subsequent works [12; 13; 14] and the purpose of this paper is to discuss it in the simplest possible setting, namely the equation \[y^{\prime}(t)+y(t)=ry(t-T),\qquad T\gg 1, \tag{1}\] with initial data \(y(t)=\psi(t)\) in the range \(t\in[-T,0]\). The interpretation of this equation is straightforward: it models a linear system whose internal dynamics is governed by the left hand side, with a simple exponential decay, and which is subjected to a delayed feedback with strength \(r\). Here, \(T\) is the delay normalised by the decay rate of the isolated system. Equation (1) appears in recent developments as the deterministic part of a Langevin equation \[y^{\prime}(t)+y(t)=ry(t-T)+\mu\xi(t), \tag{2}\] where \(\xi(t)\) is a noisy forcing term. The latter can model Brownian motion with a memory effect [15; 16] or delayed control [17]. More recently, it was proposed to study chaotic diffusion mediated by a nonlinear DDE [18].
Equation (1) is also a special case of \[y^{\prime}(t)=ay(t)+by(t-T), \tag{3}\] which has been studied in detail as one of the simplest DDEs [19; 20; 21]. Concretely, we will derive from eq. (1) the multiple-scale asymptotic approximation \[y\sim r^{s}e^{-\ln(r)^{2}z/2}\times Y(s,z), \tag{4}\] with \(s\sim t/(T+1)\), \(z=t/T^{3}\) and \[\frac{\partial Y}{\partial z}=\frac{1}{2}\frac{\partial^{2}Y}{\partial s^{2}},\qquad Y(s,z)=Y(s-1,z). \tag{5}\] This calculation serves three pedagogical purposes:

* to highlight the general interest of investigating the large-\(T\) limit in DDEs,
* to detail the inner workings of the method of multiple scales in that particular framework,
* to provide instructors with a new, easily workable application of that method, beyond the usual perturbed harmonic oscillator, in the spirit of [22].

### Application of the multiple-scale approximation

Before deriving eqs. (4) and (5), let us demonstrate their usefulness as a tool to analyze eq. (1). Recall first that an initial condition \(\delta(s+s_{0})\) at \(z=z_{0}\) evolves under the above diffusion equation as [23] \[Y(s,z)=\frac{e^{-(s+s_{0})^{2}/2(z-z_{0})}}{\sqrt{2\pi(z-z_{0})}}. \tag{6}\] From this, we may deduce, when \(r=1\), that a peaked initial condition will recur periodically with a flattening profile and a peak value that decreases in time as \(1/\sqrt{t+c}\), where \(c\) is an appropriate constant. Indeed, a gaussian initial condition \[e^{-k(t+t_{0})^{2}}\] in eq. (1) translates into \(e^{-k(T+1)^{2}(s+s_{0})^{2}}\) in eq. (5). It may thus be assimilated, up to an appropriate factor, to a delta function in the \(s\) variable in the limit \(T\to\infty\). By the same token, any similarly localized initial condition can be asymptotically regarded as proportional to a delta function on the \(s\)-scale. Therefore, eq. (6) applies. Furthermore, \(t=-t_{0}\) corresponds to \(z_{0}=-t_{0}/T^{3}\). Hence, evaluating eq. (6) at \(s=-s_{0}\), we deduce the following envelope for the maxima of \(y(t)\): \[\frac{\mathrm{const}}{\sqrt{z-z_{0}}}\propto\frac{1}{\sqrt{t+t_{0}}}. \tag{7}\] When \(r\neq 1\), the exponential factor in eq. (4) must also be applied. One thus obtains the envelope \[\mathrm{const}\times\frac{r^{t\left(1/(T+1)+\ln(r)/T^{3}\right)}}{\sqrt{t+t_{0}}}. \tag{8}\] We compare the above formulas with numerical solutions of eq. (1) with \(r=1\) and \(r=1.1\) respectively in figs. 1 and 2. Fig. 1(a) confirms the relevance of the "spatio-temporal" representation of the solution, in which \(t\bmod(T+1)\) plays the role of \(s\), while \(\lfloor t/(T+1)\rfloor\) (where \(\lfloor x\rfloor\) denotes the integer part of \(x\)) is a discrete variable on which the evolution is so slow that it is asymptotically equivalent to \(z\). Next, fig. 1(b) and fig. 2 show the quantitative agreement of eqs. (7) and (8) with the numerical simulations. Especially striking is the non-monotonic behavior seen in the latter, which conforms to the intuition brought by eq. (5). Indeed, diffusion promotes an initial flattening and attenuation of the peaks, before the amplification factor \(r>1\) takes over. Note that eq. (4) and eq. (6) actually make up an asymptotic approximation of the Green function of eq. (1), in an alternative way to the exact solution given in [15]. This Green function is used to study the noisy extension eq. (2) of eq. (1).

Figure 1: Numerical solution of eq. (1) with \(r=1\), \(T=30\), and initial condition \(y(t)=20e^{-\left(t+25\right)^{2}}\) in the range \(-T<t<0\). (a) Pseudo spatio-temporal plot, demonstrating the \(T+1\) period of recurrence and the diffusive behavior. (b) Usual representation as a function of \(t\).

Figure 2: Numerical solution of eq. (1) with \(r=1.1\), \(T=30\), and initial condition \(y(t)=20e^{-\left(t+T/2\right)^{2}}\) in the range \(-T<t<0\). The orange curve is the envelope of the peaks predicted by eq. (8).
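Simulations such as those of figs. 1 and 2 are easy to reproduce. A minimal sketch, using forward Euler with a stored history (the method of steps); the step size and time horizon are arbitrary choices of ours, while the default parameters mirror fig. 1:

```python
import numpy as np

def simulate_dde(r=1.0, T=30.0, t_max=2000.0, dt=0.01,
                 psi=lambda t: 20.0 * np.exp(-(t + 25.0) ** 2)):
    """Forward-Euler integration of y'(t) + y(t) = r*y(t - T),
    with history psi(t) prescribed on [-T, 0]."""
    lag = int(round(T / dt))                # delay measured in steps
    y = list(psi(np.arange(-lag, 1) * dt))  # y(-T), ..., y(0)
    for _ in range(int(t_max / dt)):
        y_delayed = y[-lag - 1]             # y(t - T)
        y.append(y[-1] + dt * (r * y_delayed - y[-1]))
    t = np.arange(-lag, len(y) - lag) * dt
    return t, np.array(y)

t, y = simulate_dde()  # parameters of fig. 1
```

Plotting the output, the peaks of \(y\) recur with a period close to \(T+1\), and their envelope follows eq. (7) when \(r=1\) (or eq. (8) when \(r\neq 1\)).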
The rest of the paper is devoted to the derivation of eqs. (4) and (5). It starts in section II with a standard linear analysis of eq. (1). The characteristic equation is derived and then simplified in the large-\(T\) limit. This gives us insight into the relevant time scales for the multiple-scale analysis, which we develop in section III. We carry out the calculation twice, as the method can be implemented in two slightly different but equally instructive ways. For the sake of simplicity, we initially focus on \(r=1\) and deal with general \(r\) as a later step. Finally, we present our conclusions in section IV.

## II Linear spectrum

Let us look for a solution of (1) in the form of \(y(t)=\exp\lambda t\). Assuming \(r=1\) for simplicity, this yields the characteristic equation \[\lambda+1=e^{-\lambda T}. \tag{9}\] Rearranging terms, we have \[\lambda=-1+\frac{u}{T},\qquad ue^{u}=Te^{T}. \tag{10}\] The equation \(ye^{y}=x\) possesses a discrete infinity of complex solutions, denoted \(W_{n}(x)\) and called "ProductLog[n,x]" in Mathematica. \(W_{n}\) is the \(n^{\rm th}\) branch of the complex Lambert function [24]. Hence, the spectrum of eq. (1) is, exactly, \[\lambda_{n}=-1+\frac{W_{n}\left(Te^{T}\right)}{T}. \tag{11}\] Equation (11) is useful to draw the spectrum with a symbolic software but not very enlightening to anyone who is not an expert of the Lambert function. Fortunately, we can make significant progress in the limit \(T\to\infty\). Writing \(\lambda T=\sigma+i\omega\), eq. (9) yields \[\frac{\sigma}{T}+1-e^{-\sigma}\cos\left(\omega\right)=0,\qquad\frac{\omega}{T}+e^{-\sigma}\sin\left(\omega\right)=0. \tag{12}\] This suggests the expansions \(\sigma\sim\sigma_{0}+T^{-1}\sigma_{1}+T^{-2}\sigma_{2}+\cdots\) and \(\omega\sim\omega_{0}+T^{-1}\omega_{1}+\cdots\). At leading order, we get \[1-e^{-\sigma_{0}}\cos\left(\omega_{0}\right)=0,\qquad e^{-\sigma_{0}}\sin\left(\omega_{0}\right)=0, \tag{13}\] which implies \(\sigma_{0}=0\) and \(\omega_{0}=2n\pi\), where \(n\) is an integer. It is important to remark that \(n\) must be assumed to be \(O(1)\), so that \(\omega/T\) can be treated as \(O(1/T)\) in eq. (12). To carry out the calculation to higher orders presents no difficulty and can be proposed as an exercise. One obtains \[\sigma_{1}=0,\qquad\omega_{1}=-\omega_{0},\qquad\sigma_{2}=\frac{-\omega_{0}^{2}}{2},\qquad\omega_{2}=\omega_{0},\qquad\cdots \tag{14}\] Eventually, \[\lambda\sim\frac{1}{T}\left[-\frac{2n^{2}\pi^{2}}{T^{2}}+2in\pi\left(1-\frac{1}{T}+\frac{1}{T^{2}}\right)+\cdots\right]\sim\frac{1}{T}\left(-\frac{2n^{2}\pi^{2}}{T^{2}}+\frac{2in\pi}{1+1/T}\right)=-\frac{2n^{2}\pi^{2}}{T^{3}}+\frac{2in\pi}{T+1}. \tag{15}\] The exact and approximate spectra are depicted in fig. 3 and are found to be in good agreement as long as the imaginary part is small, _i.e._, for mode numbers satisfying \(2\pi n\ll T\).

Figure 3: The spectrum of eq. (1) for \(T=100\) (blue) compared to the approximation eq. (15) (orange circles). The separation between roots along the imaginary axis asymptotes to \(2\pi/(T+1)\).
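This agreement is easily checked numerically. For instance, with SciPy's implementation of the Lambert function (whose branch index matches Mathematica's ProductLog), and assuming \(T\) is moderate enough that \(Te^{T}\) remains representable in double precision:

```python
import numpy as np
from scipy.special import lambertw

T = 100.0
for n in range(5):
    lam_exact = -1.0 + lambertw(T * np.exp(T), k=n) / T   # eq. (11), r = 1
    lam_approx = (-2.0 * n**2 * np.pi**2 / T**3
                  + 2.0j * n * np.pi / (T + 1.0))          # eq. (15)
    print(n, lam_exact, lam_approx)
```

The branches with negative \(n\) give the complex-conjugate roots, so checking \(n\geq 0\) suffices.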
On the other hand, note that the diffusion equation eq. (5) has exponential solutions \(\exp(\kappa z+i\Lambda s)\) provided that \[\kappa=-\Lambda^{2}/2, \tag{16}\] while the periodic boundary condition in \(s\) imposes \(\Lambda=2n\pi\), with integer \(n\). Using the definitions of \(s\) and \(z\), one thus obtains \(\exp\{[-2n^{2}\pi^{2}/T^{3}+2in\pi/(T+1)]t\}\), which is consistent with eq. (15). The imaginary part of \(\lambda\) points to an oscillatory evolution with approximate period \(T+1\). Notice that time in eq. (1) is rescaled to the intrinsic time scale of the isolated (\(r=0\)) dynamical system. With a more general time unit, this intrinsic time scale would numerically be given by \(\bar{t}_{i}\), the delay by \(\bar{T}\), and the period of damped oscillations by \(\bar{T}+\bar{t}_{i}\). In view of this, the periodicity is fixed by the time required for information to be fed back into the dynamical system plus the time required to internally process it. Such a conclusion would be difficult to draw from the contemplation of eq. (11) alone. The analysis for \(r\neq 1\) follows the same pattern. We leave it as an exercise to show that, in the general case, \[\lambda\sim\frac{1}{T}\left[-\frac{\ln r}{2T^{2}}-\frac{2n^{2}\pi^{2}}{T^{2}}+\left(\ln r+2in\pi\right)\left(1-\frac{1}{T}+\frac{1+\ln r}{T^{2}}\right)+\cdots\right]. \tag{17}\]

## III Multiple-scale analysis

The linear stability analysis with \(r=1\) reveals that the time scales for oscillations and damping become infinitely separated as \(T\to\infty\), which suggests a multiple-scale analysis. In this section, we propose two slightly distinct implementations of the method. The first one is a two-time expansion of the solution, \[y(t)\sim Y(s,z), \tag{18}\] in which the fast time is a strained coordinate: \[s=\left(1+\frac{a_{1}}{T}+\frac{a_{2}}{T^{2}}+\cdots\right)\frac{t}{T}, \tag{19}\] and \(z=t/T^{3}\). In that implementation, we construct a solution whose periodicity is strictly \(1\) in \(s\). The constants \(a_{1}\) and \(a_{2}\) are determined in the course of resolution, but they can in fact be guessed from the results of the linear stability analysis. The alternative is to pose a three-time ansatz \[y(t)\sim Y(\tau,\eta,z), \tag{20}\] where \(\tau=t/T\) and \(\eta=t/T^{2}\), with no preconception of how \(Y\) must depend on each time scale. Both approaches have their advantages and disadvantages. To introduce multiple time scales and convert a total derivative into a partial differential operator can be a deterring prospect to students who are exposed to the method for the first time. This speaks in favor of minimizing the number of time scales and, hence, of choosing the ansatz (18). On the other hand, the strained \(s\) coordinate may appear pulled out of a hat. This can make (20) more easily acceptable to those who prefer a step-by-step approach. Eventually, given the lightness of the calculations, it may be useful for the instructor to present the two variants of the calculation and, in so doing, to demonstrate the robustness of the method. In order to keep the calculation down to its essential details, we focus first on the case \(r=1\). We treat the case \(r\neq 1\) as a later step.

### Two-time calculation (\(r=1\))

Assuming that \(y\) is asymptotically given by the ansatz (18), differentiation with respect to \(t\) yields, by the chain rule, \[y^{\prime}(t)\sim\left(1+\frac{a_{1}}{T}+\frac{a_{2}}{T^{2}}+\cdots\right)\frac{1}{T}\frac{\partial Y}{\partial s}+\frac{1}{T^{3}}\frac{\partial Y}{\partial z}. \tag{21}\]
On the other hand, the delayed term becomes \[y(t-T)\sim Y\left(s-1,z\right)-\frac{a_{1}}{T}\frac{\partial}{\partial s}Y\left(s-1,z\right)+\frac{1}{T^{2}}\left[\frac{a_{1}^{2}}{2}\frac{\partial^{2}}{\partial s^{2}}-a_{2}\frac{\partial}{\partial s}-\frac{\partial}{\partial z}\right]Y\left(s-1,z\right)+\cdots \tag{22}\] Hence, eq. (1) is transformed into \[\left(\frac{1}{T}+\frac{a_{1}}{T^{2}}\right)\frac{\partial Y(s,z)}{\partial s}+Y(s,z)\sim Y\left(s-1,z\right)-\frac{a_{1}}{T}\frac{\partial}{\partial s}Y\left(s-1,z\right)\\ +\frac{1}{T^{2}}\left[\frac{a_{1}^{2}}{2}\frac{\partial^{2}}{\partial s^{2}}-a_{2}\frac{\partial}{\partial s}-\frac{\partial}{\partial z}\right]Y\left(s-1,z\right)+O\left(\frac{1}{T^{3}}\right). \tag{23}\] Expanding \(Y\) as \(Y_{0}+T^{-1}Y_{1}+T^{-2}Y_{2}+\cdots\), one obtains, at \(O(1)\), \[Y_{0}(s,z)=Y_{0}\left(s-1,z\right), \tag{24}\] so that \(Y_{0}\) is periodic with period \(1\) in \(s\). Next, collecting \(O(1/T)\) terms and using eq. (24), we find \[Y_{1}(s,z)=Y_{1}\left(s-1,z\right)-\left(1+a_{1}\right)\frac{\partial}{\partial s}Y_{0}(s,z). \tag{25}\] The forcing term, being periodic, causes a secular divergence of \(Y_{1}\): \[Y_{1}(s+j,z)=Y_{1}\left(s,z\right)-j\left(1+a_{1}\right)\frac{\partial}{\partial s}Y_{0}(s,z). \tag{26}\] Hence, the asymptotic ordering \(T^{-1}Y_{1}\ll Y_{0}\) breaks down for \(j=O(T)\), unless we set \[a_{1}=-1. \tag{27}\] At \(O(T^{-2})\), we then find \[Y_{2}(s,z)=Y_{2}\left(s-1,z\right)+\left(1-a_{2}\right)\frac{\partial Y_{0}}{\partial s}+\frac{1}{2}\frac{\partial^{2}Y_{0}}{\partial s^{2}}-\frac{\partial Y_{0}}{\partial z}, \tag{28}\] where \(Y_{0}\) is evaluated at \((s,z)\). Following the same reasoning as at the previous order, the solvability condition is \[\frac{\partial Y_{0}}{\partial z}=(1-a_{2})\,\frac{\partial Y_{0}}{\partial s}+\frac{1}{2}\,\frac{\partial^{2}Y_{0}}{\partial s^{2}}. \tag{29}\] At this stage, it may seem that the constant \(a_{2}\) is still free. However, if we insist that the periodicity of the solution is strictly \(1\) in \(s\), then \(Y_{0}\) is a combination of functions of the form \(\exp(\kappa z+2in\pi s)\), where, according to eq. (29), \(\kappa=2in\pi(1-a_{2})-2n^{2}\pi^{2}\). Now, the imaginary part of \(\kappa\) perturbs the period of the solution and, to avoid this, we make it vanish: \[a_{2}=1, \tag{30}\] which agrees with eq. (15). We thus obtain eq. (5).

### Three-time calculation

If we assume the multi-time ansatz eq. (20), then \[y^{\prime}(t)\sim\frac{1}{T}\frac{\partial Y}{\partial\tau}+\frac{1}{T^{2}}\frac{\partial Y}{\partial\eta}+\frac{1}{T^{3}}\frac{\partial Y}{\partial z} \tag{31}\] and \[y(t-T)\sim Y\left(\tau-1,\eta-T^{-1},z-T^{-2}\right)\sim Y\left(\tau-1,\eta,z\right)-\frac{1}{T}\frac{\partial}{\partial\eta}Y\left(\tau-1,\eta,z\right)-\frac{1}{T^{2}}\left(\frac{\partial}{\partial z}-\frac{1}{2}\frac{\partial^{2}}{\partial\eta^{2}}\right)Y\left(\tau-1,\eta,z\right). \tag{32}\] We thus get to solve \[\left(1+\frac{1}{T}\frac{\partial}{\partial\tau}+\frac{1}{T^{2}}\frac{\partial}{\partial\eta}+\frac{1}{T^{3}}\frac{\partial}{\partial z}\right)Y\left(\tau,\eta,z\right)=\left(1-\frac{1}{T}\frac{\partial}{\partial\eta}-\frac{1}{T^{2}}\frac{\partial}{\partial z}+\frac{1}{2T^{2}}\frac{\partial^{2}}{\partial\eta^{2}}+\cdots\right)Y\left(\tau-1,\eta,z\right). \tag{33}\]
Expanding \(Y\) again as \(Y_{0}+T^{-1}Y_{1}+\cdots\), we obtain \[Y_{0}(\tau,\eta,z)=Y_{0}(\tau-1,\eta,z), \tag{34}\] _i.e._, the same map as before, with \(s\) replaced by \(\tau\). At \(O(T^{-1})\), we have \[Y_{1}(\tau,\eta,z)-Y_{1}(\tau-1,\eta,z)=-\frac{\partial}{\partial\tau}Y_{0}(\tau,\eta,z)-\frac{\partial}{\partial\eta}Y_{0}(\tau-1,\eta,z)=-\left(\frac{\partial}{\partial\tau}+\frac{\partial}{\partial\eta}\right)Y_{0}(\tau,\eta,z). \tag{35}\] Since the right hand side is a period-1 function in \(\tau\), solvability requires that it vanishes: \[\frac{\partial Y_{0}}{\partial\tau}+\frac{\partial Y_{0}}{\partial\eta}=0. \tag{36}\] The general solution is, simply, \[Y_{0}(\tau,\eta,z)=Y_{0}(\tau-\eta,z), \tag{37}\] or, equivalently, \[Y_{0}(\tau,\eta,z)=Y_{0}(S,z),\qquad S=\tau-\eta=\frac{t}{T}\left(1-\frac{1}{T}\right). \tag{38}\] The equation for \(Y_{1}\) then implies that it is periodic in \(\tau\). Next, at \(O(T^{-2})\), and taking into account the periodicity of \(Y_{0}\) and \(Y_{1}\), the problem for \(Y_{2}\) is \[Y_{2}(\tau,\eta,z)-Y_{2}(\tau-1,\eta,z)=-\left(\frac{\partial}{\partial\tau}+\frac{\partial}{\partial\eta}\right)Y_{1}(\tau,\eta,z)+\left(\frac{\partial}{\partial S}+\frac{1}{2}\frac{\partial^{2}}{\partial S^{2}}-\frac{\partial}{\partial z}\right)Y_{0}(S,z). \tag{39}\] Secular divergence of \(Y_{2}\) is avoided by making the right hand side vanish: \[\frac{\partial Y_{1}}{\partial\tau}+\frac{\partial Y_{1}}{\partial\eta}=\frac{\partial Y_{0}}{\partial S}+\frac{1}{2}\frac{\partial^{2}Y_{0}}{\partial S^{2}}-\frac{\partial Y_{0}}{\partial z}. \tag{40}\] This is an equation for \(Y_{1}\), with general solution \[Y_{1}(\tau,\eta,z)=Y_{1}(S,z)+\eta\left(\frac{\partial Y_{0}}{\partial S}+\frac{1}{2}\frac{\partial^{2}Y_{0}}{\partial S^{2}}-\frac{\partial Y_{0}}{\partial z}\right). \tag{41}\] There is therefore a secular divergence of \(Y_{1}\) in \(\eta\) and, to avoid this, we have a solvability condition on the solvability condition: \[\frac{\partial Y_{0}}{\partial z}-\frac{\partial Y_{0}}{\partial S}=\frac{1}{2}\frac{\partial^{2}Y_{0}}{\partial S^{2}}. \tag{42}\] Finally, if we write \[Y_{0}(S,z)=\phi(s,z),\qquad s=S+z=\tau-\eta+z=\frac{t}{T}\left(1-\frac{1}{T}+\frac{1}{T^{2}}\right), \tag{43}\] we obtain \[\frac{\partial\phi}{\partial z}=\frac{1}{2}\frac{\partial^{2}\phi}{\partial s^{2}}, \tag{44}\] _i.e._, the desired result. Note that the fact that \(Y_{0}(\tau,\eta,z)\) is of period \(1\) in \(\tau\) means that \(\phi\) is of period \(1\) in \(s\), in full consistency with what precedes.

**Remark** One may be tempted to purely and simply set \(Y_{1}=0\), after noting (i) that the initial problem is linear and (ii) that eq. (35) is homogeneous after applying the solvability condition. Indeed, this would expedite the derivation of eq. (44). However, it would conceal the structure of nested solvability conditions appearing at \(O(T^{-2})\). While there is nothing wrong in seeking the shortest route to the answer, especially in the eyes of an applied mathematician, one should bear in mind that \(Y_{1}\) should, in all generality, be retained. This correction may be required to properly handle \(O(T^{-1})\) terms in the initial condition or weak nonlinearities. Interestingly, while there is only a single equation to solve at \(O(T^{-2})\), more than one solvability condition can be extracted from it: the main one, eq. (40), and the secondary one, eq. (42).

### The case \(r\neq 1\)

We now revise the calculation in the more general case \(r\neq 1\).
### The case \(r\neq 1\) We now revisit the calculation in the more general case \(r\neq 1\). Here again, the two implementations yield the same result. We limit ourselves to the "two-time" calculation and assume \[y\sim f(s,z), \tag{45}\] where \(s\) is given by eq. (19). This time, eq. (1) is transformed into \[\left(\frac{1}{T}+\frac{a_{1}}{T^{2}}\right)\frac{\partial f(s,z)}{\partial s}+f(s,z)\sim r\bigg{\{}f\left(s-1,z\right)-\frac{a_{1}}{T}\frac{\partial}{\partial s}f\left(s-1,z\right)\\ +\frac{1}{T^{2}}\left[\frac{a_{1}^{2}}{2}\frac{\partial^{2}}{\partial s^{2}}-a_{2}\frac{\partial}{\partial s}-\frac{\partial}{\partial z}\right]f\left(s-1,z\right)\bigg{\}}+O\left(\frac{1}{T^{3}}\right). \tag{46}\] Expanding \(f\) as \(f_{0}+T^{-1}f_{1}+T^{-2}f_{2}+\cdots\), the \(O(1)\) problem is now \[f_{0}(s,z)=rf_{0}\left(s-1,z\right). \tag{47}\] Setting \(f_{0}(s,z)=r^{s}F_{0}(s,z)\), this yields \[F_{0}(s,z)=F_{0}\left(s-1,z\right), \tag{48}\] so that \(F_{0}\) is periodic with period \(1\) in \(s\). Next, collecting \(O(1/T)\) terms and using eqs. (47) and (48), we find \[f_{1}(s,z)=rf_{1}\left(s-1,z\right)-\left(1+a_{1}\right)r^{s}\left[\ln(r)F_{0}(s,z)+\frac{\partial}{\partial s}F_{0}(s,z)\right]. \tag{49}\] The terms between brackets, being periodic, cause a secular divergence of \(f_{1}\). Indeed, letting \(f_{1}(s,z)=r^{s}F_{1}(s,z)\), one rapidly finds that \[F_{1}(s+j,z)=F_{1}\left(s,z\right)-\left(1+a_{1}\right)j\left[\ln(r)F_{0}(s,z)+\frac{\partial}{\partial s}F_{0}(s,z)\right]. \tag{50}\] Hence, irrespective of the factor \(r^{s}\), \(T^{-1}f_{1}\) ceases to be small compared to \(f_{0}\) when \(j=O(T)\). To prevent this, we thus set \[a_{1}=-1. \tag{51}\] At \(O(T^{-2})\), we have \[f_{2}(s,z)=rf_{2}\left(s-1,z\right)+r^{s}\Bigg{\{}\left[\frac{\ln(r)^{2}}{2}+\ln(r)\left(1-a_{2}\right)\right]F_{0}(s,z)+\left(1+\ln(r)-a_{2}\right)\frac{\partial F_{0}}{\partial s}+\frac{1}{2}\frac{\partial^{2}F_{0}}{\partial s^{2}}-\frac{\partial F_{0}}{\partial z}\Bigg{\}}. \tag{52}\] The solvability condition is now \[\frac{\partial F_{0}}{\partial z}=l_{0}F_{0}(s,z)+l_{1}\frac{\partial F_{0}}{\partial s}+\frac{1}{2}\frac{\partial^{2}F_{0}}{\partial s^{2}}, \tag{53}\] where \(l_{0}=\frac{\ln(r)^{2}}{2}+\ln(r)\left(1-a_{2}\right)\) and \(l_{1}=1+\ln(r)-a_{2}\). The term multiplied by \(l_{1}\) causes a change in the periodicity of the solution and is therefore set to zero: \[a_{2}=1+\ln(r). \tag{54}\] This agrees with eq. (17). Hence, \(l_{0}=\frac{-\ln(r)^{2}}{2}\). It only remains to make a small change of variable, namely to write \(F_{0}=e^{l_{0}z}Y(s,z)\), to obtain eq. (5). ## IV Discussion The derivation presented in this paper hints at the great generality of the diffusion equation eq. (5) as the linear backbone of large-delay differential equations. Not all DDEs develop the multiple-scale structure presented here (see below) but when they do, diffusion is to be expected from the Taylor expansion of the delayed term with the appropriate scales: see the \(O(T^{-2})\) terms in the right hand side of eq. (32). An interpretation of the delay in a DDE like eq. (1) is that the output of a given system undergoes some kind of propagation before being fed back. The form of the delayed term, \(ry(t-T)\), is strongly suggestive of a hyperbolic PDE as a mediator of this feedback, a point of view that was emphasized in [25]. Why, then, should a parabolic PDE such as eq. (5) arise out of a hyperbolic one? The answer to this question lies in the fact that the system subjected to feedback is itself dissipative, being described in isolation by the left hand side of eq. (1). 
Loosely speaking, a complete feedback cycle includes a dispersion-less propagation of duration \(T\) followed by an attenuation of unit duration. In the spectral domain, the left hand side of eq. (1) is a low-pass filter. This is where information is degraded, as happens in diffusive processes. In this regard, one should bear in mind that, numerically, \(T\) is the ratio of the delay to the internal dissipation time of the system. As we wrote earlier, in a general unit system, the delay is numerically given by \(\bar{T}\) and the internal dynamics is characterized by \(\bar{t}_{i}\), with \(T=\bar{T}/\bar{t}_{i}\). It follows that the parameter \(T\) does not appear as a result of the delay alone. It exists thanks to both the delay _and_ the internal time scale. Even if the latter is short compared to the former, the dissipation process occurring on the internal timescale is not to be neglected. Another facet of this question is given by the picture of the linear spectrum in fig. 3. Here, the plot of the real part of the eigenvalues as a function of their imaginary part is analogous to dispersion relations found in pattern formation, where the exponential growth rate of a perturbation is plotted as a function of its wave number [26; 27; 28]. In that context, the existence of a small band of wave numbers with near-zero, maximum growth rate generically leads to diffusive terms in amplitude equations. Presently, the spectrum is discrete but tends to a continuum as \(T\rightarrow\infty\). Hence, with a trained eye, diffusion can be anticipated directly at the sight of fig. 3. A general technical feature exemplified by the present calculation is that the leading order problem is a map, see eqs. (24), (34) and (47). This is because the main time scale is asymptotically set by the delay, and time derivatives of functions evolving on this time scale are \(O(T^{-1})\). In problems where the period is not asymptotically given by the delay, a map is _not_ expected as the leading order problem, even if the delay is large. Consider for example Minorsky's equation [29] \[y^{\prime\prime}(t)+\epsilon y^{\prime}(t)+\Omega^{2}y(t)=-by^{\prime}(t-T)+\epsilon cy^{\prime}(t-T)^{3},\qquad\qquad\epsilon\ll 1,\qquad\qquad T=O\left(\epsilon^{-2}\right), \tag{55}\] which displays a Hopf bifurcation with an \(O(1)\) frequency. Here \(\Omega\) sets the oscillation period and the leading-order problem of the multiple-scale analysis is the familiar harmonic oscillator. The linear spectrum is again densified by the largeness of \(T\) but it now displays a maximum with near vanishing \(\mathrm{Re}(\lambda)\) in the vicinity of \(\mathrm{Im}(\lambda)=\Omega\). This is akin to a Turing bifurcation in spatially extended dynamical systems. Following the preceding discussion, diffusion is therefore expected in the amplitude equation. Taking nonlinear terms into account one eventually obtains a Ginzburg-Landau equation (see [30] for a detailed derivation.) In any case, whether the leading order problem is a map or a harmonic oscillator, the general procedural idea of the multiple scale analysis is the same: identify and kill secular divergences. The present paper may give the reader the false impression that multiple-scale analysis is the method of choice to study all DDEs in the large-delay limit. This is not the case and the following equation is a simple counter-example: \[-y^{\prime}(t)+y(t)=y(t-T)+y^{3}(t). \tag{56}\] Compared to eq. (1), we have simply changed the sign of the time derivative and added a nonlinear term to avoid blow-up. A numerical simulation is shown in fig. 4. Independently of the initial condition, the solution of this equation asymptotes to sustained square wave oscillations of period \(2T\) that display abrupt switching between approximately \(\sqrt{2}\) and \(-\sqrt{2}\). This dynamical behavior is obviously not compatible with eq. (5). To study such a limit cycle, the method of matched asymptotic expansions appears more appropriate [31; 32].
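The claimed square-wave behavior of eq. (56) is easy to reproduce with the same kind of Euler-with-history time stepping as in the earlier sketch; the parameter values and the random initial data below are illustrative choices of our own, not those behind fig. 4.

```python
import numpy as np

# Square-wave dynamics of eq. (56), rewritten as y'(t) = y(t) - y(t-T) - y(t)^3.
rng = np.random.default_rng(0)
T, dt = 50.0, 0.01
n_delay = int(T / dt)
n_steps = 20 * n_delay                   # ten periods of length 2T

y = np.empty(n_steps + n_delay + 1)
y[: n_delay + 1] = 0.1 * rng.standard_normal(n_delay + 1)   # small random history

for i in range(n_delay, n_delay + n_steps):
    y[i + 1] = y[i] + dt * (y[i] - y[i - n_delay] - y[i] ** 3)

# After the transient, sampling every half delay shows alternating blocks,
# i.e. a square wave of period 2T switching between about +1.414 and -1.414.
print(np.round(y[11 * n_delay :: n_delay // 2], 2))
```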
###### Acknowledgements. G.K. is supported by the Fonds de la Recherche Scientifique - FNRS (Belgium).
2310.17240
Strategic Abilities of Forgetful Agents in Stochastic Environments
In this paper, we investigate the probabilistic variants of the strategy logics ATL and ATL* under imperfect information. Specifically, we present novel decidability and complexity results when the model transitions are stochastic and agents play uniform strategies. That is, the semantics of the logics are based on multi-agent, stochastic transition systems with imperfect information, which combine two sources of uncertainty, namely, the partial observability agents have on the environment, and the likelihood of transitions to occur from a system state. Since the model checking problem is undecidable in general in this setting, we restrict our attention to agents with memoryless (positional) strategies. The resulting setting captures the situation in which agents have qualitative uncertainty of the local state and quantitative uncertainty about the occurrence of future events. We illustrate the usefulness of this setting with meaningful examples.
Francesco Belardinelli, Wojciech Jamroga, Munyque Mittelmann, Aniello Murano
2023-10-26T08:38:17Z
http://arxiv.org/abs/2310.17240v1
# Strategic Abilities of Forgetful Agents in Stochastic Environments ###### Abstract In this paper, we investigate the probabilistic variants of the strategy logics ATL and ATL\({}^{*}\) under imperfect information. Specifically, we present novel decidability and complexity results when the model transitions are stochastic and agents play uniform strategies. That is, the semantics of the logics are based on multi-agent, stochastic transition systems with imperfect information, which combine two sources of uncertainty, namely, the partial observability agents have on the environment, and the likelihood of transitions to occur from a system state. Since the model checking problem is undecidable in general in this setting, we restrict our attention to agents with memoryless (positional) strategies. The resulting setting captures the situation in which agents have qualitative uncertainty of the local state and quantitative uncertainty about the occurrence of future events. We illustrate the usefulness of this setting with meaningful examples. ## 1 Introduction Complex and interacting Multi-Agent Systems (MAS) often face different kinds of uncertainty. One of the sources of uncertainty is the inability to completely observe the current local situation (e.g., whether there is public transport available to the target destination). On the other hand, the occurrence of many natural events and the future behaviour of other agents, while they cannot be known with certainty, can be estimated based on experiments or past observations. For instance, while we cannot know if the bus is going to arrive on time, we may have observed that this happens \(70\%\) of the time. Clearly, intelligent autonomous agents need to consider both the imperfect information about the local state and the likelihood of stochastic events when making strategic decisions and plans. To see this, consider, for instance, the problem of online mechanisms, which are preference aggregation games in dynamic environments with multiple agents and private information. Many multi-agent problems are inherently dynamic rather than static. Practical examples include the problem of allocating computational resources (bandwidth, CPU, etc.) to processes arriving over time, selling items to a possibly changing group of buyers with uncertainty about the future supply, and selecting employees from a dynamically changing list of candidates [12]. _Probabilistic model-checking_ is a technique for the formal and automated analysis of probabilistic systems that can be modeled by stochastic state-transition models [17]. Its aim is to establish the correctness of such systems against probabilistic specifications, which may describe, e.g., the probability that an unsafe event occurs, or the ability of a coalition to ensure the completion of a task. Logic-based approaches have been widely and successfully applied for probabilistic verification of MAS. For instance, probabilistic model-checking techniques have been used for verification of preference aggregation mechanisms [13], negotiation games [1], team formation protocols [1], and stochastic behaviors in dispersion games [1], to name a few. Gutierrez et al. (2021) investigates the problem of deciding whether the probability of satisfying a given temporal formula in a concurrent stochastic game is 1 or greater than 0. Kwiatkowska et al. (2022) details how verification techniques can be developed and implemented for concurrent stochastic games. 
In this paper, we consider logics for reasoning about strategic abilities while taking into account both incomplete information and probabilistic behaviors of the environment and agents. We study the Probabilistic Alternating-time Temporal Logics PATL and PATL\({}^{*}\)[10, 11] under imperfect information (II) for a classic type of agents [11] called imperfect-recall (that is, agents who use memoryless strategies, also called Markovian strategies or policies). Model checking PATL\({}^{*}\) under II for agents with perfect-recall (who use memoryful strategies) is known to be undecidable in general even for the fragment with a single player [1]. We introduce and motivate the problem of strategic reasoning under combined types of uncertainty and memoryless agents. We then provide results on the model-checking complexity for PATL with memoryless deterministic strategies for the coalition and point out directions for challenging open questions. **Related Work.** Recently, much work has been done on logics for strategic reasoning in Multi-Agent Systems, starting from the pioneering work on Alternating-time Temporal Logics ATL and ATL\({}^{*}\)[1]. These logics enable reasoning about the strategic abilities of agents in a cooperative or competitive system. \(\mathsf{ATL}\) has been extended in various directions, considering for instance strategy contexts (Laroussinie and Markey, 2015) or adding imperfect information (Jamroga and Bulling, 2011). Strategy Logic (\(\mathsf{SL}\)) (Chatterjee, Henzinger, and Piterman, 2010; Mogavero et al., 2014) extends \(\mathsf{ATL}\) to treat strategies as first-order variables. Contexts of imperfect information have been extensively considered in the literature on formal verification (see, for instance, (Dima and Tiplea, 2011; Kupferman and Vardi, 2000; Jamroga and Agotnes, 2007; Reif, 1984; Bulling and Jamroga, 2014; Berthon et al., 2021; Belardinelli et al., 2020; Berwanger and Doyen, 2008)). Generally, imperfect information in MAS entails higher complexity, and the model-checking problem may even be undecidable when considered in the context of memoryful strategies (Dima and Tiplea, 2011). In order to retrieve a decidable model-checking problem, it is interesting to study imperfect information MAS with memoryless agents (Cermak et al., 2018). Several works consider the verification of systems against specifications given in probabilistic logics. In particular, Wan, Bentahar, and Hamza (2013) study the model-checking problem for Probabilistic Epistemic Computational Tree Logic with semantics based on probabilistic interpreted systems. In the context of MAS, (Huang and Luo, 2013) studies an ATL-like logic for stochastic MAS in a setting in which agents play deterministic strategies and have probabilistic knowledge about the system. (Fu et al., 2018) shows that model-checking an epistemic logic with temporal operators under strategies that depend only on agents' observation history is undecidable. Chen and Lu (2007) propose model-checking algorithms for Probabilistic ATL in the perfect information setting. Perfect information was also considered with specification in Probabilistic Alternating-Time \(\mu\)-Calculus (Song et al., 2019) and Probabilistic Strategy Logic (Aminof et al., 2019). ATL-based probabilistic logics were also considered for the verification of unbounded parameterized MAS (Lomuscio and Pirovano, 2020), for resource-bounded MAS (Nguyen and Rakib, 2019), and under assumptions over opponents' strategies (Bulling and Jamroga, 2009). 
The closest related work is (Huang, Su, and Zhang, 2012), which considers the logic \(\mathsf{PATL}^{*}\) under incomplete information and synchronous perfect recall. The complexity results show that the model-checking problem is in general undecidable even for the single-agent fragment of the logic. Also related are the works in (Gripion and Serre, 2009; Doyen and Raskin, 2011; Carayol, Loding, and Serre, 2018; Doyen, 2022), which consider algorithmic solutions for computing the existence of winning strategies and winning distributions for two-player stochastic games with imperfect information. Finally, Gurov, Goranko, and Lundberg (2022) investigate the problem of strategy synthesis for knowledge-based strategies against a non-deterministic environment. ## 2 Preliminaries In this paper, we fix finite non-empty sets of agents \(\mathsf{Ag}\), actions \(\mathsf{Ac}\), and atomic propositions \(\mathsf{AP}\). We write \(\mathbf{o}\) for a tuple of objects \((o_{a})_{a\in\mathsf{Ag}}\), one for each agent, and such tuples are called _profiles_. A _joint action_ or _move_ \(\mathbf{c}\) is an element of \(\mathsf{Ac}^{\mathsf{Ag}}\). Given a profile \(\mathbf{o}\) and \(C\subseteq\mathsf{Ag}\), we let \(o_{C}\) be the components of agents in \(C\), and \(\mathbf{o}_{-C}\) is \((o_{b})_{b\not\in C}\). Similarly, we let \(\mathsf{Ag}_{-C}=\mathsf{Ag}\setminus C\). Distributions.Let \(X\) be a finite non-empty set. A _(probability) distribution_ over \(X\) is a function \(\mathsf{d}:X\rightarrow[0,1]\) such that \(\sum_{x\in X}\mathsf{d}(x)=1\), and \(\text{Dist}(X)\) is the set of distributions over \(X\). We write \(x\in\mathsf{d}\) for \(\mathsf{d}(x)>0\). If \(\mathsf{d}(x)=1\) for some element \(x\in X\), then \(\mathsf{d}\) is a _point (a.k.a. Dirac) distribution_. If, for \(i\in I\), \(\mathsf{d}_{i}\) is a distribution over \(X_{i}\), then, writing \(X=\prod_{i\in I}X_{i}\), the _product distribution_ of the \(\mathsf{d}_{i}\) is the distribution \(\mathsf{d}:X\rightarrow[0,1]\) defined by \(\mathsf{d}(x)=\prod_{i\in I}\mathsf{d}_{i}(x_{i})\). Markov Chains.A _Markov chain_ \(M\) is a tuple \((St,p)\) where \(St\) is a set of states and \(p\in\text{Dist}(St\times St)\) is a distribution. The values \(p(s,t)\) are called _transition probabilities_ of \(M\). A _path_ is an infinite sequence of states. Concurrent Game Structures.A _stochastic concurrent game structure with imperfect information_ (or simply _CGS_) is a tuple \((St,\mathsf{L},\delta,\ell,\{\sim_{a}\}_{a\in\mathsf{Ag}})\) where (i) \(St\) is a finite non-empty set of _states_; (ii) \(\mathsf{L}:St\times\mathsf{Ag}\to 2^{\mathsf{Ac}}\setminus\{\emptyset\}\) is a _legality function_ defining the available actions for each agent in each state, and we write \(\mathsf{L}(\mathbf{s})\) for the tuple \((\mathsf{L}(s,a))_{a\in\mathsf{Ag}}\); (iii) for each state \(s\in St\) and each move \(\mathbf{c}\in\mathsf{L}(\mathbf{s})\), the _stochastic transition function_ \(\delta\) gives the (conditional) probability \(\delta(s,\mathbf{c})(s^{\prime})\) of a transition from state \(s\) to each \(s^{\prime}\in St\) when each player \(a\in\mathsf{Ag}\) plays the action \(\mathbf{c}_{a}\); in other words, \(\delta(s,\mathbf{c})\) is a probability distribution on \(St\); (iv) \(\ell:St\rightarrow{2^{\mathsf{AP}}}\) is a _labelling function_; (v) \(\sim_{a}\subseteq St\times St\) is an equivalence relation called the _observation relation_ of agent \(a\). 
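To make the components of this definition concrete, the following sketch encodes a tiny stochastic CGS with imperfect information in Python; the state names, actions, and probabilities form a toy example of our own and are not taken from the paper.

```python
from itertools import product

# A toy two-agent stochastic CGS with imperfect information (illustrative only).
# States: "s0", "s1"; agents: "a1", "a2"; actions: "y" (accept), "n" (reject).
States = ["s0", "s1"]
Agents = ["a1", "a2"]

# Legality function L(s, a): available actions per agent per state.
L = {(s, a): ["y", "n"] for s in States for a in Agents}

# Stochastic transition function delta(s, c): a distribution over successor states.
delta = {
    ("s0", ("y", "y")): {"s1": 1.0},
    ("s0", ("y", "n")): {"s0": 0.4, "s1": 0.6},
    ("s0", ("n", "y")): {"s0": 0.4, "s1": 0.6},
    ("s0", ("n", "n")): {"s0": 1.0},
    **{("s1", c): {"s1": 1.0} for c in product("yn", repeat=2)},
}

# Labelling and observation relations (here: agent a2 cannot tell s0 from s1).
label = {"s0": set(), "s1": {"selected"}}
obs = {"a1": {("s0", "s0"), ("s1", "s1")},
       "a2": {(s, t) for s in States for t in States}}

# Sanity checks: every delta(s, c) is a distribution, and the CGS is uniform.
assert all(abs(sum(d.values()) - 1.0) < 1e-9 for d in delta.values())
assert all(L[(s, a)] == L[(t, a)] for a in Agents for (s, t) in obs[a])
```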
Throughout this paper, we assume that the CGS is uniform, that is, if two states are indistinguishable for an agent \(a\), then \(a\) has the same available actions in both states. Formally, if \(s\sim_{a}s^{\prime}\) then \(\mathsf{L}(s,a)=\mathsf{L}(s^{\prime},a)\), for any \(s,s^{\prime}\in St\) and \(a\in\mathsf{Ag}\). For each state \(s\in St\) and joint action \(\mathbf{c}\in\prod_{a\in\mathsf{Ag}}\mathsf{L}(s,a)\), we also assume that there is a state \(s^{\prime}\in St\) such that \(\delta(s,\mathbf{c})(s^{\prime})\) is non-zero, that is, every state has a successor state from a legal move. We say that \(\mathcal{G}\) is _deterministic_ (instead of stochastic) if every \(\delta(s,\mathbf{c})\) is a point distribution. Plays.A _play_ or path in a CGS \(\mathcal{G}\) is an infinite sequence \(\pi=s_{0}s_{1}\cdots\) of states such that there exists a sequence \(\mathbf{c}_{0}\mathbf{c}_{1}\cdots\) of joint-actions such that \(\mathbf{c}_{i}\in\mathsf{L}(s_{i})\) and \(s_{i+1}\in\delta(s_{i},\mathbf{c}_{i})\) (i.e., \(\delta(s_{i},\mathbf{c}_{i})(s_{i+1})>0\)) for every \(i\geq 0\). We write \(\pi_{i}\) for \(s_{i}\), \(\pi_{\geq i}\) for the suffix of \(\pi\) starting at position \(i\). Finite paths are called _histories_, and the set of all histories is denoted Hist. Write \(\text{last}(h)\) for the last state of a history \(h\). Strategies.A (general) _probabilistic strategy_ is a function \(\sigma:\text{Hist}\rightarrow\text{Dist}(\text{Ac})\) that maps each history to a distribution of actions. We let _Str_ be the set of all strategies. A _memoryless uniform probabilistic strategy_ for an agent \(a\) is a function \(\sigma_{a}:St\rightarrow\text{Dist}(\text{Ac})\) in which for all positions \(s,s^{\prime}\) such that \(s\sim_{a}s^{\prime}\), we have \(\sigma_{a}(s)=\sigma_{a}(s^{\prime})\). We let \(\text{Str}^{r}_{a}\) be the set of uniform strategies for agent \(a\). A deterministic (or _pure_) strategy \(\sigma\) is a strategy in which \(\sigma(s)\) is a point distribution for any \(s\). A _strategy profile_ is a tuple \(\boldsymbol{\sigma}\) of strategies, one for each agent. We write \(\sigma_{a}\) for the strategy of \(a\) in the strategy profile \(\boldsymbol{\sigma}\). For a strategy \(\sigma_{a}\) for agent \(a\), we assume that \(\sigma_{a}(h)(c)=0\) if \(c\not\in\text{L}(\text{last}(h),a)\). ## 3 Probabilistic \(\mathsf{ATL}\) and \(\mathsf{ATL}^{*}\) We begin by introducing the Probabilistic Alternating-Time Temporal Logics \(\mathsf{PATL}^{*}\) and \(\mathsf{PATL}\). The syntax of \(\mathsf{PATL}^{*}\) is defined by the grammar \[\varphi::=p\mid\varphi\vee\varphi\mid\neg\varphi\mid\mathbf{X}\varphi\mid\varphi\mathbf{U}\varphi\mid\langle\!\langle C\rangle\!\rangle^{\bowtie\!d}\varphi\] where \(p\in\text{AP}\), \(C\subseteq\text{Ag}\), \(d\) is a rational constant in \([0,1]\), and \(\bowtie\,\in\{\leq,<,>,\geq\}\). The intuitive reading of the operators is as follows: \(\langle\!\langle C\rangle\!\rangle^{\bowtie\!d}\varphi\) means that there exists a strategy for the coalition \(C\) to collaboratively enforce \(\varphi\) with a probability in relation \(\bowtie\) with constant \(d\); "next" \(\mathbf{X}\) and "until" \(\mathbf{U}\) are the standard temporal operators. We make use of the usual syntactic sugar \(\mathbf{F}\varphi:=\top\mathbf{U}\varphi\) and \(\mathbf{G}\varphi:=\neg\mathbf{F}\neg\varphi\) for temporal operators. 
Finally, we use \([\!C]^{\bowtie\!d}\varphi:=\neg\langle\!\langle C\rangle\!\rangle^{\bowtie\!d}\neg\varphi\) to express that no strategy of \(C\) can prevent \(\varphi\) with a probability in relation \(\bowtie\) with constant \(d\). A \(\mathsf{PATL}^{*}\) formula of the form \(\langle\!\langle C\rangle\!\rangle^{\bowtie\!d}\varphi\) or \([\!C]^{\bowtie\!d}\varphi\) is also called a state formula. An important syntactic restriction of \(\mathsf{PATL}^{*}\), namely \(\mathsf{PATL}\), is defined as follows. The syntax of \(\mathsf{PATL}\) is defined by the grammar \[\varphi::=p\mid\varphi\vee\varphi\mid\neg\varphi\mid\langle\!\langle C\rangle\!\rangle^{\bowtie\!d}\mathbf{X}\varphi\mid\langle\!\langle C\rangle\!\rangle^{\bowtie\!d}(\varphi\mathbf{U}\varphi)\] where \(p\in\text{AP}\), \(C\subseteq\text{Ag}\), and \(\bowtie\,\in\{\leq,<,>,\geq\}\). Formulas of \(\mathsf{PATL}\) and \(\mathsf{PATL}^{*}\) are interpreted over CGSs. Probability Space on Outcomes.An _outcome_ of a strategy profile \(\boldsymbol{\sigma}\) and a state \(s\) is a play \(\pi\) that starts with \(s\) and is extended by \(\boldsymbol{\sigma}\), i.e., \(\pi_{0}=s\), and for every \(k\geq 0\) there exists \(\boldsymbol{c}_{k}\in\boldsymbol{\sigma}(\pi_{k})\) such that \(\pi_{k+1}\in\delta(\pi_{k},\boldsymbol{c}_{k})\). The set of outcomes of a strategy profile \(\boldsymbol{\sigma}\) and state \(s\) is denoted \(Out(\boldsymbol{\sigma},s)\). A given system \(\mathcal{G}\), strategy profile \(\boldsymbol{\sigma}\), and state \(s\) induce an infinite-state Markov chain \(M_{\boldsymbol{\sigma},s}\) whose states are the finite prefixes of plays in \(Out(\boldsymbol{\sigma},s)\). Such finite prefixes of plays are called _histories_ and written \(h\), and we let \(\text{last}(h)\) denote the last state in \(h\). Transition probabilities in \(M_{\boldsymbol{\sigma},s}\) are defined as \(p(h,hs^{\prime})=\sum_{\boldsymbol{c}\in\mathsf{Ac}^{\mathsf{Ag}}}\boldsymbol{\sigma}(h)(\boldsymbol{c})\times\delta(\text{last}(h),\boldsymbol{c})(s^{\prime})\). The Markov chain \(M_{\boldsymbol{\sigma},s}\) induces a canonical probability space on its set of infinite paths (Kemeny, Snell, and Knapp, 1976), which can be identified with the set of plays in \(Out(\boldsymbol{\sigma},s)\), and the corresponding measure is denoted \(out(\boldsymbol{\sigma},s)\). 1 Footnote 1: This is a classic construction, see for instance (Clarke et al., 2018; Berthon et al., 2020). Given a coalition strategy \(\boldsymbol{\sigma_{C}}\in\prod_{a\in C}\text{Str}^{r}_{a}\), we let \(n=|\text{Ag}\setminus C|\) and define the set of possible outcomes of \(\boldsymbol{\sigma_{C}}\) from a state \(s\in St\) to be the set \(out_{C}(\boldsymbol{\sigma_{C}},s)=\{out((\boldsymbol{\sigma_{C}},\boldsymbol{\sigma_{-C}}),s):\boldsymbol{\sigma_{-C}}\in\text{Str}^{n}\}\) of probability measures that the players in \(C\) enforce when they follow the strategy \(\boldsymbol{\sigma_{C}}\), namely, for each \(a\in C\), player \(a\) follows strategy \(\sigma_{a}\). We use \(\mu^{\boldsymbol{\sigma_{C}}}_{s}\) to range over \(out_{C}(\boldsymbol{\sigma_{C}},s)\). 
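As a concrete illustration of the induced chain, the sketch below computes the one-step transition probabilities \(p(h,hs^{\prime})\) for a profile of memoryless probabilistic strategies on a toy transition function in the style of the CGS sketch above; the helper names and numbers are our own.

```python
from itertools import product

# p(h, h s') = sum_{c in Ac^Ag} sigma(h)(c) * delta(last(h), c)(s');
# for memoryless strategies only last(h) matters. Toy delta as in the CGS sketch.
delta = {
    ("s0", ("y", "y")): {"s1": 1.0},
    ("s0", ("y", "n")): {"s0": 0.4, "s1": 0.6},
    ("s0", ("n", "y")): {"s0": 0.4, "s1": 0.6},
    ("s0", ("n", "n")): {"s0": 1.0},
}

def step_distribution(state, sigma, agents=("a1", "a2")):
    dist = {}
    supports = [sigma[a][state].items() for a in agents]
    for combo in product(*supports):
        joint = tuple(act for act, _ in combo)
        weight = 1.0
        for _, prob in combo:
            weight *= prob                      # product of per-agent probabilities
        for succ, p in delta[(state, joint)].items():
            dist[succ] = dist.get(succ, 0.0) + weight * p
    return dist

# Example: a1 plays "y" surely, a2 mixes "y"/"n" evenly.
sigma = {"a1": {"s0": {"y": 1.0}}, "a2": {"s0": {"y": 0.5, "n": 0.5}}}
print(step_distribution("s0", sigma))   # {'s1': 0.8, 's0': 0.2}
```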
\(\mathsf{PATL}\) and \(\mathsf{PATL}^{*}\) Semantics\(\mathsf{PATL}\) and \(\mathsf{PATL}^{*}\) formulas are interpreted in a transition system \(\mathcal{G}\) and a path \(\pi\), \[\mathcal{G},\pi\models p \text{iff }p\in\ell(\pi_{0})\] \[\mathcal{G},\pi\models\neg\varphi \text{iff }\mathcal{G},\pi\not\models\varphi\] \[\mathcal{G},\pi\models\varphi_{1}\vee\varphi_{2} \text{iff }\mathcal{G},\pi\models\varphi_{1}\text{ or }\mathcal{G},\pi\models\varphi_{2}\] \[\mathcal{G},\pi\models\langle\!\langle C\rangle\!\rangle^{\bowtie\!d}\varphi \text{iff }\exists\boldsymbol{\sigma_{C}}\in\prod_{a\in C}\text{Str}^{r}_{a}\text{ such that}\] \[\forall\mu^{\boldsymbol{\sigma_{C}}}_{\pi_{0}}\in out_{C}(\boldsymbol{\sigma_{C}},\pi_{0}),\] \[\mu^{\boldsymbol{\sigma_{C}}}_{\pi_{0}}(\{\pi^{\prime}:\mathcal{G},\pi^{\prime}\models\varphi\})\bowtie\!d\] \[\mathcal{G},\pi\models\mathbf{X}\varphi \text{iff }\mathcal{G},\pi_{\geq 1}\models\varphi\] \[\mathcal{G},\pi\models\psi_{1}\mathbf{U}\psi_{2} \text{iff }\exists k\geq 0\text{ s.t. }\mathcal{G},\pi_{\geq k}\models\psi_{2}\text{ and}\] \[\forall j\in[0,k)\text{ }\mathcal{G},\pi_{\geq j}\models\psi_{1}\] ## 4 Strategic Reasoning under Uncertainty Many real-life scenarios require agents to interact in partially observable environments with stochastic phenomena. A natural application of strategic reasoning over both of these sources of uncertainty is card games, as the distribution of cards is a stochastic event and the hand of each agent is kept secret from the other players. Let us see a more detailed example based on online mechanism design2 and, in particular, elections. While the majority of elections have a static set of candidates which is known upfront, there are contexts where candidates appear over time. A classic example is hiring a committee: the candidates that will appear the next day to pass an interview are unknown, and the voters must decide immediately whether to hire one of the current candidates or not (Do et al., 2022). Footnote 2: Previous work (Maubert et al., 2021; Mittelmann et al., 2022, 2023) has shown how to encode notions from Mechanism Design (e.g., strategyproofness) using logics for strategic reasoning. In online approval-based elections (Do et al., 2022), there is a non-empty set of candidates \(C=\{1,...,m\}\) and the goal is to select \(k\geq 1\) candidates for a committee. In each state, an unseen candidate \(j\) is presented and the agents vote on whether to include the current candidate in the committee or not. The election continues until the committee is completed or all candidates have been rejected. For a candidate \(j\), we let the propositions \(rejected_{j}\), \(selected_{j}\), \(interview_{j}\) denote whether candidate \(j\) was already rejected, whether she was selected to the committee, and whether she is currently being interviewed, resp. For each agent \(a\), \(likes_{a,j}\) denotes whether \(a\) is currently willing to approve the candidate \(j\). Agents know their own preferences, that is, the candidates they like, but are uncertain about others' preferences. Voters can distinguish the candidate currently interviewed, but are unaware of the next candidate to be presented (i.e., whether \(\mathbf{X}interview_{j}\) holds in any given state). In each state \(s\), agents can either accept or reject the current candidate (actions \(y\) and \(n\), resp.). 
The probability of candidate \(j\) being selected is determined by the transition function \(\delta(s,\mathbf{c})\), according to the actions in \(\mathbf{c}\). If all agents accept (similarly, reject) a candidate, the system transitions to a state in which the candidate is selected (resp. rejected) with a probability equal to one. If there is no consensus on whether to accept the candidate, the probability to transition to a state in which the candidate is selected is given by a rational constant \(p_{j,\mathbf{c}}\in(0,1)\). Similarly, the probability of moving to a state where she is rejected is \(1-p_{j,\mathbf{c}}\). The \(\mathsf{PATL}\) formula \[rejected_{j}\rightarrow\neg\langle\!\langle C\rangle\!\rangle^{\geq 1}\mathbf{F}selected_{j}\] represents that the coalition \(C\) cannot select a candidate that was already rejected. The \(\mathsf{PATL}^{*}\) formula \[\langle\!\langle C\rangle\!\rangle^{\geq\frac{1}{2}}\bigwedge_{a\in C}\bigvee_{j\in C}likes_{a,j}\wedge\mathbf{F}selected_{j}\] represents that the coalition \(C\) can ensure, with probability greater than or equal to \(\frac{1}{2}\), to select in the future at least one candidate liked by each agent in \(C\), while \[\langle\!\langle C\rangle\!\rangle^{\geq\frac{1}{2}}\bigwedge_{a\in C}\bigwedge_{j\in C}likes_{a,j}\wedge\mathbf{F}selected_{j}\] states that they can ensure, with probability greater than or equal to \(\frac{1}{2}\), that all their liked candidates are eventually selected. The formula \[interview_{j}\rightarrow\langle\!\langle C\rangle\!\rangle^{\leq\frac{1}{4}}\mathbf{X}selected_{j}\] says that the probability the coalition \(C\) ensures the currently interviewed candidate is selected in the next state is at most \(\frac{1}{4}\). ## 5 Model Checking Complexity In this section, we look at the complexity of model-checking for \(\mathsf{PATL}\). In particular, we show that the problem for _memoryless deterministic strategies of the coalition_ against probabilistic play of the other agents and a stochastic environment is no more complex than in the standard (non-probabilistic) case. The settings introduced in this paper include both deterministic and probabilistic memoryless strategies for the coalition and deterministic and stochastic CGSs. This gives 4 semantic variants in total, but the case of deterministic strategies and deterministic CGSs consists of the standard setting for \(\mathsf{ATL}\), whose complexity results are well-established. The main technical result of this paper is as follows. _Theorem 1_.: Model checking \(\mathsf{PATL}_{\mathrm{ir}}\)3 with deterministic strategies for the coalition is \(\mathbf{\Delta_{2}^{\mathrm{P}}}\)-complete. Footnote 3: As usual in the verification literature, we denote no recall with r and imperfect information with i. Proof.: The lower bound follows by a reduction of \(\mathsf{ATL}_{\mathrm{ir}}\) model checking, which is \(\mathbf{\Delta_{2}^{\mathrm{P}}}\)-hard (Jamroga and Dix, 2006). Given are: a pointed CGS \((M,q)\) and a formula \(\langle\!\langle C\rangle\!\rangle\varphi\) of \(\mathsf{ATL}_{\mathrm{ir}}\). Note that \(M\) can be seen as a stochastic CGS with only Dirac probability distributions for transitions. Recall that, in finite games, the opponents always have a deterministic best-response strategy to any given strategy \(\sigma_{C}\). 
Thus, \(M,q\models_{\mathsf{ATL}_{\mathrm{ir}}}\langle\!\langle C\rangle\!\rangle\varphi\) iff the agents in \(C\) have a uniform deterministic memoryless strategy to enforce \(\varphi\) on all paths iff they have such a strategy against all the probabilistic responses from \(\overline{C}\). Since the set of best responses includes deterministic strategies of \(\overline{C}\) played against deterministic strategy \(\sigma_{C}\) in the deterministic CGS \(M\), this is equivalent to saying that \(M,q\models_{\mathsf{PATL}_{\mathrm{ir}}}\langle\!\langle C\rangle\!\rangle^{\geq 1}\varphi\), which completes the reduction. For the upper bound, we apply a similar procedure to that of \(\mathsf{ATL}_{\mathrm{ir}}\)(Schobbens, 2004). For formulas of type \(\langle\!\langle C\rangle\!\rangle^{\bowtie\!d}\varphi\) without nested strategic modalities, we guess a strategy \(\sigma_{C}\), prune the model accordingly, and merge the remaining agents (\(\overline{C}\)) into a single opponent. This yields a single-agent Markov Decision Process with full observability. Then, we check the Probabilistic Computation Tree Logic formula \(A^{\bowtie\!d}\varphi\), which can be done in time \(\mathbf{NP}\cap\mathbf{co}\text{-}\mathbf{NP}\)(Chen and Lu, 2007). For nested strategic modalities, we proceed recursively (bottom up), which runs in time \(\mathbf{P}^{\mathbf{NP}\cap\mathbf{co}\text{-}\mathbf{NP}}=\mathbf{\Delta_{2}^{\mathrm{P}}}\). 
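To make the upper-bound procedure concrete, the sketch below brute-forces it for the special case \(\langle\!\langle C\rangle\!\rangle^{\geq d}\mathbf{F}\,goal\) on a small explicit CGS: it enumerates the coalition's uniform deterministic memoryless strategies and, for each, computes by value iteration the worst-case (adversarial) probability of reaching the goal in the induced MDP. This is an illustrative toy of our own, not the paper's implementation; the oracle guess of Theorem 1 is replaced by plain enumeration, which is exponential but fine for tiny models.

```python
from itertools import product

def check_coalition_reach(states, acts_C, acts_O, delta, obs_class, goal, d, iters=200):
    """Brute-force check of <<C>>^{>= d} F goal (toy version of the Thm 1 procedure).

    delta[(s, aC, aO)] -> {s': prob}; obs_class[s] is s's observation class for
    the coalition (uniform strategies must agree within a class)."""
    classes = sorted({obs_class[s] for s in states})
    # Enumerate uniform deterministic memoryless coalition strategies:
    # one coalition action per observation class.
    for choice in product(acts_C, repeat=len(classes)):
        strat = dict(zip(classes, choice))
        # Value iteration for the MINIMAL reachability probability in the MDP
        # obtained by fixing the coalition strategy (opponents merged, adversarial).
        v = {s: 1.0 if s in goal else 0.0 for s in states}
        for _ in range(iters):
            v = {s: 1.0 if s in goal else
                 min(sum(p * v[t] for t, p in delta[(s, strat[obs_class[s]], aO)].items())
                     for aO in acts_O)
                 for s in states}
        if v["s0"] >= d:          # check from the initial state
            return True, strat
    return False, None

# Tiny example: coalition action "y"/"n", one opponent "y"/"n", goal = {"s1"}.
delta = {("s0", "y", "y"): {"s1": 1.0},
         ("s0", "y", "n"): {"s0": 0.4, "s1": 0.6},
         ("s0", "n", "y"): {"s0": 0.4, "s1": 0.6},
         ("s0", "n", "n"): {"s0": 1.0},
         **{("s1", a, b): {"s1": 1.0} for a in "yn" for b in "yn"}}
ok, strat = check_coalition_reach(["s0", "s1"], "yn", "yn", delta,
                                  {"s0": 0, "s1": 0}, {"s1"}, d=0.9)
print(ok, strat)   # True with the strategy that always plays "y"
```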
## 6 Discussion This paper analyses the verification of the strategic abilities of autonomous agents in MAS while accounting for both incomplete information and probabilistic behaviours of the environment and agents. The setting considered in this paper is significant as MAS are often set in partially observable environments, whose evolution might not be known with certainty, but can be measured based on experiments and past observations. Verification of strategic abilities in the general setting with perfect recall is known to be undecidable, but the restriction to memoryless strategies is meaningful. We provided complexity results for deterministic strategies for the proponent coalition and point out different settings that are currently challenging open questions, based on probabilistic strategies for the proponent coalition. For solving the model checking problem w.r.t. probabilistic strategies for the proponent coalition, notice that it is not possible to exploit the technique used in Section 5 for deterministic strategies, i.e., calling an oracle that guesses the successful memoryless strategy. This is because there are _infinitely many_ probabilistic memoryless strategies, and hence the oracle Turing machine would either have to run in unbounded time, or allow for infinite branching. In fact, the synthesis of optimal probabilistic strategies is a special case of _jointly constrained bilinear optimization_, which is a notoriously hard problem (Al-Khayyal, 1990). Additionally, techniques employed for partially observable Markov decision processes (see for instance (Vlassis, Littman, and Barber, 2012)) cannot be easily adapted as they refer to single-agent abilities. Moreover, the work on Probabilistic Alternating \(\mu\)-calculus (Song et al., 2019) seems unhelpful in our case. First, it is known that Probabilistic Alternating \(\mu\)-calculus and \(\mathsf{PATL}\) are incomparable (Bulling and Jamroga, 2011; Song et al., 2019). Second, the work (Song et al., 2019) only considers perfect information strategies. Finally, using the work on PSL (Aminof et al., 2019) does not seem to be the right direction either. Indeed, it only considers perfect information strategies. Additionally, the model checking problem for PSL is 3-EXPTIME-complete, while we expect a much lower complexity in our setting. ## Acknowledgments This research has been supported by the PRIN project RIPER (No. 20203FFYLK), the PNRR MUR project PE0000013-FAIR, the InDAM project "Strategic Reasoning in Mechanism Design", the EU ICT-48 2020 project TAILOR (No. 952215), the NCBR Poland/FNR Luxembourg projects STV (POLLUX-VII/1/2019 and C18/IS/12685695/IS/STV/Ryan), SpaceVote (POLLUX-XI/14/SpaceVote/2023 and C22/IS/17232062/SpaceVote) and PABLO (C21/IS/16326754/PABLO), as well as the EU H2020 Marie Sklodowska-Curie project with grant agreement No 101105549.
2309.02048
Probabilistic Self-supervised Learning via Scoring Rules Minimization
In this paper, we propose a novel probabilistic self-supervised learning method via Scoring Rule Minimization (ProSMIN), which leverages the power of probabilistic models to enhance representation quality and mitigate collapsing representations. Our proposed approach involves two neural networks: the online network and the target network, which collaborate and learn the diverse distribution of representations from each other through knowledge distillation. By presenting the input samples in two augmented formats, the online network is trained to predict the target network representation of the same sample under a different augmented view. The two networks are trained via our new loss function based on proper scoring rules. We provide a theoretical justification for ProSMIN's convergence, demonstrating the strict propriety of its modified scoring rule. This insight validates the method's optimization process and contributes to its robustness and effectiveness in improving representation quality. We evaluate our probabilistic model on various downstream tasks, such as in-distribution generalization, out-of-distribution detection, dataset corruption, low-shot learning, and transfer learning. Our method achieves superior accuracy and calibration, surpassing the self-supervised baseline in a wide range of experiments on large-scale datasets like ImageNet-O and ImageNet-C, demonstrating its scalability and real-world applicability.
Amirhossein Vahidi, Simon Schoßer, Lisa Wimmer, Yawei Li, Bernd Bischl, Eyke Hüllermeier, Mina Rezaei
2023-09-05T08:48:25Z
http://arxiv.org/abs/2309.02048v1
# Probabilistic Self-supervised Learning via Scoring Rules Minimization ###### Abstract In this paper, we propose a novel probabilistic self-supervised learning method via Scoring Rule Minimization (ProSMIN), which leverages the power of probabilistic models to enhance representation quality and mitigate collapsing representations. Our proposed approach involves two neural networks: the online network and the target network, which collaborate and learn the diverse distribution of representations from each other through knowledge distillation. By presenting the input samples in two augmented formats, the online network is trained to predict the target network representation of the same sample under a different augmented view. The two networks are trained via our new loss function based on proper scoring rules. We provide a theoretical justification for ProSMIN's convergence, demonstrating the strict propriety of its modified scoring rule. This insight validates the method's optimization process and contributes to its robustness and effectiveness in improving representation quality. We evaluate our probabilistic model on various downstream tasks, such as in-distribution generalization, out-of-distribution detection, dataset corruption, low-shot learning, and transfer learning. Our method achieves superior accuracy and calibration, surpassing the self-supervised baseline in a wide range of experiments on large-scale datasets like ImageNet-O and ImageNet-C, demonstrating its scalability and real-world applicability. ## 1 Introduction In the field of machine learning, probabilistic approaches have emerged as a powerful toolkit, offering advantages such as uncertainty quantification, robustness, and flexibility, especially in domains with inherent complexity and uncertainty Dutordoir et al. (2021); Peharz et al. (2020); Ghahramani (2015). While deterministic models have their merits, a crucial frontier lies in extending the benefits of probabilistic methods to self-supervised learning (SSL) for representation reliability. Representation reliability provides a more comprehensive understanding of the underlying data, which is particularly valuable in domains where prediction errors can have serious consequences, such as disease diagnosis Ozdemir et al. (2019); Rezaei et al. (2022), climate prediction Gneiting and Raftery (2007), computational finance Dawid and Musio (2014), and economic forecasting Gneiting and Katzfuss (2014). Current SSL methods have made impressive progress, but a notable gap exists in their evaluation of representational reliability through a probabilistic lens. Incorporating probabilistic modeling into SSL training provides a way to not only predict outcomes but also provide distributions of possible predictions along with associated probabilities Ho et al. (2020); Nichol and Dhariwal (2021). This distinctive feature enables SSL models to capture the inherent uncertainties in real-world data and provide a more complete understanding of underlying patterns. Encouragingly, recent advances in SSL, exemplified by methods such as Oquab et al. (2023), have demonstrated the potential to generate effective features across diverse image distributions and tasks without the need for fine-tuning, confirming the viability of this paradigm shift. As the demand for more reliable representations escalates, the integration of probabilistic principles into SSL is well poised to improve representation quality, align predictions with real-world complexity, and thereby advance the frontiers of machine learning. 
In this paper, we present ProSMIN, a novel probabilistic formulation of self-supervised learning that contributes to the understanding and improvement of probabilistic modeling in this domain. Our proposed method involves two deep neural networks, an online and a target network, each learning a different representation of input samples. The online network maps input samples to a probability distribution inferred from the representation of the online encoder part. We train the online network in such a manner that samples from its output distribution predict the target network's representation on a second augmented view of the input. The loss realized by this prediction is expressed via a modified scoring rule, which incentivizes the recovery of the true distribution. Our contributions are: * We introduce a novel probabilistic definition of representation reliability for self-supervised learning. The probabilistic definition provides a deeper understanding of the quality and trustworthiness of the learned representations in guiding subsequent tasks. To the best of our knowledge, this is the first comprehensive study to investigate a probabilistic formulation in the self-supervised representation domain. * Our probabilistic approach effectively mitigates collapsing representations by encouraging the online and target networks to explore a diverse range of representations, thus avoiding convergence to a limited set, which results in a more comprehensive representation space that better encapsulates the intricacies of the data distribution. * We provide a rigorous theoretical foundation for the convergence of our proposed algorithm. Specifically, we demonstrate the strict propriety of our modified scoring rule, substantiating the convergence of the optimization process. This theoretical insight not only underscores the robustness of our approach but also provides a principled explanation for its effectiveness in improving representation quality. * Through extensive empirical analysis, we validate the effectiveness of our approach in diverse scenarios. Our method achieves competitive predictive performance and calibration on various tasks such as in-distribution (IND), out-of-distribution (OOD), and corrupted datasets, demonstrating generalization capabilities. Moreover, we demonstrate the superiority of our method in semi-supervised and low-shot learning scenarios. Additionally, our framework establishes a superior trade-off between predictive performance and robustness when compared to deterministic baselines. This outcome is particularly notable on large-scale datasets such as ImageNet-O and ImageNet-C, underscoring the scalability and effectiveness of our method in real-world, high-dimensional settings. ## 2 Background and Related work Self-supervised methods are designed to tackle unsupervised problems by training on a _pretext task_ that utilizes the data itself to generate labels, effectively employing supervised methods to solve unsupervised problems Grill et al. (2020); Chen et al. (2020); Jang et al. (2023); Caron et al. (2021); Zhou et al. (2021); Zbontar et al. (2021); Bardes et al. (2021); Chen et al. (2021). The resulting representations learned from the pretext task can serve as a foundation for _downstream supervised tasks_, such as image classification or object detection. Alternatively, the extracted representation can be directly utilized for downstream applications, such as detecting anomalies and OOD data Tran et al. (2022). Recent studies Oquab et al. (2023); Zhou et al. 
(2021) provided evidence that performing self-supervised pretext task learning on a large-scale and diverse dataset can extract features that are effective across different image distributions and tasks without the need for fine-tuning. Following this, we introduce a novel probabilistic self-supervised framework aiming to learn robust representation over parameters using _self-distillation_ and by _minimizing the scoring rule_. Self-distillation is a variant of knowledge distillation Hinton et al. (2015) in which a larger model (_teacher_) is used to distill knowledge into a smaller model (_student_) of the same architecture Caron et al. (2021); Zhou et al. (2021). Given an input sample \(\mathbf{x}\), the student network \(f_{\mathbf{\theta}}\) is trained on the soft labels provided by the teacher network \(f_{\mathbf{\xi}}\). Self-distillation combines self-supervised learning with knowledge distillation and was introduced by DINO Caron et al. (2021). The two networks share the same architecture but take different augmentations of the input sample and output different representation vectors. The knowledge is distilled from the teacher network \(f_{\mathbf{\xi}}\) to the student \(f_{\mathbf{\theta}}\) by minimizing the cross-entropy between the respective representation vectors. The parameters of the teacher network \(\mathbf{\xi}\) are obtained from an exponential moving average (EMA) of the parameters of the student network \(\mathbf{\theta}\), thus reducing computational cost by confining backpropagation to the student network. A scoring rule is a function used to evaluate the accuracy of a probabilistic prediction Gneiting and Raftery (2007); Der Kiureghian and Ditlevsen (2009). It quantifies the divergence between the predicted probability distribution and the true distribution of the event. The concept of a scoring rule is fundamental to many areas of machine learning, including probabilistic classification Parry (2016) and decision theory Dawid and Musio (2014). A proper scoring rule Gneiting and Katzfuss (2014) is one that incentivizes truthful reporting of the probabilities by the forecaster, i.e., the forecaster is incentivized to report the correct probability distribution Pacchiardi et al. (2021). The use of proper scoring rules has had significant implications in areas such as online learning V'yugin and Trunov (2019), generative neural networks Pacchiardi and Dutta (2022), and uncertainty quantification Bengs et al. (2023); Gruber and Buettner (2022). This paper utilizes scoring rules and adapts them for the purpose of pretext task learning of a self-supervised framework. This adaptation is inspired by the endeavor to infuse probabilistic learning principles into the self-supervised learning domain. ## 3 Problem formulation Avoiding collapsing representationOne of the major challenges in self-supervised learning is collapsing representations, where learned representations converge to a limited set of points in the representation space. In other words, the model fails to capture the full diversity and richness of the underlying data distribution. This can lead to reduced representation quality, impaired generalization to new tasks or domains, and limited capacity to handle variations in the data. Most recent studies addressed this problem with contrastive learning through effective augmentation Chen et al. (2020), negative sample strategies Wang et al. (2021), ensemble approach Vahidi et al. (2023), regularization techniques Rezaei et al. 
(2023), and self-distillation Caron et al. (2021). In this paper, we formulate the collapsing representation problem through a probabilistic lens, aiming to provide a comprehensive and nuanced solution that not only addresses the limitations of deterministic approaches but also harnesses the power of uncertainty quantification and broader representation distributions. This pivotal shift offers promising avenues for achieving representation reliability and superior generalization capabilities in self-supervised learning scenarios. Scoring rulesA scoring rule Gneiting and Raftery (2007) is a function that evaluates how well a predicted distribution \(P\) over a random variable \(\mathbf{X}\) aligns with the actually observed realizations \(\mathbf{x}\) of \(\mathbf{X}\). We define the loss1 of predicting distribution \(P\) while observing \(\mathbf{x}\) as \(S(P,\mathbf{x})\). Assuming that \(\mathbf{X}\) follows some true distribution \(Q\), the _expected_ scoring rule measuring the loss of predicting \(P\) can be expressed as \(S(P,Q)\triangleq\mathbb{E}_{\mathbf{X}\sim Q}S(P,\mathbf{X})\). A scoring rule \(S\) is _proper_ with respect to a set of distributions \(\mathcal{P}\) if for all \(P,Q\in\mathcal{P}\) it holds that \(S(Q,Q)\leq S(P,Q)\), thus incentivizing the prediction of the true distribution \(Q\). If equality holds only for \(P=Q\), i.e., the expected score \(S(P,Q)\) is uniquely minimized in \(P\) at \(P=Q\), then the scoring rule is called _strictly proper_. In practice, the expectation with respect to \(Q\) is usually replaced by an empirical mean over a finite amount of samples. We will denote the resulting scoring rule as \(\hat{S}\). There exist many types of scoring rules, including entire parameterized families of strictly proper scoring rules. We provide more details on some of them that we use in our experiments in Appendix 9. Footnote 1: The original proposal in Gneiting and Raftery (2007) defines scoring rules in terms of a gain that is to be maximized. We adhere to the convention in deep learning of expressing the objective via a loss function that we seek to minimize. ## 4 Method Consider a randomly sampled mini-batch of training data \(\mathbf{X}\triangleq[\mathbf{x}_{1},\dots,\mathbf{x}_{N}]^{\top}\in\mathbb{R}^{N\times D}\) and transformation functions \(\tau,\tau^{\prime}\) acting on the data. To enhance the training process, the transformation functions produce two augmented views \(\tilde{\mathbf{x}}\triangleq\tau(\mathbf{x})\) and \(\tilde{\mathbf{x}}^{\prime}\triangleq\tau^{\prime}(\mathbf{x})\) for each sample in \(\mathbf{X}\). These augmented views are generated by sampling \(\tau,\tau^{\prime}\) from a distribution of suitable data transformations, such as partially masking image patches (He et al., 2022) or applying image augmentation techniques (Chen et al., 2020). The first augmented view \(\tilde{\mathbf{x}}\) is fed to the encoder of the online network \(f_{\mathbf{\theta}}\) that outputs a _representation_ \(y_{\mathbf{\theta}}\triangleq f_{\mathbf{\theta}}(\tilde{\mathbf{x}})\). It is subsequently passed through a projector \(g_{\mathbf{\theta}}\) and a predictor \(q_{\mathbf{\theta}}\), such that \(t_{\mathbf{\theta}}\triangleq g_{\mathbf{\theta}}(y_{\mathbf{\theta}})\) and \(z_{\mathbf{\theta}}\triangleq q_{\mathbf{\theta}}(t_{\mathbf{\theta}})\). We collect all trainable parameters of the online network in \(\mathbf{\theta}\). 
Similarly, the encoder of the target network \(f_{\mathbf{\xi}}\) takes the second augmented view and outputs \(y_{\mathbf{\xi}}\triangleq f_{\mathbf{\xi}}(\tilde{\mathbf{x}}^{\prime})\), followed by the projector network \(g_{\mathbf{\xi}}\) producing \(t_{\mathbf{\xi}}\triangleq g_{\mathbf{\xi}}(y_{\mathbf{\xi}})\), where the trainable parameters are denoted as \(\mathbf{\xi}\). It is important to note that the predictor is applied exclusively to the online network. In order to introduce probabilistic self-distillation training, we employ a scoring rule Gneiting and Raftery (2007) as our loss function. To accomplish this, it is necessary to generate samples from the online network. One way of producing samples from a neural network architecture, while still enabling backpropagation with respect to the latent representation \(\mathbf{z}\), is to use the reparametrization trick Kingma et al. (2015). Here, we assume that the output of the predictor network \(z_{\mathbf{\theta}}=q_{\mathbf{\theta}}(t_{\mathbf{\theta}})\) follows an underlying normal distribution with mean \(\mu\) and standard deviation \(\sigma\). We generate \(r\in\mathbb{N}\) samples from the output of the linear layers following the prediction head by sampling random noise \(\epsilon_{j}^{i}\sim N(0,1)\) for each augmented view of the \(i\)-th data point, such that the \(j\)-th sample, with \(j\in\{1,...,r\}\), is given by: \(\mathbf{z}_{j}^{i}=\mu+\sigma\odot\epsilon_{j}^{i}\). We thus obtain samples \(\mathbf{z}_{j}^{i}\) by shifting and scaling the random noise samples \(\epsilon_{j}^{i}\) by the outputs \((\mu,\sigma)\) of a neural network with trainable parameters \(\mathbf{\theta}\), and the loss incurred by these samples can be backpropagated to update \(\mathbf{\theta}\) during training. We update the online network parameters by minimizing the scoring rule as follows: \[\hat{\mathbf{\theta}}:=\operatorname*{arg\,min}_{\mathbf{\theta}}J(\mathbf{\theta}),\quad J(\mathbf{\theta})=\hat{S}(P_{\mathbf{\theta}},P_{\mathbf{\xi}}):=\mathbb{E}_{z_{\mathbf{\xi}}\sim P_{\mathbf{\xi}}}\big[S(P_{\mathbf{\theta}},z_{\mathbf{\xi}})\big], \tag{1}\] where \(P_{\mathbf{\xi}}\) denotes the target distribution, \(z_{\mathbf{\xi}}\) denotes the target output, and \(P_{\mathbf{\theta}}\) represents the multivariate normal distribution induced by the online network. Our custom scoring rule loss over \(P_{\mathbf{\theta}}\) and \(P_{\mathbf{\xi}}\) is defined as \[\hat{S}(P_{\mathbf{\theta}},P_{\mathbf{\xi}})=\frac{1}{N}\sum_{i=1}^{N}\left|\left[\frac{2\lambda}{r}\sum_{j=1}^{r}\|\mathbf{z}_{j}^{i}-\mathbf{z}_{\xi}^{i}\|_{2}^{\beta}-\frac{1-\lambda}{r(r-1)}\sum_{j\neq k}\|\mathbf{z}_{j}^{i}-\mathbf{z}_{k}^{i}\|_{2}^{\beta}\right]\right|, \tag{2}\] where \(\mathbf{z}_{\mathbf{\xi}}^{i}\) represents the target prediction for the \(i\)-th input sample. \(\beta\in(0,2)\) and \(\lambda\in(0,1)\) are hyperparameters. Detailed information is provided in Appendix 9.5.6. Figure 1: Illustration of our proposed architecture (ProSMin). Given a batch \(\mathbf{X}\) of input samples, two different augmented samples \(\tilde{\mathbf{x}}\) and \(\tilde{\mathbf{x}}^{\prime}\) are taken by an online network with \(\mathbf{\theta}\) and a target network with \(\mathbf{\xi}\) parameters, respectively. Our objective is to minimize the proposed scoring rule between \(P_{\mathbf{\theta}}\) and \(P_{\mathbf{\xi}}\). By the principle of knowledge distillation, the parameters of the target network are updated through the EMA of the weights from the online network Grill et al. 
(2020); Caron et al. (2021), saving the need for backpropagation and thus reducing computation time considerably. \[\mathbf{\xi}_{t}=\alpha\mathbf{\theta}_{t}+(1-\alpha)\mathbf{\xi}_{t-1},\quad t=1,2,\dots,\quad\alpha\in[0,1] \tag{3}\] The initial weights \(\mathbf{\xi}_{0}\) are obtained through random initialization. Next, we explain the details of our objective function based on scoring rules in Section 4.2. We refer to Section 9 for the background and different variations of scoring rules. Then, we show that, by utilizing a strictly proper scoring rule and retaining the ability to calculate gradients as usual (as proven in 9.5.4), we can infer that our algorithm will converge towards the desired minimum (see Section 4.3). ### Avoiding collapse There are different methods in self-supervised learning to avoid collapse, such as adding batch normalization Grill et al. (2020), a prediction layer Grill et al. (2020), and sharpening and centering Caron et al. (2021). Our framework can benefit from more layers in the online network and centering on the target outputs to improve the results. Still, our method inherently cannot collapse to the same representation because of its probabilistic nature. Furthermore, our loss function converges to zero without collapsing. ### Objective function Many scoring rules decompose into two terms, one of which is a function of two samples \(\mathbf{z}_{j},\mathbf{z}_{k}\) from the predicted distribution \(P\), while the other is a function of \(\mathbf{z}_{j}\) and the realized observation \(\mathbf{z_{\xi}}\). Our objective function is an adjusted version of the former, where the two components are posed as a convex combination with component weights controlled via hyperparameter \(\lambda\in(0,0.5]\). This adjustment of the scoring rules can be useful to adjust the focus of the loss function on either part of the scoring rule. The hyperparameter \(\lambda\) helps to improve the performance by shifting the focus of the loss function. Two notable examples of scoring rules adhering to this form are the _energy score_ and the _kernel score_ Gneiting and Raftery (2007). With the above notation, we define the energy score as: \[S_{\mathrm{E}}(P_{\mathbf{\theta}},\mathbf{z_{\xi}})=2\cdot\mathbb{E}_{P_{\mathbf{\theta}}}\left[\left\|\mathbf{z}_{j}-\mathbf{z_{\xi}}\right\|_{2}^{\beta}\right]-\mathbb{E}_{P_{\mathbf{\theta}}}\left[\left\|\mathbf{z}_{j}-\mathbf{z}_{k}\right\|_{2}^{\beta}\right]=:S_{\mathrm{E}}^{1}(P_{\mathbf{\theta}},\mathbf{z_{\xi}})+S_{\mathrm{E}}^{2}(P_{\mathbf{\theta}},\mathbf{z_{\xi}}) \tag{4}\] Analogously, we write the kernel score as: \[S_{\mathrm{K}}(P_{\mathbf{\theta}},\mathbf{z_{\xi}})=\mathbb{E}_{P_{\mathbf{\theta}}}\left[k(\mathbf{z}_{j},\mathbf{z}_{k})\right]-2\cdot\mathbb{E}_{P_{\mathbf{\theta}}}\left[k(\mathbf{z}_{j},\mathbf{z_{\xi}})\right]=:S_{\mathrm{K}}^{2}(P_{\mathbf{\theta}},\mathbf{z_{\xi}})+S_{\mathrm{K}}^{1}(P_{\mathbf{\theta}},\mathbf{z_{\xi}}), \tag{5}\] with a suitable kernel function \(k(\cdot,\cdot)\). For simplicity, we only write \(S(P_{\mathbf{\theta}},\mathbf{z_{\xi}})=S^{1}(P_{\mathbf{\theta}},\mathbf{z_{\xi}})+S^{2}(P_{\mathbf{\theta}},\mathbf{z_{\xi}})\) in the following for both scores. With \(\lambda\in(0,1)\), we define the general form of our objective function as follows: \[S^{*}(P_{\mathbf{\theta}},\mathbf{z_{\xi}}):=\left|\lambda S^{1}(P_{\mathbf{\theta}},\mathbf{z_{\xi}})+(1-\lambda)S^{2}(P_{\mathbf{\theta}},\mathbf{z_{\xi}})\right|. 
\tag{6}\] The proposed objective function is strictly proper in our setup for both the energy and the kernel score; we provide a proof in Section 4.3 below. ### Theoretical justification This section provides the mathematical justification of our algorithm. We establish that the custom loss is strictly proper in our setup. In subsection 9.5.4, we demonstrate that the expectation and gradient of our theoretical derivative can be interchanged, thereby enabling us to derive the gradients as usual. Due to the use of samples within the scoring rule as well as the use of non-differentiable activation functions, this step becomes necessary. This is followed by subsection 9.5.5, where we present an unbiased estimate of the gradient. By undertaking these three steps, we ensure that our algorithm converges effectively. **Strict propriety of the proposed objective function** Let \(\lambda\in(0,1)\). We only show the proof for the energy score; the proof for the kernel score can be done analogously. We need to show that \[S^{*}(P_{\mathbf{\xi}},P_{\mathbf{\xi}})<S^{*}(P_{\mathbf{\theta}},P_{\mathbf{\xi}})\quad\forall P_{\mathbf{\theta}},P_{\mathbf{\xi}}\in\mathcal{P}\quad s.t.\quad P_{\mathbf{\theta}}\neq P_{\mathbf{\xi}} \tag{7}\] We define the following based on Eq. 1: \[S^{*}(P_{\mathbf{\theta}},P_{\mathbf{\xi}}) =\left|\mathbb{E}_{\mathbf{z}_{\mathbf{\xi}}^{i}\sim P_{\mathbf{\xi}}}S^{*}(P_{\mathbf{\theta}},\mathbf{z}_{\mathbf{\xi}}^{i})\right|\] \[=\left|\mathbb{E}_{\mathbf{z}_{\mathbf{\xi}}^{i}\sim P_{\mathbf{\xi}}}\left[\lambda\mathbb{E}_{\mathbf{z}_{j}^{i}\sim P_{\mathbf{\theta}}}\left[2\|\mathbf{z}_{j}^{i}-\mathbf{z}_{\mathbf{\xi}}^{i}\|_{2}^{\beta}\right]-(1-\lambda)\mathbb{E}_{\mathbf{z}_{j}^{i},\mathbf{z}_{k}^{i}\sim P_{\mathbf{\theta}}}\left[\left\|\mathbf{z}_{j}^{i}-\mathbf{z}_{k}^{i}\right\|_{2}^{\beta}\right]\right]\right|\] \[>\left|\mathbb{E}_{\mathbf{z}_{\mathbf{\xi}}^{i}\sim P_{\mathbf{\xi}}}\left[\lambda\mathbb{E}_{\mathbf{z}_{j}^{i}\sim P_{\mathbf{\xi}}}\left[2\|\mathbf{z}_{j}^{i}-\mathbf{z}_{\mathbf{\xi}}^{i}\|_{2}^{\beta}\right]-(1-\lambda)\mathbb{E}_{\mathbf{z}_{j}^{i},\mathbf{z}_{k}^{i}\sim P_{\mathbf{\xi}}}\left[\left\|\mathbf{z}_{j}^{i}-\mathbf{z}_{k}^{i}\right\|_{2}^{\beta}\right]\right]\right|\] \[=S^{*}(P_{\mathbf{\xi}},P_{\mathbf{\xi}})=0\] \(S^{*}(P_{\mathbf{\xi}},P_{\mathbf{\xi}})\) evaluates to zero and hence is smaller than \(S^{*}(P_{\mathbf{\theta}},P_{\mathbf{\xi}})\) for every \(P_{\mathbf{\theta}}\neq P_{\mathbf{\xi}}\): for a specific observation \(\mathbf{x}_{i}\), there is only a single target prediction \(\mathbf{z}_{\mathbf{\xi}}\), so that \(\mathbf{z}_{\mathbf{\xi}}\) essentially follows a Dirac distribution. This concludes the proof. By utilizing a strictly proper scoring rule and retaining the ability to calculate gradients as usual (as proven in Appendix 9.5.4), we can infer that our algorithm will converge towards the desired minimum. ## 5 Implementation details and experimental setup **Image augmentation** We define a random transformation function \(\mathbf{T}\) that applies a combination of multi-crop, horizontal flip, color jittering, and grayscale. Similar to Caron et al. (2021), we perform multi-crops with a random size from \(0.8\) to \(1.0\) of the original area and a random aspect ratio from \(3/4\) to \(4/3\) of the original aspect ratio. We use color-jittering parameters of \((0.8,0.8,0.8,0.2)\), and Gaussian blurring with \(0.5\) probability and \(\zeta=(0.1,2.0)\).
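To make the training step concrete, the following is a minimal PyTorch-style sketch of the reparametrized sampling \(\mathbf{z}_{j}^{i}=\mu+\sigma\odot\epsilon_{j}^{i}\), the adjusted energy-score objective of Eqs. (2) and (6), and the EMA update of Eq. (3). The tensor shapes, function names, and default values are our own assumptions for illustration, not the released implementation.

```python
import torch

def prosmin_loss(mu, sigma, z_target, r=4, lam=0.5, beta=1.0):
    """Empirical adjusted energy score, Eqs. (2)/(6); mu, sigma, z_target are (N, K)."""
    eps = torch.randn(r, *mu.shape, device=mu.device)      # eps_j^i ~ N(0, 1)
    z = mu.unsqueeze(0) + sigma.unsqueeze(0) * eps         # reparametrized samples z_j^i
    zt = z_target.detach().unsqueeze(0)                    # stop-gradient on the target branch

    # Confirmation term: mean over samples of ||z_j - z_xi||^beta
    s1 = (z - zt).norm(dim=-1).pow(beta).mean(dim=0)       # shape (N,)

    # Diversity term: mean over distinct sample pairs of ||z_j - z_k||^beta
    pair = (z.unsqueeze(0) - z.unsqueeze(1)).norm(dim=-1).pow(beta)  # (r, r, N)
    s2 = pair.sum(dim=(0, 1)) / (r * (r - 1))              # diagonal entries are zero

    return (2.0 * lam * s1 - (1.0 - lam) * s2).abs().mean()

@torch.no_grad()
def ema_update(online, target, alpha):
    """Eq. (3): xi_t = alpha * theta_t + (1 - alpha) * xi_{t-1}."""
    for p_t, p_o in zip(target.parameters(), online.parameters()):
        p_t.mul_(1.0 - alpha).add_(p_o, alpha=alpha)
```

Note that only the online branch receives gradients; the target projection enters the loss as a constant, matching the self-distillation setup described above.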
**Deep self-supervised network architecture** The _online neural network_ is constructed from a backbone \(f\), which can be either ViT Dosovitskiy et al. (2021) or ResNet He et al. (2016), and a projection head \(g\) followed by a prediction head \(q\). The output of the backbone \(f\) is used as the feature representation for downstream tasks. The projection consists of a 3-layer multilayer perceptron (MLP) with a hidden dimension of 2048, followed by \(\ell_{2}\) normalization and a weight-normalized fully connected layer with \(K\) dimensions, similar to the design used in the DINO projection head. We use a two-layer MLP predictor with a hidden dimension of 12000 and a GELU nonlinearity in between. Notably, ViT architectures do not use batch normalization (BN) by default, so we do not use BN in the projection or prediction heads when applying ViT. The _target network_ has the same backbone and projection as the online network, and it learns through self-distillation. Similar to DINO Caron et al. (2021), after the online network parameters are updated, an EMA of the online parameters (i.e., a momentum encoder) is used to update the target parameters. The EMA prevents the target parameters from updating too quickly. After each parameter update, the target also receives a new centering parameter. **Optimization** Our pretraining process involves training the models on the ImageNet training dataset Deng et al. (2009) using the _AdamW_ optimizer Loshchilov and Hutter (2017) and a batch size of 512, distributed across 8 Nvidia Tesla A100 GPUs with the ViT-S/16 architecture. We adopt a linear scaling rule to determine the base value of the learning rate, which is ramped up linearly during the first 30 epochs. Specifically, the learning rate is set to \(lr=0.0005*\text{batchsize}/256\). After the warmup phase, we decay the learning rate using a cosine schedule Loshchilov and Hutter (2017). The weight decay also follows a cosine schedule, increasing from \(0.04\) to \(0.4\). The centering (smoothing) parameter is \(0.9\). **Datasets** The datasets utilized in our experiments are as follows: The **ImageNet** Deng et al. (2009) dataset contains 1,000 classes, with 1.28 million training images and 50,000 validation images of size \(256\times 256\). The **ImageNet-O** dataset Srivastava et al. (2022) comprises images belonging to classes that are not present in the ImageNet-1k dataset. It is considered a challenging benchmark for evaluating the robustness of models, as it requires models to generalize to a diverse range of visual conditions and handle variations that are not typically encountered in standard training datasets. **ImageNet-C** Hendrycks and Dietterich (2019) is a benchmark for evaluating the robustness of models against common corruptions and perturbations that can occur in real-world scenarios. It consists of more than 30,000 images derived from the ImageNet dataset, with each image being corrupted in one of 15 different ways, including noise, blur, weather conditions, and digital artifacts. **CIFAR-10/100** Krizhevsky (2009) are subsets of the tiny images dataset. Both datasets include 50,000 training images and 10,000 validation images of size \(32\times 32\), with 10 and 100 classes, respectively. The **Oxford 102 Flower** dataset Nilsback and Zisserman (2008) consists of 102 flower categories, each class including between 40 and 258 images. The images have large scale, pose, and light variations.
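As a reference point for the head design described above, here is a minimal sketch using standard PyTorch modules. The output dimension `k_dim` and the split into separate \((\mu,\sigma)\) linear layers after the predictor are illustrative assumptions; the text only states that samples are drawn from the output of the linear layers following the prediction head.

```python
import torch.nn as nn
import torch.nn.functional as F

class ProjectionHead(nn.Module):
    """3-layer MLP (hidden 2048) -> l2 normalization -> weight-normalized linear."""
    def __init__(self, in_dim, hidden=2048, bottleneck=256, k_dim=65536):  # k_dim assumed
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.GELU(),
            nn.Linear(hidden, hidden), nn.GELU(),
            nn.Linear(hidden, bottleneck),
        )
        self.last = nn.utils.weight_norm(nn.Linear(bottleneck, k_dim, bias=False))

    def forward(self, x):
        return self.last(F.normalize(self.mlp(x), dim=-1, p=2))

class PredictionHead(nn.Module):
    """Two-layer MLP (hidden 12000, GELU); trailing linear layers emit (mu, sigma)."""
    def __init__(self, k_dim, hidden=12000):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(k_dim, hidden), nn.GELU(), nn.Linear(hidden, k_dim))
        self.to_mu = nn.Linear(k_dim, k_dim)         # assumed: separate linear heads
        self.to_log_sigma = nn.Linear(k_dim, k_dim)  # for the Gaussian parameters

    def forward(self, x):
        h = self.net(x)
        return self.to_mu(h), self.to_log_sigma(h).exp()  # exp keeps sigma positive
```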
In addition, there are categories with large variations within the category and several very similar categories. **iNaturalist-2018** Van Horn et al. (2018) (iNat) comprises a vast collection of 675,170 training and validation images, classified into 5,089 distinct fine-grained categories found in the natural world. It is worth noting that the iNat dataset exhibits a significant imbalance, as the number of images varies greatly across different categories. **Tasks** We evaluate the performance of ProSMin representations after self-supervised pretraining on ImageNet on the basis of **In-Domain (IND) generalization**, **OOD detection**, **semi-supervised learning**, **corrupted dataset evaluation**, as well as **transfer learning to other datasets and tasks**. **Evaluation metrics** We report the prediction performance with the following metrics: **Top-1 accuracy** \(\uparrow\): refers to the proportion of test observations that are correctly predicted by the model's output as belonging to the correct class. **AUROC** \(\uparrow\): the area under the ROC curve represents the relationship between false-positive and false-negative rates for various classification thresholds. In this case, the positive and negative classes refer to whether an observation is in or out of a given distribution, respectively, and the ROC curve is plotted as the threshold for classifying an observation as "positive" is gradually increased. **Negative log-likelihood (NLL)** \(\downarrow\): the probability of observing the given test data under the estimated model parameters, multiplied by \(-1\). This measure quantifies the degree to which the model's estimated parameters fit the test observations. **Expected calibration error (ECE)** \(\downarrow\) (Naeini et al., 2015): calculated as the mean absolute difference between the accuracy and confidence of the model's predictions, where confidence is defined as the highest posterior probability among the predicted classes. The difference is calculated across equally-spaced confidence intervals or bins and is weighted by the relative number of samples in each bin. A lower value of ECE indicates better calibration of the model. **mean Calibration Error (mCE)** \(\downarrow\): a metric used to evaluate the calibration of a classification model, similar to ECE. It is calculated as the mean of the absolute differences between the predicted and true probabilities of a given class across all classes. A lower value of mCE indicates better calibration of the model. ## 6 Results and discussion **In-distribution generalization** IND generalization (or _linear evaluation_) measures how well the frozen representation supports classification on data drawn from the training distribution. To assess and compare the predictive abilities of our proposed model on in-distribution datasets, we freeze the encoder of the online network, denoted as \(f_{\mathbf{\theta}}\), after performing unsupervised pretraining. Then, we train a supervised linear classifier, a fully connected layer followed by softmax, which is placed on top of \(f_{\mathbf{\theta}}\) after removing the projection and prediction networks. The desired outcome is high predictive scores and low calibration errors. In Table 1, a comprehensive comparison is presented between our approach and other self-supervised methods. The results indicate that our method, pretrained for only 300 epochs, attains the highest accuracy while also outperforming all others in terms of calibration, as evidenced by the lowest Expected Calibration Error (ECE) and Negative Log-Likelihood (NLL) scores.
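For concreteness, the ECE described above can be estimated with the standard binned estimator below; the bin count and the half-open binning convention are our assumptions.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=15):
    """Binned ECE: |accuracy - confidence| per bin, weighted by bin occupancy.

    confidences : (N,) max softmax probability per prediction
    correct     : (N,) boolean array, prediction == label
    """
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece, n = 0.0, len(confidences)
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            acc = correct[mask].mean()        # accuracy within this confidence bin
            conf = confidences[mask].mean()   # average confidence within the bin
            ece += (mask.sum() / n) * abs(acc - conf)
    return ece
```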
**Out-of-distribution detection** The ability of a model to recognize test samples from classes that were not present during training is evaluated using OOD detection, as discussed in Geng et al. (2020). We conduct experiments on ImageNet-O Srivastava et al. (2022) to assess the generalization of the model from IND to OOD datasets, as well as to predict the uncertainty of the models on OOD datasets. Note that evaluation is performed directly after self-supervised pretraining, without a fine-tuning step. The y-axis of Figure 2 compares the results for the OOD task in terms of AUROC. Remarkably, our method, trained for 300 epochs, exhibits competitive performance comparable to iBOT, which was trained for 800 epochs. This outcome aligns with our expectations, as we directly use the probabilistic latent representation for this task. **Corrupted dataset evaluation** An essential aspect of model robustness is its capacity to produce precise predictions when the test data distribution changes. We examine model robustness in the context of _covariate shift_. Figure 2 presents the resulting performance metrics. Our method achieves better results than the baseline, with comparable predictive performance in terms of mCE. **Semi-supervised and low-shot learning on ImageNet** Following the semi-supervised protocol established in Chen et al. (2020), we employ fixed 1% and 10% splits of labeled training data from ImageNet. In Table 2, we compare our performance against several concurrent models, including the baseline (DINO). Our results demonstrate that our features trained for 300 epochs are on par with DINO. Moreover, we achieve better results for 10% compared to state-of-the-art methods, despite our network being pretrained for significantly less time (300 epochs versus 800 epochs). We assess our model's efficacy on a low-shot image classification task, where we train logistic regression on frozen weights with 1% and 10% labels. It is important to note that this experiment was performed on frozen weights, without fine-tuning. Table 2 shows that our features are on par with state-of-the-art semi-supervised models, even though those methods were trained for 800 epochs. **Transfer learning evaluation** We further assess the generalization capacity of the learned representation when learning a new dataset. We followed the same transfer learning protocol explained in Caron et al. (2021). To this end, we evaluate the performance of our model pretrained on ImageNet Deng et al. (2009) on CIFAR10/100 Krizhevsky (2009), the imbalanced iNaturalist dataset (iNat-18) Van Horn et al. (2018), and the Flower dataset Nilsback and Zisserman (2008). According to the results shown \begin{table} \begin{tabular}{l c c c c} \hline \hline Method & Top-1 Acc (\%) (\(\uparrow\)) & \(\kappa\)-NN (\(\uparrow\)) & NLL (\(\downarrow\)) & ECE (\(\downarrow\)) \\ \hline DINO Caron et al. (2021) & 76.8 & 74.5 & 0.919 & 0.015 \\ MOCO-V3 Chen et al. (2021) & 73.2 & 64.7 & 1.152 & 0.027 \\ i-BOT Zhou et al. (2021) & 77.9 & 75.2 & 0.918 & 0.013 \\ ProSMin & **78.4** & **76.2** & **0.900** & **0.006** \\ \hline \hline \end{tabular} \end{table} Table 1: **IND Generalization (or Linear Evaluation)**: Top-1 accuracy, \(\kappa\)-NN, ECE, and NLL averaged over in-distribution test samples of the **ImageNet** dataset, where the encoder is _ViT-S/16_ over 800 epochs. The best score for each metric is shown in **bold**, and the second-best is underlined.
in Table 3, we observe that our method provides a robust solution when transferring to a new dataset. Figure 2: **OOD detection and corrupted dataset evaluation.** Methods with higher AUROC and lower mCE are better. Our method and iBOT have the best performance across both evaluations. ## 7 Ablation study To gain a deeper understanding of the behavior and performance of our proposed method, we conduct several ablation studies to explore various aspects of our approach. Specifically, we investigate the following factors: different scoring rules as the objective function, the hyperparameter of our loss function (\(\lambda\)), the number of samples used for generating latent representations, the size of the embedding, the effect of the momentum hyperparameter, the impact of BN, and the prediction network (PL). These investigations aim to provide insights and intuition regarding our approach. **Impact of scoring rule** We conduct a series of experiments to explore alternative scoring rules, including kernel scoring rules and various variations of energy scoring rules, for our objective function. The results, as presented in Table 4, demonstrate that the kernel scoring rule exhibits instability: the kernel score becomes negative, so it is no longer a proper scoring rule (see our proof in Section 4.3). However, the energy score with \(\beta=1\) (\(L1\) loss) yields the best performance among the tested scoring rules. We provide a theoretical explanation for \(L1\) and \(L2\) (see Section 9.5.3). **Study of hyperparameters** Figure 3(a) depicts the influence of the hyperparameter \(\lambda\) utilized in our proposed loss function, which controls the impact of \(S^{1}(P_{\theta},\mathbf{z_{\xi}})\) and \(S^{2}(P_{\theta},\mathbf{z_{\xi}})\) on the objective function. \begin{table} \begin{tabular}{c c c c c c} \hline \hline Energy (L1 loss) & Kernel score & Energy (L2 loss) & BN. & PL. & Accuracy \\ \hline ✓ & ✗ & ✗ & ✗ & ✓ & 73.2 \\ ✗ & ✓ & ✗ & ✗ & ✓ & 1.1 \\ ✗ & ✗ & ✓ & ✗ & ✓ & 43 \\ ✓ & ✗ & ✗ & ✓ & ✓ & 68.5 \\ ✓ & ✗ & ✗ & ✗ & ✗ & 71.2 \\ ✓ & ✗ & ✗ & ✗ & ✓ & 71 \\ \hline \hline \end{tabular} \end{table} Table 4: **Important components for training.** We investigate the effect of each component on the linear evaluation performance. The first line shows the best combination. PL. is the prediction layer; BN. is the batch normalization layer. \begin{table} \begin{tabular}{l c c c c} \hline \hline Method & 1\% & 10\% & Architecture & Parameters \\ \hline \hline Semi-supervised & & & & \\ DINO Caron et al. (2021) & 60.3 & 74.3 & ViT-S/16 & 21 \\ i-BOT Zhou et al. (2021) & 61.9 & 75.1 & ViT-S/16 & 21 \\ ProSMin (ours) & **62.1** & **75.6** & ViT-S/16 & 21 \\ \hline \hline Low-shot learning & & & & \\ DINO Caron et al. (2021) & 64.5 & 72.2 & ViT-S/16 & 21 \\ i-BOT Zhou et al. (2021) & 65.9 & 73.4 & ViT-S/16 & 21 \\ ProSMin (ours) & **66.1** & **73.8** & ViT-S/16 & 21 \\ \hline \hline \end{tabular} \end{table} Table 2: **Low-shot and semi-supervised evaluation**: Top-1 accuracy for semi-supervised ImageNet classification using 1% and 10% of the training examples (fine-tuning), and low-shot results with frozen ViT features. \begin{table} \begin{tabular}{l c c c c} \hline \hline Method & CIFAR-10 & CIFAR-100 & iNat-18 & Flowers \\ \hline DINO Caron et al. (2021) & 99.0 & 90.5 & 72 & 98.5 \\ i-BOT Zhou et al.
(2021) & 99.1 & 90.7 & 73.7 & 98.6 \\ ProSMin (ours) & 99.0 & 90.2 & 72.5 & 98.5 \\ \hline \hline \end{tabular} \end{table} Table 3: **Transfer to new dataset evaluation**: Transfer learning by fine-tuning pretrained models on different datasets. We report top-1 accuracy. Self-supervised pretraining with ProSMin transfers better than supervised pretraining. An ablation analysis was conducted to investigate the effect of increasing the number of samples from \(q_{\mathbf{\theta}}\), as illustrated in Figure 3(b). The results demonstrate that employing four samples yields satisfactory performance. Figure 3(c) showcases the outcomes obtained for the knowledge distillation rate. In previous approaches, the exponential moving average parameter was initialized at a value relatively close to 1 (e.g., 0.996 Grill et al. (2020), Caron et al. (2021)). However, in our case, \(\alpha\) commences from 0.9, implying a faster pace of knowledge distillation. Furthermore, we examined the impact of different sizes for the embedding vector, as presented in Fig. 3(d). The results obtained after 100 epochs reveal that increasing the embedding size leads to improved performance. However, it should be noted that larger embedding sizes necessitate additional computational resources, so our choice of size was based on the available computational capacity. Table 4 provides insights into the influence of batch normalization on the prevention of representation collapse Grill et al. (2020). As our framework operates in a probabilistic setting, the inclusion of batch normalization is unnecessary for averting collapse. Additionally, the prediction layer (PL, \(q_{\mathbf{\theta}}\)) enhances performance by facilitating improved feature extraction in the online network. Figure 3: Study of hyperparameters of our proposed ProSMin: (a) \(\lambda\), (b) number of samples, (c) momentum coefficient (\(\alpha\)), and (d) size of embedding, obtained over 100 epochs on the ImageNet dataset. \begin{table} \begin{tabular}{l c c c c c} \hline \hline Method (ViT-S) & parameters (M) & im/s & time / 300 epochs (hr) & number of GPUs & memory (G) \\ \hline DINO Caron et al. (2021) & 21 & 1007 & 72.6 & 16 & 15.4 \\ i-BOT Zhou et al. (2021) & 21 & 1007 & 73.8 & 16 & 19.5 \\ ProSMin (ours) & 21 & 1007 & 98 & 8 & 21.1 \\ \hline \hline \end{tabular} \end{table} Table 5: **Evaluation of computational efficiency.** We conduct a thorough analysis of the computational efficiency of our novel probabilistic approach in comparison to alternative self-supervised methods. This evaluation encompasses memory utilization and computational expenditure. **Analysis of computational cost** We further assess the effectiveness of our proposed approach and compare it with DINO and iBOT in Table 5. The presented values are obtained from the data reported in the DINO and iBOT papers. However, these papers do not include information regarding the total number of parameters. ## 8 Conclusion In this paper, we presented _ProSMin_, a novel probabilistic self-supervised framework that involves two neural networks which collaborate and learn from each other via augmented views. Our framework is trained by minimizing a proposed scoring rule. We provided theoretical justification and showed that our modified loss is strictly proper. We evaluated ProSMin across different tasks, including in-distribution generalization, out-of-distribution detection, dataset corruption, transfer learning, and semi-supervised learning.
The results demonstrate that our method achieves superior performance in terms of accuracy and calibration, thus showing the effectiveness of our proposed approach. **Broader impact and limitations** This study has the potential to inspire new algorithms and stimulate theoretical and experimental exploration. The algorithm presented here can be used for many different probabilistic downstream tasks, including (but not limited to) uncertainty quantification, density estimation, image retrieval, probabilistic unsupervised clustering, program debugging, image generation, music analysis, and ranking. In addition, we believe that our extended probabilistic framework opens many interesting avenues for future development in self-supervised learning and addresses many problems of existing models, such as representation collapse. However, there are several limitations. One limitation of our model compared to other learning methods (such as supervised learning) is that self-supervised learning may require more computational resources and training time. However, considering that our proposed method does not require manual annotation, which is usually very expensive, we would argue that this trade-off is acceptable. In addition, due to limited computational resources, we provide results for fewer epochs compared to other methods. Furthermore, we believe that the results can be improved with extensive hyperparameter optimization.
2308.02361
Probing the magnetic field strength dependence of the Chiral Magnetic Effect
The article presents a study aimed at probing the dependence of the Chiral Magnetic Effect (CME) on the magnetic field strength using the Anomalous Viscous Fluid Dynamics (AVFD) model in Pb--Pb collisions at LHC energies. The results demonstrate the quadratic dependence of the correlators used for the study of the CME in heavy ion collisions on the number of spectators, a proxy of the magnitude of the magnetic field. The article also presents the extension of this approach to a two-dimensional space, formed by the aforementioned proxy of the magnetic field strength and a proxy of the final state ellipticity, a key ingredient of the background in these measurements, for each centrality interval. This provides an exciting possibility for experiments to isolate the background contributions from the potential CME signal.
Panos Christakoglou
2023-08-04T14:52:36Z
http://arxiv.org/abs/2308.02361v3
# Probing the magnetic field strength dependence of the Chiral Magnetic Effect ###### Abstract The article presents a study aimed at probing the dependence of the Chiral Magnetic Effect (CME) on the magnetic field strength using the Anomalous Viscous Fluid Dynamics (AVFD) model in Pb-Pb collisions at LHC energies. The results demonstrate the quadratic dependence of the correlators used for the study of the CME in heavy ion collisions on the number of spectators, a proxy of the magnitude of the magnetic field. The article also presents the extension of this approach to a two-dimensional space, formed by the aforementioned proxy of the magnetic field strength and a proxy of the final state ellipticity, a key ingredient of the background in these measurements, for each centrality interval. This provides an exciting possibility for experiments to isolate the background contributions from the potential CME signal. ## I Introduction The chiral magnetic effect (CME) [1] is the development of an electric current \(\vec{J}\) parallel to an external magnetic field (\(\vec{B}\)), induced by a chirality imbalance between left- and right-handed chiral fermions characterised by a chiral chemical potential \(\mu_{5}\), according to \[\vec{J}=\sigma_{5}\vec{B}. \tag{1}\] In the equation above \(\sigma_{5}\) is the chiral magnetic conductivity, which is proportional to \(\mu_{5}\). This chirality imbalance in theories like quantum chromodynamics (QCD) is connected to transitions between different vacuum states of the theory and is, consequently, a reflection of the breaking of fundamental symmetries such as parity (P) and its combination with charge conjugation (C) [2; 3; 4]. An exciting possibility emerged under the realisation that such effects can be accessed experimentally using heavy ion collisions at ultrarelativistic energies like the ones achieved at the Relativistic Heavy Ion Collider (RHIC) or the Large Hadron Collider (LHC) [5; 6; 7; 8; 9; 10; 11; 12; 13]. These collisions can create extreme conditions of energy density and temperature which exceed the values that lattice-QCD calculations [14; 15; 16] indicate are necessary to reach a state of matter called the Quark Gluon Plasma (QGP) [17], which consists of strongly coupled chiral fermions and gluons [18; 19; 20; 21; 22]. In addition, in non-central heavy ion collisions, i.e. in collisions with large values of impact parameter, the charged nucleons that do not reside in the overlap region, also called "spectators", fly away with large velocities and can generate large magnetic fields. This magnetic field, which can reach magnitudes larger than \(10^{16}\) T [23; 24; 25; 26] at the LHC, decays rapidly with a rate that depends on the electrical conductivity of the QGP, a property of the medium which is unconstrained experimentally. The search for the CME intensified after Voloshin in Ref. [27] proposed a sensitive experimental observable that relies on measuring two-particle azimuthal correlations relative to the reaction plane (\(\Psi_{\rm RP}\)), the plane defined by the impact parameter and the beam axis, according to \[\gamma=\langle\cos(\varphi_{\alpha}+\varphi_{\beta}-2\Psi_{\rm RP})\rangle, \tag{2}\] where \(\alpha\) and \(\beta\) indicate particles with the same or opposite charge.
This expression can probe the first coefficient \(a_{1}\), which quantifies the magnitude of the CME signal; more specifically, it is proportional to correlations between the leading terms for different charge combinations, \(\langle a_{1,\alpha}a_{1,\beta}\rangle\). In parallel, one can also measure the two-particle correlator that has no dependence on the reaction plane, of the form \[\delta=\langle\cos(\varphi_{\alpha}-\varphi_{\beta})\rangle. \tag{3}\] This correlator is still sensitive to the potential signal from \(\langle a_{1,\alpha}a_{1,\beta}\rangle\) but is dominated by background contributions, as discussed and demonstrated in Ref. [26]. The first experimental measurements using this approach were reported by the STAR Collaboration in Au-Au collisions at \(\sqrt{s_{\rm NN}}=0.2\) TeV [28; 29] and were consistent with initial expectations for a charge separation relative to the reaction plane due to the CME. Since then, many more attempts not only at RHIC [30] but also at the LHC [31; 32; 33; 34] reported results that were not able, at least until this point, to identify unambiguously the existence of the CME. One of the main reasons is the fact that these measurements are dominated by background sources, the most prominent of which is local charge conservation or LCC (i.e. the production of oppositely charged particles from a neutral fluid element) in combination with the anisotropic expansion of the QGP. The latter is encapsulated by the phenomenon of anisotropic flow, which is usually quantified by the anisotropic flow coefficients \(v_{n}\) of the Fourier expansion of the azimuthal particle distribution in the final state of a heavy ion collision. After these first results the field moved in two parallel directions: the first focuses on constraining and quantifying the background, while the second modifies some of the components of the signal and looks for relative changes in the measurement. A characteristic example of the first direction is the event shape engineering (ESE) studies that allow selecting events with different magnitudes of ellipticity within the same centrality [35]. This allows one to quantify the dependence of the measured charge dependent differences, \(\Delta\gamma\), on one of the main background components, i.e. \(v_{2}\) or elliptic flow, the second and most dominant flow coefficient. On the other side of the spectrum, the STAR collaboration used two isobar systems [36], namely \({}^{96}_{40}Zr\) and \({}^{96}_{44}Ru\), that are very similar in size, and thus the background contributions to the measurements were expected to be very similar. At the same time, however, the \(Ru\)-nucleus contains 10% more protons than the \(Zr\)-nucleus, which results in a significantly larger magnitude of \(\vec{B}\). This, consequently, is expected to result in a larger contribution to \(\Delta\gamma\) originating from the CME signal, if any, in \(Ru\) than in \(Zr\). In both cases, either using the ESE or different isobars, the results are consistent with no CME contribution and allow for the extraction of upper limits [34; 37]. The study presented in this article explores the dependence of \(\Delta\gamma\) on the magnitude of the magnetic field. In particular, since the first order coefficient \(a_{1}\) is proportional to the value of \(\mu_{5}\) but also to the magnitude of \(\vec{B}\), the correlator of Eq. 2 is expected to have a quadratic dependence on both quantities.
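As an illustration of Eqs. (2) and (3) (our sketch, not code from the analysis): for a single simulated event with a known reaction-plane angle, the correlators and the charge-dependent differences, with the usual convention \(\Delta\gamma=\gamma_{\rm OS}-\gamma_{\rm SS}\) and \(\Delta\delta=\delta_{\rm OS}-\delta_{\rm SS}\), can be computed by a direct pairwise average. Real analyses use event-plane or cumulant methods with efficiency corrections instead.

```python
import numpy as np
from itertools import combinations

def cme_correlators(phi, charge, psi_rp=0.0):
    """Per-event gamma (Eq. 2) and delta (Eq. 3) split into same-sign (SS)
    and opposite-sign (OS) pairs; returns Delta-gamma and Delta-delta.

    phi    : (M,) azimuthal angles of charged particles
    charge : (M,) particle charges (+1 / -1); assumes both pair types occur
    """
    gamma = {"SS": [], "OS": []}
    delta = {"SS": [], "OS": []}
    for a, b in combinations(range(len(phi)), 2):
        key = "SS" if charge[a] == charge[b] else "OS"
        gamma[key].append(np.cos(phi[a] + phi[b] - 2.0 * psi_rp))
        delta[key].append(np.cos(phi[a] - phi[b]))
    d_gamma = np.mean(gamma["OS"]) - np.mean(gamma["SS"])
    d_delta = np.mean(delta["OS"]) - np.mean(delta["SS"])
    return d_gamma, d_delta
```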
Within models the value of the magnetic field can be calculated for each centrality interval and can be connected to the number of spectator nucleons. Furthermore, since within a given centrality interval this latter number is expected to fluctuate from event to event, one can devise a strategy for selecting events with a large or small number of spectators, which would, consequently, result in a large or small magnitude of \(\vec{B}\). The combination of this with the ESE technique could provide a powerful tool to isolate or fix the background contribution while probing in parallel the quadratic dependence of \(\Delta\gamma\) on \(\vec{B}\). Experimentally, triggering on events with different values of spectators within a given centrality can be realised with the energy deposited by these non-interacting nucleons on zero-degree calorimeters that are positioned close to the beam pipe far away from the interaction region. This article presents the strategy for such a study in Pb-Pb collisions at \(\sqrt{s_{\rm NN}}=5.02\) TeV with the Anomalous-Viscous Fluid Dynamics (AVFD) framework [38; 39; 40]. The next section discusses some details about the model and the sample that was analysed, and illustrates the connection between the magnitude of \(\vec{B}\) and the number of spectators as well as the ESE procedure. Section 3 presents the main results and is followed by the summary. ## II Model and analysis details The study was performed over a sample of Pb-Pb collisions at \(\sqrt{s_{\rm NN}}=5.02\) TeV generated with the AVFD model. This state-of-the-art model describes the initial state of the collision using a Glauber prescription, and accounts for the development of the early stage electromagnetic fields as well as for the propagation of anomalous fermion currents. The expanding medium is treated after 0.6 fm/\(c\) with a 2+1 dimensional viscous hydrodynamics (VISH2+1) code using values of shear and bulk viscosities over entropy density of \(\eta/s=0.08\) and \(\zeta/s=0\). Beyond a decoupling energy density of \(\epsilon=0.18\) GeV/fm\({}^{3}\) the system is described by a hadron cascade model (UrQMD) [41]. In addition, the model allows for the inclusion of a non-zero axial current density \(n_{5}\)/s which dictates the imbalance between right- and left-handed fermions induced in the initial stage of each event. This, consequently, leads to a CME signal in the final state. Furthermore, the background contribution in this measurement is controlled by the percentage of positive and negative charged partners emitted from the same fluid element relative to the total multiplicity of the event, referred to from now on as the LCC percentage. Both the value of \(n_{5}\)/s and the LCC percentage are identical to the ones reported in Ref. [26], where the model was tuned to describe the experimental measurements reported at LHC energies. Around 100K events were produced for the centrality interval 10-70%, defined by different impact parameter ranges, in steps of 10%. An additional sample of 1M events for the 50-60% centrality interval was generated to allow for the extension of the analysis using the ESE method. The 0-10% centrality interval was not studied, on one hand for reasons of computing resources and, on the other hand, because the magnetic field, and consequently the CME signal, is expected to be significantly smaller in central than in semi-central and peripheral Pb-Pb collisions.
The analysis is performed in the same kinematic ranges as the experimental measurements, for primary charged particles that are emitted within a pseudorapidity of \(|\eta|<0.8\) and have a transverse momentum of \(0.2<p_{\rm T}<5\) GeV/\(c\). The model gives, in addition, the possibility to calculate but also evolve the value of the magnetic field, which decays with time according to \[B(\tau,x)=\frac{B_{0}}{1+\tau^{2}/\tau_{B}^{2}}, \tag{4}\] where \(\tau_{B}\) is the magnetic field lifetime, which is set, in this work, conservatively to 0.2 fm/\(c\), similarly to what was done in Ref. [26]. At the same time, and for each centrality interval, one can calculate the number of spectator nucleons, N\({}_{\rm spec.}\), from the Glauber model. Figure 1 presents the dependence of the magnitude of the magnetic field on N\({}_{\rm spec.}\), where a clear correlation can be observed. From this plot it also becomes evident that for each centrality interval the number of spectators, and thus the magnitude of the magnetic field, fluctuates from event to event. One can thus define percentiles from the distribution of the number of spectators, e.g. the 25% highest or lowest number of spectators, and map these events to events where the magnetic field is largest or smallest within a given centrality interval. In this work, every centrality interval is split into four subsamples that correspond to different percentiles of the number of spectators, from 0% to 100% with a step of 25%. In parallel, one can also trigger on events where one of the main components that drives the background, the elliptic flow or \(v_{2}\), is small or large within the same centrality. This is done by calculating the magnitude of the second-order reduced flow vector, \(q_{2}\), which is defined according to \[q_{2}=\frac{|Q_{2}|}{\sqrt{M}}, \tag{5}\] where \(Q_{2}=\sqrt{Q_{2,x}^{2}+Q_{2,y}^{2}}\) is the magnitude of the second order harmonic flow vector and \(M\) is the multiplicity. In this work, the vector \(Q_{2}\) is calculated from the azimuthal distribution of primary charged particles emitted in the pseudorapidity range \(-3.7<\eta<-1.7\), thus simulating the acceptance of one of the scintillator counters of ALICE at the LHC, namely the V0C detector [42]. Figure 2 presents the \(q_{2}\) distributions for two indicative centrality intervals, i.e. 30-40% and 50-60%, of Pb-Pb collisions at \(\sqrt{s_{\rm NN}}=5.02\) TeV. These distributions are then split in the relevant percentiles with a 25% step, thus allowing the selection of events that are characterised by different magnitudes of final state ellipticity, ranging from the 25% lowest to the 25% highest \(q_{2}\) values. The combination of triggering on events with different values of the number of spectators, and thus of the magnetic field, and, at the same time, different values of \(q_{2}\) within a given centrality interval forms a two-dimensional space where the contributions from the signal and background components, respectively, can be varied and controlled in an efficient way. Considering also the expectation of a quadratic dependence of \(\Delta\gamma\) on the value of \(\vec{B}\) and the relevant scaling that the same observable has with \(v_{2}\), this strategy creates a powerful tool that could also be used experimentally, with clear and unique expectations from theory that can also be demonstrated using suitable models.
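To make the event-classification procedure explicit, the following is an illustrative sketch under our own naming (not the analysis code): Eq. (4) for the field decay, Eq. (5) for \(q_{2}\) computed from the azimuthal angles of particles in the chosen pseudorapidity window (assumed pre-selected), and the 25%-step percentile classes used for both N\({}_{\rm spec.}\) and \(q_{2}\).

```python
import numpy as np

def magnetic_field(tau, b0, tau_b=0.2):
    """Eq. (4): B(tau) = B0 / (1 + tau^2 / tau_B^2), with tau_B = 0.2 fm/c here."""
    return b0 / (1.0 + (tau / tau_b) ** 2)

def reduced_flow_vector_q2(phi):
    """Eq. (5): q2 = |Q2| / sqrt(M), with Q2 built from second harmonics."""
    q2x = np.sum(np.cos(2.0 * phi))
    q2y = np.sum(np.sin(2.0 * phi))
    return np.hypot(q2x, q2y) / np.sqrt(len(phi))

def percentile_class(values, edges=(25, 50, 75)):
    """Assign each event to one of the four 25% percentile classes of `values`."""
    cuts = np.percentile(values, edges)
    return np.digitize(values, cuts)   # 0, 1, 2, 3 for the four classes
```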
## III Results Figure 3 presents the centrality dependence of \(\Delta\gamma\) and \(\Delta\delta\) in the upper and lower panels, respectively, as obtained from the analysis of the AVFD samples of Pb-Pb collisions at \(\sqrt{s_{\rm NN}}=5.02\) TeV. The red filled star markers correspond to the unbiased results and are compatible with the ones presented in Refs. [22; 26], where it was shown that they describe at the same time both observables. For each centrality interval, this data point is accompanied by the corresponding results for different selections in the number of spectators, from the lowest 25% to the highest 25% of the N\({}_{\rm spec.}\) distribution, with a 25% step. It can be seen that the magnitude of both \(\Delta\gamma\) and \(\Delta\delta\) increases with increasing N\({}_{\rm spec.}\) and, consequently, as discussed in Section II, with the magnitude of the magnetic field. Figure 1: The correlations between the number of spectators and the magnitude of the magnetic field generated in Pb–Pb collisions at \(\sqrt{s_{\rm NN}}=5.02\) TeV as obtained from the tuned AVFD sample (see text for details). Figure 2: The distribution of \(q_{2}\) for two different centrality intervals of Pb–Pb collisions at \(\sqrt{s_{\rm NN}}=5.02\) TeV as obtained from the tuned AVFD sample (see text for details). This increase seems to be of quadratic nature, as expected from the dependence of both correlators on the value of \(\vec{B}\). Similar observations can be made for the rest of the centrality intervals (not shown in this article). In order to further illustrate but also quantify this behavior, Figure 4 presents the dependence of \(\Delta\gamma\) (upper panel) and \(\Delta\delta\) (lower panel) on N\({}_{\rm spec.}\) for one indicative centrality interval, i.e. the 50-60% centrality interval of Pb-Pb collisions at \(\sqrt{s_{\rm NN}}=5.02\) TeV. The data points are fitted with a second order polynomial that yields a significantly non-zero value of the second order coefficient. This unambiguously confirms the expected behavior that arises from the dependence of these two correlators on the term \(\langle a_{1,\alpha}a_{1,\beta}\rangle\). This, in turn, gives rise to the quadratic dependence of both \(\Delta\gamma\) and \(\Delta\delta\) on the value of \(\vec{B}\). Finally, Figure 5 presents \(\Delta\gamma\) and \(\Delta\delta\) in the upper and lower panels, respectively, for events with different percentiles of the distribution of the number of spectators for the 50-60% centrality interval of Pb-Pb collisions at \(\sqrt{s_{\rm NN}}=5.02\) TeV. Each percentile range of N\({}_{\rm spec.}\) contains results for the various \(q_{2}\) selections, ranging from the 25% lowest to the 25% highest \(q_{2}\) values, with a step of 25%. The linear dependence of \(\Delta\gamma\) on \(q_{2}\) and, consequently, on \(v_{2}\) (a major component of the background contribution) is evident. In parallel, the values of \(\Delta\gamma\) increase quadratically as a function of N\({}_{\rm spec.}\) for a fixed \(q_{2}\) interval. At the same time, the value of \(\Delta\delta\) remains constant within uncertainties with increasing \(q_{2}\) interval, as expected.
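The second-order polynomial fit used for Figure 4 amounts to the following (a sketch with placeholder inputs; the actual fit values are not reproduced here):

```python
import numpy as np

def fit_quadratic(n_spec, d_gamma):
    """Fit Delta-gamma(N_spec) = c2 * N_spec^2 + c1 * N_spec + c0.

    A significantly non-zero c2 signals the expected quadratic dependence
    on the magnetic-field proxy; n_spec and d_gamma are illustrative arrays.
    """
    c2, c1, c0 = np.polyfit(n_spec, d_gamma, deg=2)  # highest power first
    return c2, c1, c0
```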
This two-dimensional grid, formed by the proxies of the magnitude of the magnetic field and the final state ellipticity, provides a powerful tool in experiments to disentangle the dominating background contributions in the measurements from the potential CME signal. Figure 3: The centrality dependence of \(\Delta\gamma\) (upper panel) and \(\Delta\delta\) (lower panel) for various percentiles of N\({}_{\rm spec.}\). The results are obtained from the analysis of Pb–Pb collisions at \(\sqrt{s_{\rm NN}}=5.02\) TeV produced with the AVFD model which is tuned to describe the experimental measurements (see text for details). Figure 4: The dependence of \(\Delta\gamma\) (upper panel) and \(\Delta\delta\) (lower panel) on the number of spectators for the 50–60% centrality interval of Pb–Pb collisions at \(\sqrt{s_{\rm NN}}=5.02\) TeV. The data points are fitted with a second order polynomial represented by the red solid line. ## IV Summary In this article, a new way of probing the magnetic field dependence of the Chiral Magnetic Effect is presented using the Anomalous-Viscous Fluid Dynamics framework [38; 39]. The two correlators used regularly in the search for the CME, i.e. \(\Delta\gamma\) and \(\Delta\delta\), were used to analyse samples of AVFD-generated Pb-Pb collisions at \(\sqrt{s_{\rm NN}}=5.02\) TeV. The results demonstrated a quadratic dependence of both correlators with increasing number of spectators, the latter being a proxy for the magnitude of the early stage magnetic field. Finally, the extension of this study to a two-dimensional space, formed by the number of spectators and a proxy of the final state ellipticity for each centrality interval, provides an exciting possibility to isolate experimentally the contributions of the background and the potential CME signal. ###### Acknowledgements. I am grateful to Prof. Jinfeng Liao and Dr. Shuzhe Shi for providing the source code of the model, and for their guidance and feedback during this study. I would also like to thank Prof. Sergei Voloshin for his suggestions, and Prof. Dima Kharzeev for the enlightening discussion. I am also thankful for the stimulating discussions with members of my group, such as Shi Qiu and Jasper Westbroek.
2307.03822
The ${\Bbb Z}_2$ anomaly in some chiral gauge theories
We revisit the simplest Bars-Yankielowicz (BY) model (the $\psi\eta$ model), starting from a model with an additional Dirac pair of fermions in the fundamental representation, together with a complex color-singlet scalar $\phi$ coupled to them through a Yukawa interaction. This model possesses a color-flavor-locked 1-form ${\Bbb Z}_N$ symmetry, due to the intersection of the color $SU(N)$ and two nonanomalous $U(1)$ groups. In the bulk, the model reduces to the $\psi\eta$ model studied earlier when $\phi$ acquires a nonzero vacuum expectation value and the extra fermions pair up, get massive and decouple (thus we will call our extended theory the ``X-ray model''), while it provides a regularization of the $\Bbb Z_2$ fluxes needed to study the $\Bbb Z_2$ anomaly. The anomalies involving the 1-form ${\Bbb Z}_N$ symmetry reduce, for $N$ even, exactly to the mixed ${\Bbb Z}_2$ anomaly found earlier in the $\psi\eta$ model. The present work is a first significant step towards clarifying the meaning of the mixed ${\Bbb Z}_2-[{\Bbb Z}_N^{(1)}]^2$ anomaly found in the $\psi\eta$ and in other BY and Georgi-Glashow type $SU(N)$ models with even $N$.
Stefano Bolognesi, Kenichi Konishi, Andrea Luzio
2023-07-07T20:30:06Z
http://arxiv.org/abs/2307.03822v2
# The \(\mathbb{Z}_{2}\) anomaly in some chiral gauge theories ###### Abstract We revisit the simplest Bars-Yankielowicz (BY) model (the \(\psi\eta\) model), starting from a model with an additional Dirac pair of fermions in the fundamental representation, together with a complex color-singlet scalar \(\phi\) coupled to them through a Yukawa interaction. This model possesses a color-flavor-locked 1-form \(\mathbb{Z}_{N}\) symmetry, due to the intersection of the color \(SU(N)\) and two nonanomalous \(U(1)\) groups. In the bulk, the model reduces to the \(\psi\eta\) model studied earlier when \(\phi\) acquires a nonzero vacuum expectation value and the extra fermions pair up, get massive and decouple (thus we will call our extended theory the "X-ray model"), while it provides a regularization of the \(\mathbb{Z}_{2}\) fluxes needed to study the \(\mathbb{Z}_{2}\) anomaly. The anomalies involving the 1-form \(\mathbb{Z}_{N}\) symmetry reduce, for \(N\) even, exactly to the mixed \(\mathbb{Z}_{2}\) anomaly found earlier in the \(\psi\eta\) model. The present work is a first significant step towards clarifying the meaning of the mixed \(\mathbb{Z}_{2}-[\mathbb{Z}_{N}^{(1)}]^{2}\) anomaly found in the \(\psi\eta\) and in other BY and Georgi-Glashow type \(SU(N)\) models with even \(N\). ###### Contents * 1 Introduction * 2 The model and the color-flavor-locked 1-form \(\mathbb{Z}_{N}\) (center) symmetry * 2.1 Color-flavor locked 1-form \(\mathbb{Z}_{N}\) symmetry * 3 Gauging 1-form \(\mathbb{Z}_{N}\) symmetry: mixed anomalies * 3.1 \(\tilde{A}-\left[B_{c}^{(2)}\right]^{2}\) anomaly * 3.2 \(A_{0}-\left[B_{c}^{(2)}\right]^{2}\) anomaly * 3.2.1 Remarks * 3.3 Chirally symmetric vacuum versus dynamical Higgs phase * 4 Reduction to the \(\psi\eta\) model, \(\mathbb{Z}_{2}\) vortex and the fermion zeromodes * 5 Discussion and Summary * A The confining, symmetric vacua in the extended BY model * B The \(\psi\eta\) model * C The dynamical Higgs phase ## 1 Introduction The dynamics of two wide classes of chiral \(SU(N)\) gauge theories - the so-called Bars-Yankielowicz (BY) and generalized Georgi-Glashow (GG) models [1]-[6] - has been re-examined recently [7, 8, 9], in the light of a gauged color-flavor locked \(\mathbb{Z}_{N}\) 1-form symmetry1 and of the stronger forms of 't Hooft anomaly matching constraints following from that. In particular, certain mixed anomalies involving a \(\mathbb{Z}_{2}\) symmetry were found to imply, in a class of theories with even \(N\),2 that chirally symmetric confining vacua in these models, where the global symmetries in the infrared are saturated by the hypothetical massless composite fermions, were inconsistent. These massless "baryons" reproduce the conventional 't Hooft anomalies but do not match the mixed \(\mathbb{Z}_{2}-[\mathbb{Z}_{N}^{(1)}]^{2}\) anomaly. Footnote 1: From now on, whenever there might be confusion, we will indicate a 1-form symmetry with the apex notation, e.g. the \(\mathbb{Z}_{N}\) 1-form symmetry as \(\mathbb{Z}_{N}^{(1)}\). Footnote 2: More precisely, with even \(N\) and with an even number \(p\) of Dirac pairs of fermions in the fundamental representation [7, 8, 9]. We call this class of models Type I in this note; others will be referred to as Type II. Dynamical Higgs vacua, characterized by color-flavor locked bifermion condensates, are instead found to be compatible with the indications coming from the tighter consistency conditions involving the \(\mathbb{Z}_{2}\) anomaly [7, 8, 9].
An independent argument [10], following from the requirement that the so-called strong anomalies be reproduced correctly in an effective low-energy action in terms of the assumed set of infrared degrees of freedom, provides solid support for the dynamical Higgs scenario. The arguments based on the mixed \(\mathbb{Z}_{2}-[\mathbb{Z}_{N}^{(1)}]^{2}\) anomalies have been put into question in [11]. The problem boils down to the singular nature of the external "\(\mathbb{Z}_{2}\) gauge field" \(A_{2}\), introduced in [7, 8, 9] to construct the color-flavor 1-form \(\mathbb{Z}_{N}\) symmetry, which is due to the intersection3 \(SU(N)\cap\{\mathbb{Z}_{2}\times U(1)_{\psi\eta}\}\). The \(\mathbb{Z}_{2}\) gauge field needs to wind Footnote 3: For definiteness, here we consider the case of the “\(\psi\eta\) model” studied in [7] and in [11], and adopt the notation used there. \[\oint_{L}A_{2}=\frac{2\pi m}{2}\;,\qquad m\in\mathbb{Z}\;, \tag{1.1}\] along a closed loop \(L\), to parametrize the holonomy \[\psi\to-\psi\;,\qquad\eta\to-\eta\;, \tag{1.2}\] and to give the color-flavor-locked 1-form \(\mathbb{Z}_{N}\) symmetry.4 Such a field necessarily contains a singularity (i.e., a singular \(\mathbb{Z}_{2}\) vortex) [7] somewhere inside the closed 2D surface \(\Sigma_{2}\) bounded by \(L\). Footnote 4: We recall that an appropriate \(U(1)_{\psi\eta}\) holonomy [7], together with this \(\mathbb{Z}_{2}\) transformation, leads to a \(\mathbb{Z}_{N}\) transformation of the fermion fields, undoing their \(\mathbb{Z}_{N}\subset SU(N)\) gauge transformations. See Sec. 2.1 for a more detailed discussion. The authors of [11] show that, by choosing instead a (regular, hence legitimate) "\(\mathbb{Z}_{2}\) gauge field" \(A_{2}\) such that (cfr. (1.1)) \[\int_{\Sigma_{2}}dA_{2}=2\pi\;\mathbb{Z}\;, \tag{1.3}\] the flux carried by the \(\mathbb{Z}_{N}\) gauge field \(B_{\rm c}^{(2)}\) becomes \[\int_{\Sigma_{2}}N\,B_{\rm c}^{(2)}=4\pi k\;,\qquad k\in\mathbb{Z}\;, \tag{1.4}\] twice that used in [7], and accordingly the anomalies found there would disappear. However, (1.3) means that such a background \(\mathbb{Z}_{2}\) gauge field corresponds to the trivial holonomy \[\psi\to\psi\;,\qquad\eta\to\eta\;, \tag{1.5}\] i.e., no transformation (the identity element of \(\mathbb{Z}_{2}\)). To grasp the main issue correctly it is indeed necessary to distinguish the concept of the _global_ 1-form \(\mathbb{Z}_{N}\) symmetry from its gauged version. The former, a color-flavor locked \(\mathbb{Z}_{N}\) symmetry, is a generalization of the familiar center symmetry of pure \(SU(N)\) Yang-Mills theory. This symmetry certainly exists in the \(\psi\eta\) and other models studied in [7, 8, 9], but in itself it does not lead to any consistency condition. It is another story if one tries to _gauge_ this 1-form \(\mathbb{Z}_{N}\) symmetry, by introducing the \(\mathbb{Z}_{N}\) gauge field \(B_{\rm c}^{(2)}\) with a proper \(\mathbb{Z}_{N}\) flux (cfr. (1.4)) [12, 13, 14] \[\int_{\Sigma_{2}}N\,B_{\rm c}^{(2)}=2\pi k\,\qquad k\in\mathbb{Z}. \tag{1.6}\] Such a gauging may encounter a topological obstruction (a 't Hooft anomaly). If it does, then there are new, nontrivial UV-IR matching conditions [15]-[36]. This is indeed what was found in [7, 8, 9]. The question is whether the anomalies and their consequences discussed there are to be trusted, in view of the fact that the argument made use of a singular external (non-dynamical) \(A_{2}\) gauge field, (1.1). The present work aims to clarify the sense of the anomalies found in [7, 8, 9].
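To spell out the tension in one line (our explicit check, following (1.1)-(1.3) above), a fermion of unit \(\mathbb{Z}_{2}\) charge picks up the holonomy

```latex
e^{\,i\oint_L A_2} \;=\; e^{\,i\pi m} \;=\; (-1)^m \,, \qquad
\int_{\Sigma_2} \mathrm{d}A_2 \;=\; \oint_{L=\partial\Sigma_2} A_2 \;=\; \pi m \,.
```

For odd \(m\) this realizes the nontrivial transformation (1.2), but the flux \(\pi m\) is then a half-integer multiple of \(2\pi\) and cannot be carried by a regular \(U(1)\) field obeying (1.3): the missing half flux must sit at a singular \(\mathbb{Z}_{2}\) vortex inside \(\Sigma_{2}\). Conversely, regular fields obeying (1.3) have even \(m\) and realize only the trivial holonomy (1.5).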
We start with the simplest BY model ("\(\psi\eta\)" model) with an extra pair of fermions \((q,\tilde{q})\) in the fundamental representation, which acts as a sort of regulator field. When a gauge-invariant, complex scalar field coupled to them through a Yukawa potential term gets a nonvanishing vacuum expectation value (VEV), \(v\), the fermions \(q,\tilde{q}\) get mass and decouple,5 below \(\sim v\). Namely, this extended model (which we call the X-ray model) reduces, below the decoupling mass scale \(v\), to the previously considered \(\psi\eta\) model.6 Footnote 5: Similarly the NGB, although massless, decouples, as it cannot be coupled to the \(\psi\eta\) model through a relevant or marginal operator. Footnote 6: Naturally, we take \(v\) such that \(v\gg\Lambda_{\psi\eta}\), where \(\Lambda_{\psi\eta}\) is the dynamical scale of the \(\psi\eta\) model. This work is organized as follows. In Sec. 2 we introduce the extended model and discuss its symmetries. Before taking into account the scalar VEV, the model is of Type II: the conventional 't Hooft anomaly matching discussion allows a chirally symmetric, confining vacuum as well as a dynamical Higgs phase characterized by certain bifermion condensates. The model reduces to the previously studied \(\psi\eta\) model at mass scales below the scalar VEV, \(v\), where the extra fermions pair up in a Dirac fermion, get massive and decouple. Sec. 3 is dedicated to the gauging of the color-flavor locked 1-form \(\mathbb{Z}_{N}\) symmetry and to the calculation of the consequent mixed anomalies. The generalized anomaly found in the X-ray model, which is free from the subtleties related to the singular \(A_{2}\) field [7], reduces to the \(\mathbb{Z}_{2}-[\mathbb{Z}_{N}^{(1)}]^{2}\) anomaly [7], precisely for even \(N\) (i.e. Type I) theories. In Sec. 4 we discuss a few subtle issues related to the decoupling of the fermions \(q,\tilde{q}\). The summary and conclusion are in Sec. 5. ## 2 The model and the color-flavor-locked 1-form \(\mathbb{Z}_{N}\) (center) symmetry We consider the \(\psi\eta\) model, in which a Dirac pair of fermions in the fundamental representation of \(SU(N)_{\rm c}\), \(q\) and \(\tilde{q}\), is added. In other words, we start with a generalized Bars-Yankielowicz model, with Weyl fermions7 Footnote 7: This model was called the \(\{{\cal S},N,p\}\) model (\(p=1\)) in the classification of [8]. \[\psi^{ij}\,,\quad\eta^{A}_{i}\,,\quad\xi^{i}\,,\qquad(i,j=1,2,\ldots,N\ ;\quad A=1,2,\ldots,N+5)\;, \tag{2.1}\] in the direct-sum representation \[\yng(2)\oplus(N+5)\,\overline{\yng(1)}\oplus\yng(1)\;. \tag{2.2}\] The global symmetry of the model is \[SU(N+5)\times U(1)_{\psi\eta}\times U(1)_{\psi\xi}\;, \tag{2.3}\] where \(U(1)_{\psi\eta}\) and \(U(1)_{\psi\xi}\) are two anomaly-free combinations of the chiral \(U(1)\) symmetries associated with the fermions \(\psi\), \(\eta\) and \(\xi\). We shall rename the fields as \(\eta^{N+5}=\tilde{q}\) and \(\xi=q\) below, so that the matter content is \[\psi^{ij}\,,\quad\eta^{A}_{i}\,,\quad q^{i}\,,\quad\tilde{q}_{i}\;,\qquad(i,j=1,2,\ldots,N\ ;\quad A=1,2,\ldots,N+4)\;. \tag{2.4}\] We furthermore add a color-singlet complex scalar \(\phi\) coupled to the \((q,\tilde{q})\) pair as \[\Delta L=g_{Y}\phi\,q\,\tilde{q}+{\rm h.c.}\;. \tag{2.5}\] The Yukawa coupling (2.5) breaks the global symmetry (2.3) down to \[SU(N+4)\times U(1)_{\psi\eta}\times U(1)_{0}\times\tilde{U}(1)\;, \tag{2.6}\] where the charges are given in Table 1. The Yukawa coupling breaks explicitly part of the global symmetry of the original model, (2.1), (2.2).
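As a quick sanity check of the statement (made below) that \(\tilde{U}(1)\), \(U(1)_{V}\) and \(U(1)_{0}\) are nonanomalous, the \([SU(N)]^{2}\text{-}U(1)\) coefficients can be verified symbolically from the charges of Table 1, using the Dynkin indices \(T(\text{fund})=1/2\) and \(T(\text{symm})=(N+2)/2\). The following script is our own illustration, not from the paper:

```python
import sympy as sp

N = sp.symbols('N', positive=True)
T_F, T_S = sp.Rational(1, 2), (N + 2) / 2   # Dynkin indices: fundamental, symmetric

# (T(R), flavor multiplicity, Q_tilde, Q_V, Q_0) for each Weyl fermion in Table 1
fields = [
    (T_S, 1,      (N + 4) / 2,  0,  1),   # psi
    (T_F, N + 4, -(N + 2) / 2,  0, -1),   # eta
    (T_F, 1,      (N + 2) / 2,  1,  1),   # q
    (T_F, 1,     -(N + 2) / 2, -1,  1),   # q-tilde
]

# [SU(N)]^2 - U(1) anomaly coefficient: sum over fermions of T(R) * mult * charge
for j, label in [(2, 'U(1)-tilde'), (3, 'U(1)_V'), (4, 'U(1)_0')]:
    coeff = sp.simplify(sum(f[0] * f[1] * f[j] for f in fields))
    print(label, '->', coeff)   # all three coefficients simplify to 0
```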
The implications of the conventional 't Hooft anomaly-matching conditions [37], with respect to the unbroken global symmetry, therefore remain the same as in the original generalized Bars-Yankielowicz model, (2.1), (2.2). The model is of Type II: 't Hooft anomaly matching allows both a dynamical Higgs phase (with bifermion condensates) and a confining, chirally symmetric phase (with no condensate formation). See App. A. We assume that the potential for the \(\phi\) field is such that it acquires a nonvanishing VEV, \[\langle\phi\rangle=v\gg\Lambda_{\psi\eta}\;. \tag{2.7}\] The system at mass scales \(\mu\) below \(v\), \[\mu\ll\langle\phi\rangle\;, \tag{2.8}\] reduces exactly to the \(\psi\eta\) model, studied in [7]-[9], as the fermions \(q\) and \(\tilde{q}\) get mass and decouple. The global \(U(1)_{V}\) and \(\tilde{U}(1)\) symmetries remain unbroken; they reduce respectively to the identity \(\mathbb{1}\) and to \(U(1)_{\psi\eta}\) when the fermions \(q\) and \(\tilde{q}\) decouple. The \(U(1)_{0}\) symmetry is broken as \[U(1)_{0}\to\mathbb{Z}_{2}\;, \tag{2.9}\] where \(\mathbb{Z}_{2}\) acts as \[\psi\to-\psi\;,\qquad\eta\to-\eta\;. \tag{2.10}\] We refer to this model as the X-ray theory. Clearly, besides the \(\psi\eta\) model, the breaking of \(U(1)_{0}\) also introduces a massless NGB. However, the NGB cannot couple to the \(\psi\eta\) degrees of freedom through relevant or marginal operators:8 in the limit \(\Lambda\ll v\), the NGB sector decouples. Footnote 8: As \(v\gg\mu\gg\Lambda\), the theory is perturbative, and we can trust this classical dimensional analysis. As the \(U(1)_{0}\) and \(\tilde{U}(1)\) symmetries are free of (strong) anomalies, one may introduce external regular gauge fields, \(A_{0}\) and \(\tilde{A}\), respectively. ### 2.1 Color-flavor locked 1-form \(\mathbb{Z}_{N}\) symmetry As the idea of the color-flavor locked \(\mathbb{Z}_{N}\) 1-form symmetry is central below, let us briefly review it. Let us consider an \(SU(N)\) gauge theory with a set of massless matter Weyl fermions \(\{\psi^{k}\}\). In general, the color \(\mathbb{Z}_{N}^{(1)}\) symmetry is broken by the fermions (unless the fermions present are all in the adjoint representation of \(SU(N)\)). However the situation changes if some global, nonanomalous \(U(1)\) symmetries, \(U(1)_{i}\), \(i=1,2,\ldots\), are present, such that when the \(U(1)_{i}\) are gauged (in the usual sense, by the introduction of external gauge fields \(A_{i}^{\mu}\)), the color \(\mathbb{Z}_{N}\subset SU(N)\) and the \(U(1)_{i}\) transformations can compensate each other for the fermions. This allows one to define a global color-flavor locked \(\mathbb{Z}_{N}^{(1)}\) symmetry. \begin{table} \begin{tabular}{|c|c|c|c|c|c||c|} \hline & \(SU(N)_{\rm c}\) & \(SU(N+4)\) & \(U(1)_{\psi\eta}\) & \(U(1)_{V}\) & \(U(1)_{0}\) & \(\tilde{U}(1)\) \\ \hline \(\psi\) & \(\yng(2)\) & \((\cdot)\) & \(\frac{N+4}{2}\) & \(0\) & \(1\) & \(\frac{N+4}{2}\) \\ \(\eta\) & \(\overline{\yng(1)}\) & \(\yng(1)\) & \(-\frac{N+2}{2}\) & \(0\) & \(-1\) & \(-\frac{N+2}{2}\) \\ \hline \(q\) & \(\yng(1)\) & \((\cdot)\) & \(0\) & \(1\) & \(1\) & \(\frac{N+2}{2}\) \\ \(\tilde{q}\) & \(\overline{\yng(1)}\) & \((\cdot)\) & \(0\) & \(-1\) & \(1\) & \(-\frac{N+2}{2}\) \\ \(\phi\) & \((\cdot)\) & \((\cdot)\) & \(0\) & \(0\) & \(-2\) & \(0\) \\ \hline \end{tabular} \end{table} Table 1: The fields and charges of the \(X\)-ray model with respect to the nonanomalous symmetries. The last symmetry, \(\tilde{U}(1)\), is not linearly independent, but it is particularly useful to define it for our discussion.
The action of a \(\mathbb{Z}_{N}^{(1)}\) generator on Wilson loops that stretch along a non-contractible loop \(L\) is \[SU(N):\ {\cal P}e^{i\oint_{L}a}\to e^{\frac{2\pi i}{N}}{\cal P}e^{i\oint_{L}a}\;,\qquad U(1)_{i}:\ e^{i\oint_{L}A_{i}}\to\left(e^{\frac{2\pi i}{N}p_{i}}\right)e^{i\oint_{L}A_{i}}\;, \tag{2.11}\] where \(a\equiv a_{\mu}^{A}t^{A}\,dx^{\mu}\) is the \(SU(N)\) gauge field, \(A_{i}\) is the \(U(1)_{i}\) gauge field, and the integers \(p_{i}\) define an embedding \(\mathbb{Z}_{N}\hookrightarrow U(1)_{i}\). As, locally, (2.11) can be realized as a gauge transformation, it can fail to be a symmetry only if it ruins the periodicity9 of the fermion fields. To check this, one should compute the action of (2.11) on the \(\psi_{k}\) Wilson loop, i.e. Footnote 9: Or anti-periodicity, if \(L\) is along the thermal cycle. \[W[L]_{k}=\left({\cal P}e^{i\oint_{L}R_{k}(a)}\right)\left(\Pi_{i}\,e^{i\oint_{L}q_{i}^{k}A_{i}}\right) \tag{2.12}\] (here \(\psi_{k}\) transforms under \(SU(N)\) in the irrep \(R_{k}\) with N-ality \({\cal N}_{k}\), and has charge \(q_{i}^{k}\) under \(U(1)_{i}\)): \[W[L]_{k}\to e^{\frac{2\pi i}{N}{\cal N}_{k}}e^{\frac{2\pi i}{N}\sum_{i}q_{i}^{k}p_{i}}W[L]_{k}\;. \tag{2.13}\] If the action is trivial, i.e. \[\frac{2\pi}{N}{\cal N}_{k}+\frac{2\pi}{N}\sum_{i}q_{i}^{k}p_{i}\in 2\pi\mathbb{Z}\qquad\text{for each }\psi_{k}\;, \tag{2.14}\] the fermions' periodicity conditions are preserved and (2.11) defines a new color-flavor locked \(\mathbb{Z}_{N}\) 1-form symmetry. Like the ordinary \(\mathbb{Z}_{N}^{(1)}\) center transformation, such a color-flavor combined \(\mathbb{Z}_{N}^{(1)}\) center symmetry is still just a _global 1-form symmetry_. A more powerful idea is to introduce the _gauging of this 1-form symmetry_ and to study possible topological obstructions in doing so (generalized 't Hooft anomalies) [15]-[36]. As in the case of conventional gauging of 0-form symmetries, the idea of gauging is that of _identifying_ the field configurations connected by the given symmetry transformations, and of eliminating the double counting in the sum over field configurations. However, as one is now dealing with a 1-form symmetry, the associated gauge transformations are parametrized by a 1-form Abelian gauge function10 \(\lambda=\lambda_{\mu}(x)dx^{\mu}\), see (3.9) below. Footnote 10: Here we recall a crucial aspect of higher-form symmetries: they are all Abelian. This is the reason why color-flavor locked 1-form symmetries are possible. ## 3 Gauging the 1-form \(\mathbb{Z}_{N}\) symmetry: mixed anomalies We now consider the gauging of the 1-form \(\mathbb{Z}_{N}\) symmetry in the \(X\)-ray model, which arises because the subgroup (see Table 1) \[\mathbb{Z}_{N}=SU(N)_{\rm c}\cap(\tilde{U}(1)\times U(1)_{0}) \tag{3.1}\] acts trivially on any field of the theory.11 Footnote 11: Also, as \[U(1)_{\psi\eta}\times U(1)_{V}\supset\tilde{U}(1)\;;\qquad Q_{\psi\eta}+\frac{N+2}{2}Q_{V}=\tilde{Q}\;, \tag{3.2}\] it is possible to gauge the 1-form \(\mathbb{Z}_{N}\) symmetry together with \(U(1)_{\psi\eta}\), \(U(1)_{V}\) and \(U(1)_{0}\). Here we choose to proceed with gauging the \(\mathbb{Z}_{N}\) lying in the intersection (3.1).
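That the \(\mathbb{Z}_{N}\) in (3.1) indeed acts trivially on every field can be made completely explicit via the criterion (2.14). With the embedding read off from the 1-form transformation (3.9) below, i.e. \(p_{\tilde{U}}=-1\) and \(p_{0}=N/2\), the following sympy sketch (an added cross-check of ours, not part of the original text) verifies that \({\cal N}_{k}+\sum_{i}q_{i}^{k}p_{i}\) is a multiple of \(N\) for every field of Table 1, including the scalar \(\phi\):

```python
# A symbolic check (added cross-check) that the Z_N of (3.1) acts trivially on
# all fields, i.e. condition (2.14): N-ality + sum_i q_i p_i = 0 mod N, with the
# embedding p_tilde = -1 (U(1)-tilde) and p_0 = N/2 (U(1)_0) read off from (3.9).
import sympy as sp

N = sp.symbols('N', positive=True, integer=True)
p_tilde, p_0 = -1, N / 2

# (name, N-ality, U(1)-tilde charge, U(1)_0 charge), from Table 1
fields = [
    ('psi',     2,  (N + 4) / 2,  1),
    ('eta',    -1, -(N + 2) / 2, -1),
    ('q',       1,  (N + 2) / 2,  1),
    ('qtilde', -1, -(N + 2) / 2,  1),
    ('phi',     0,  0,           -2),
]

for name, nality, q_t, q_0 in fields:
    total = sp.simplify(nality + q_t * p_tilde + q_0 * p_0)
    assert sp.simplify(total / N).is_integer  # (2.14): trivial action mod N
    print(f'{name:7s}: N-ality + sum q_i p_i = {total}')  # 0, 0, 0, N, -N
```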
In other words, the symmetry group that acts faithfully on the fundamental fields is \[\frac{SU(N)_{\rm c}\times\tilde{U}(1)\times U(1)_{0}}{\mathbb{Z}_{N}}\;, \tag{3.3}\] so to get all the 't Hooft anomalies of the theory we should consider a gauge connection of (3.3) rather than of the simple product principal bundle \[SU(N)\times\tilde{U}(1)\times U(1)_{0}\;. \tag{3.4}\] To gauge (3.4) it is enough to introduce the \(U(1)\) gauge connections \(\tilde{C}\) and \(C_{0}\) in addition to the dynamical \(SU(N)\) color gauge field, \(a\). However, by doing so, one obtains only a subset of all the possible gauge connections allowed by the gauging of (3.3): gauging (3.3), one can allow \(\tilde{C}\), \(C_{0}\) and \(a\) not to be proper gauge connections individually, e.g. one can allow fractional Dirac quantization for \(\tilde{C}\) and \(C_{0}\). A very convenient way to describe a generic gauge connection for (3.3) is to introduce a pair of fields [15]-[36] \[\left(B_{\rm c}^{(2)}\;,B_{\rm c}^{(1)}\right)\;, \tag{3.5}\] where \(B_{\rm c}^{(1)}\) is a well-defined12 \(U(1)\) gauge connection, and \(B_{\rm c}^{(2)}\) is a 2-form gauge field that satisfies Footnote 12: By a well-defined \(U(1)\) connection we mean one that satisfies the usual Dirac quantization condition. \[NB_{\rm c}^{(2)}=dB_{\rm c}^{(1)}\;, \tag{3.6}\] so that \[\int_{\Sigma}B_{\rm c}^{(2)}\in\frac{2\pi}{N}\mathbb{Z}\;, \tag{3.7}\] for any 2-cycle \(\Sigma\). Then we embed \(a\), \(\tilde{C}\) and \(C_{0}\) into \[\widetilde{a}=a+\frac{1}{N}B_{\rm c}^{(1)}\;,\quad A_{0}=C_{0}+\frac{1}{2}B_{\rm c}^{(1)}\quad\text{and}\quad\tilde{A}=\tilde{C}-\frac{1}{N}B_{\rm c}^{(1)}\;, \tag{3.8}\] where \(\widetilde{a}\) is a \(U(N)\) connection, and \(A_{0}\) and \(\tilde{A}\) are well-defined \(U(1)\) connections13. Doing so, the \(\mathbb{Z}_{N}\) 1-form symmetry of the original group is embedded in a continuous 1-form symmetry, Footnote 13: In this definition there is an ambiguity, as we could have set \(A_{0}=C_{0}-\frac{1}{2}B_{\rm c}^{(1)}\) instead. The construction would be equivalent, but, to describe the same background, we would need to add some integer flux for \(A_{0}\). The same sign ambiguity is present also for the \(\psi\eta\) model. We will comment on the consequences of this sign choice on the anomalies in footnote 16. \[B_{\rm c}^{(2)}\to B_{\rm c}^{(2)}+{\rm d}\lambda_{\rm c}\;,\qquad B_{\rm c}^{(1)}\to B_{\rm c}^{(1)}+N\lambda_{\rm c}\;,\] \[\widetilde{a}\to\widetilde{a}+\lambda_{\rm c}\;,\qquad\tilde{A}\to\tilde{A}-\lambda_{\rm c}\;,\qquad A_{0}\to A_{0}+\frac{N}{2}\lambda_{\rm c}\;, \tag{3.9}\] parameterized by the \(U(1)\) gauge connection \(\lambda_{\rm c}\), which cancels any local degrees of freedom introduced by \(B_{\rm c}^{(1)}\). Local physics is not affected by these global issues, so the fermionic Lagrangian (locally) still reads \[\overline{\psi}\gamma^{\mu}\left(\partial+{\cal R}_{\rm S}(a)+\frac{N+4}{2}\tilde{C}+C_{0}\right)_{\mu}P_{\rm L}\psi\] \[+\,\overline{\eta}\gamma^{\mu}\left(\partial+{\cal R}_{{\rm F}^{*}}(a)-\frac{N+2}{2}\tilde{C}-C_{0}\right)_{\mu}P_{\rm L}\eta\] \[+\,\overline{q}\gamma^{\mu}\left(\partial+{\cal R}_{{\rm F}}(a)+\frac{N+2}{2}\tilde{C}+C_{0}\right)_{\mu}P_{\rm L}q\] \[+\,\overline{\tilde{q}}\gamma^{\mu}\left(\partial+{\cal R}_{{\rm F}^{*}}(a)-\frac{N+2}{2}\tilde{C}+C_{0}\right)_{\mu}P_{\rm L}\tilde{q}\;.
\tag{3.10}\] However, as the faithful symmetry group is (3.3), we can express this Lagrangian in terms of well-defined geometrical entities (well-defined gauge connections) as \[\overline{\psi}\gamma^{\mu}\left(\partial+{\cal R}_{\rm S}(\widetilde{a}-\frac{1}{N}B_{\rm c}^{(1)})+\frac{N+4}{2}\left(\tilde{A}+\frac{1}{N}B_{\rm c}^{(1)}\right)+(A_{0}-\frac{1}{2}B_{\rm c}^{(1)})\right)_{\mu}P_{\rm L}\psi\] \[+\,\overline{\eta}\gamma^{\mu}\left(\partial-(\widetilde{a}-\frac{1}{N}B_{\rm c}^{(1)})-\frac{N+2}{2}(\tilde{A}+\frac{1}{N}B_{\rm c}^{(1)})-(A_{0}-\frac{1}{2}B_{\rm c}^{(1)})\right)_{\mu}P_{\rm L}\eta\] \[+\,\overline{q}\gamma^{\mu}\left(\partial+(\widetilde{a}-\frac{1}{N}B_{\rm c}^{(1)})+\frac{N+2}{2}(\tilde{A}+\frac{1}{N}B_{\rm c}^{(1)})+(A_{0}-\frac{1}{2}B_{\rm c}^{(1)})\right)_{\mu}P_{\rm L}q\] \[+\,\overline{\tilde{q}}\gamma^{\mu}\left(\partial-(\widetilde{a}-\frac{1}{N}B_{\rm c}^{(1)})-\frac{N+2}{2}(\tilde{A}+\frac{1}{N}B_{\rm c}^{(1)})+(A_{0}-\frac{1}{2}B_{\rm c}^{(1)})\right)_{\mu}P_{\rm L}\tilde{q}\;, \tag{3.11}\] which is explicitly invariant under the 1-form symmetry (3.9). The effective field-strength tensors acting on the fermions are accordingly \[{\cal R}_{\rm S}(F(\widetilde{a})-B_{\rm c}^{(2)})+\frac{N+4}{2}\left(d\tilde{A}+B_{\rm c}^{(2)}\right)+(dA_{0}-\frac{N}{2}B_{\rm c}^{(2)})\;,\] \[{\cal R}_{\rm F^{*}}(F(\widetilde{a})-B_{\rm c}^{(2)})-\frac{N+2}{2}\left(d\tilde{A}+B_{\rm c}^{(2)}\right)-(dA_{0}-\frac{N}{2}B_{\rm c}^{(2)})\;,\] \[{\cal R}_{\rm F}(F(\widetilde{a})-B_{\rm c}^{(2)})+\frac{N+2}{2}\left(d\tilde{A}+B_{\rm c}^{(2)}\right)+(dA_{0}-\frac{N}{2}B_{\rm c}^{(2)})\;,\] \[{\cal R}_{\rm F^{*}}(F(\widetilde{a})-B_{\rm c}^{(2)})-\frac{N+2}{2}\left(d\tilde{A}+B_{\rm c}^{(2)}\right)+(dA_{0}-\frac{N}{2}B_{\rm c}^{(2)})\;. \tag{3.12}\] Note that by turning off the 1-form gauge fields \(\left(B_{\rm c}^{(2)}=0,\,B_{\rm c}^{(1)}=0\right)\), one goes back to the standard \(SU(N)\times\tilde{U}(1)\times U(1)_{0}\) gauge theory.
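Note that in (3.11)-(3.12) all field strengths enter only through the three combinations \(F(\widetilde{a})-B_{\rm c}^{(2)}\), \(d\tilde{A}+B_{\rm c}^{(2)}\) and \(dA_{0}-\frac{N}{2}B_{\rm c}^{(2)}\). Their invariance under (3.9) can be verified in a few lines (a sketch that we add; since 2-forms commute, plain symbols suffice):

```python
# Invariance check of the combinations in (3.12) under the 1-form shift (3.9):
# F(a~) -> F(a~) + d(lambda), B -> B + d(lambda), dA~ -> dA~ - d(lambda),
# dA0 -> dA0 + (N/2) d(lambda).  2-forms commute, so plain symbols suffice.
import sympy as sp

N, F, B, dAt, dA0, dlam = sp.symbols('N F B dAt dA0 dlam')
shift = {F: F + dlam, B: B + dlam, dAt: dAt - dlam, dA0: dA0 + N * dlam / 2}

for combo in (F - B, dAt + B, dA0 - N * B / 2):
    assert sp.expand(combo.subs(shift, simultaneous=True) - combo) == 0
print('all three combinations are invariant under (3.9)')
```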
The anomalies are compactly expressed by a six-dimensional (6D) anomaly functional [38, 39], \[{\cal A}^{\rm 6D}=\int_{\Sigma_{6}}\frac{2\pi}{3!(2\pi)^{3}}\Big\{{\rm tr}_{\rm c}\left({\cal R}_{\rm S}(\tilde{F}_{\rm c}-B_{\rm c}^{(2)})+\frac{N+4}{2}(d\tilde{A}+B_{\rm c}^{(2)})+dA_{0}-\frac{N}{2}B_{\rm c}^{(2)}\right)^{3}\] \[\qquad\qquad+{\rm tr}_{\rm c,f}\left({\cal R}_{\rm F^{*}}(\tilde{F}_{\rm c}-B_{\rm c}^{(2)})-\frac{N+2}{2}\left(d\tilde{A}+B_{\rm c}^{(2)}\right)-(dA_{0}-\frac{N}{2}B_{\rm c}^{(2)})\right)^{3}\] \[\qquad\qquad+{\rm tr}_{\rm c}\left({\cal R}_{\rm F}(\tilde{F}_{\rm c}-B_{\rm c}^{(2)})+\frac{N+2}{2}\left(d\tilde{A}+B_{\rm c}^{(2)}\right)+(dA_{0}-\frac{N}{2}B_{\rm c}^{(2)})\right)^{3}\] \[\qquad\qquad+{\rm tr}_{\rm c}\left({\cal R}_{\rm F^{*}}(\tilde{F}_{\rm c}-B_{\rm c}^{(2)})-\frac{N+2}{2}\left(d\tilde{A}+B_{\rm c}^{(2)}\right)+(dA_{0}-\frac{N}{2}B_{\rm c}^{(2)})\right)^{3}\Big\}\;. \tag{3.13}\] Expanding the 6D anomaly functional (3.13), one finds \[\frac{2\pi}{3!(2\pi)^{3}}\int_{\Sigma_{6}}\big\{[(N+4)-(N+4)+1-1]\,{\rm tr}_{\rm c}(\tilde{F}_{\rm c}-B_{\rm c}^{(2)})^{3}\big\}\] \[+\frac{1}{8\pi^{2}}\int_{\Sigma_{6}}{\rm tr}_{\rm c}(\tilde{F}_{\rm c}-B_{\rm c}^{(2)})^{2}\big\{(N+2)[\frac{N+4}{2}\,(d\tilde{A}+B_{\rm c}^{(2)})+dA_{0}-\frac{N}{2}B_{\rm c}^{(2)}]\] \[+(N+4)[-\frac{N+2}{2}\,(d\tilde{A}+B_{\rm c}^{(2)})-(dA_{0}-\frac{N}{2}B_{\rm c}^{(2)})]\] \[+1\cdot[\frac{N+2}{2}\,(d\tilde{A}+B_{\rm c}^{(2)})+dA_{0}-\frac{N}{2}B_{\rm c}^{(2)}]\] \[+1\cdot[-\frac{N+2}{2}\,(d\tilde{A}+B_{\rm c}^{(2)})+(dA_{0}-\frac{N}{2}B_{\rm c}^{(2)})]\big\}\] \[+\frac{1}{24\pi^{2}}\int_{\Sigma_{6}}\big\{\frac{N(N+1)}{2}[\frac{N+4}{2}\,(d\tilde{A}+B_{\rm c}^{(2)})+dA_{0}-\frac{N}{2}B_{\rm c}^{(2)}]^{3}\] \[+\,(N+4)N[-\frac{N+2}{2}\,(d\tilde{A}+B_{\rm c}^{(2)})-(dA_{0}-\frac{N}{2}B_{\rm c}^{(2)})]^{3}\] \[+\,N[\frac{N+2}{2}\,(d\tilde{A}+B_{\rm c}^{(2)})+(dA_{0}-\frac{N}{2}B_{\rm c}^{(2)})]^{3}\] \[+\,N[-\frac{N+2}{2}\,(d\tilde{A}+B_{\rm c}^{(2)})+(dA_{0}-\frac{N}{2}B_{\rm c}^{(2)})]^{3}\big\}\;, \tag{3.14}\] by making use of the known formulas for the traces of quadratic and cubic forms in the various representations. Note that the terms proportional to \({\rm tr}_{\rm c}(\tilde{F}_{\rm c}-B_{\rm c}^{(2)})^{3}\) and \({\rm tr}_{\rm c}(\tilde{F}_{\rm c}-B_{\rm c}^{(2)})^{2}\) in (3.14) cancel completely, as they should. Thus the anomalies are expressed by the last four lines of (3.14) only: \[{\cal A}^{\rm 6D}=\frac{1}{24\pi^{2}}\int_{\Sigma_{6}}\big\{\frac{N(N+1)}{2}[\frac{N+4}{2}\,(d\tilde{A}+B_{\rm c}^{(2)})+dA_{0}-\frac{N}{2}B_{\rm c}^{(2)}]^{3} \tag{3.15}\] \[+\,(N+4)N[-\frac{N+2}{2}\,(d\tilde{A}+B_{\rm c}^{(2)})-(dA_{0}-\frac{N}{2}B_{\rm c}^{(2)})]^{3}\] \[+\,N[\frac{N+2}{2}\,(d\tilde{A}+B_{\rm c}^{(2)})+(dA_{0}-\frac{N}{2}B_{\rm c}^{(2)})]^{3}\] \[+\,N[-\frac{N+2}{2}\,(d\tilde{A}+B_{\rm c}^{(2)})+(dA_{0}-\frac{N}{2}B_{\rm c}^{(2)})]^{3}\big\}\;.\] Below we are going to extract the mixed anomalies involving the \(U(1)_{0}\) or \(\tilde{U}(1)\) gauge fields, \(A_{0}\), \(\tilde{A}\), together with the 1-form \(\mathbb{Z}_{N}\) gauge field, \((B_{\rm c}^{(2)},B_{\rm c}^{(1)})\). To compute such anomalies explicitly it is useful to take as our spacetime manifold the 4-torus, \(T^{4}=T_{1}^{2}\times T_{2}^{2}\), with \[\int_{T_{1}^{2}}B_{c}^{(2)}=\frac{2\pi}{N}\;,\quad\int_{T_{2}^{2}}B_{c}^{(2)}=\frac{2\pi}{N}\;,\quad\int_{T^{4}}\big(B_{c}^{(2)}\big)^{2}=\frac{8\pi^{2}}{N^{2}}\;.
\tag{3.16}\] We recall again that if \((B_{\rm c}^{(2)},B_{\rm c}^{(1)})\) is set to zero, the UV anomalies simply express the conventional 't Hooft anomaly triangles involving the \(U(1)_{0}\times\tilde{U}(1)\) background fields, and by construction those are matched by the assumed set of massless baryons of a candidate IR theory, such as the one discussed in Appendix B. What we shall exhibit below are only the new, stronger anomalies introduced by the gauging of the 1-form \(\mathbb{Z}_{N}\) symmetry. As will be discussed below (Sec. 3.3), the consequence of these is that the confining, symmetric vacuum with just one massless baryon and no other nontrivial sectors is not consistent. ### \(\tilde{A}-\left[B_{\rm c}^{(2)}\right]^{2}\) anomaly To calculate the anomaly in \(\tilde{U}(1)\) caused by the introduction of the 1-form \(\mathbb{Z}_{N}\) gauge fields, let us briefly recall the procedure for calculating the anomalies of the 4D theory according to the Stora-Zumino descent procedure [38, 39, 40], starting, in our case, from the 6D anomaly functional (3.15).14 One collects the terms of the form \(\left(B_{\rm c}^{(2)}\right)^{2}d\tilde{A}\) and integrates to get a 5D functional of the form Footnote 14: As emphasized in [7], all the calculations can be done staying in 4D, à la Fujikawa. That approach gives (3.19) directly, for instance, from the functional Jacobian. \[\propto\int_{\Sigma_{5}}\,\left(B_{\rm c}^{(2)}\right)^{2}\tilde{A}\;. \tag{3.17}\] Now the variation \(\tilde{A}\to\tilde{A}+\delta\tilde{A}\), \[\delta\tilde{A}=d\,\delta\alpha\;, \tag{3.18}\] yields, by anomaly inflow, the anomalous variation in the (boundary) 4D theory, \[\delta S_{\delta\alpha}=\frac{\tilde{K}}{8\pi^{2}}\int_{\Sigma_{4}}\,\left(B_{\rm c}^{(2)}\right)^{2}\delta\alpha\;. \tag{3.19}\] By collecting terms we find \[\tilde{K}=-\frac{N^{3}(N+3)}{2}\neq 0\;. \tag{3.20}\] The \(\tilde{U}(1)\) symmetry is thus broken (i.e., becomes anomalous) under the generalized 1-form gauging of \(\mathbb{Z}_{N}\). ### \(A_{0}-\left[B_{\rm c}^{(2)}\right]^{2}\) anomaly An analogous calculation leads to the \(U(1)_{0}\) anomaly due to the 1-form gauging of the \(\mathbb{Z}_{N}\) symmetry, \[\delta S_{\delta\alpha_{0}}=\frac{K_{0}}{8\pi^{2}}\int_{\Sigma_{4}}\,\left(B_{\rm c}^{(2)}\right)^{2}\delta\alpha_{0}\;,\qquad K_{0}=N^{2}(N+3)\;. \tag{3.21}\] This appears to imply that the \(U(1)_{0}\) symmetry is also broken by the 1-form gauging of the \(\mathbb{Z}_{N}\) symmetry. However, the scalar VEV \(\langle\phi\rangle=v\) breaks the \(U(1)_{0}\) symmetry spontaneously to \(\mathbb{Z}_{2}\). This means that, in contrast to (3.19),(3.20), the variation (3.21) cannot be used for the generalized UV-IR anomaly-matching check. For that purpose, we can use only nonanomalous15 and unbroken symmetry operations, i.e., variations corresponding to a nontrivial \(\mathbb{Z}_{2}\) transformation, \(\delta\alpha_{0}=\pm\pi\). Taking into account the nontrivial 't Hooft flux (3.7),(3.16), Footnote 15: In the sense of the standard strong anomaly. \[\frac{1}{8\pi^{2}}\int_{\Sigma_{4}}\,\big(B_{\rm c}^{(2)}\big)^{2}=\frac{n}{N^{2}}\;,\qquad n\in\mathbb{Z}\;, \tag{3.22}\] and the crucial coefficient of the anomaly, \(K_{0}=N^{2}(N+3)\), one sees that the partition function changes sign for even16 \(N\). We reproduce exactly the \(\mathbb{Z}_{2}\) anomaly found in [7]. A symbolic cross-check of the coefficients (3.20) and (3.21) is sketched below.
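The coefficients (3.20) and (3.21) follow from straightforward but slightly tedious algebra. As an added cross-check (not in the original text), the following sympy sketch models the commuting 2-forms \(d\tilde{A}\), \(dA_{0}\), \(B_{\rm c}^{(2)}\) by ordinary symbols, expands (3.15), and reads off the descent coefficients of \(d\tilde{A}\,[B_{\rm c}^{(2)}]^{2}\) and \(dA_{0}\,[B_{\rm c}^{(2)}]^{2}\):

```python
# A sketch (sympy) extracting the mixed-anomaly coefficients K-tilde and K_0 of
# Secs. 3.1-3.2 from the 6D functional (3.15). Since 2-forms commute, dA-tilde,
# dA_0 and B_c^(2) are modeled by plain symbols; with the prefactor 1/(24 pi^2),
# dividing by 3 converts monomial coefficients into the 1/(8 pi^2) normalization
# of (3.19) and (3.21).
import sympy as sp

N = sp.symbols('N', positive=True)
dAt, dA0, B = sp.symbols('dAt dA0 B')  # stand-ins for dA-tilde, dA_0, B_c^(2)

u = dAt + B            # (dA-tilde + B_c^(2))
w = dA0 - N * B / 2    # (dA_0 - (N/2) B_c^(2))

# the four terms of (3.15), with their multiplicities
A6 = (N * (N + 1) / 2 * ( (N + 4) / 2 * u + w)**3
      + (N + 4) * N   * (-(N + 2) / 2 * u - w)**3
      + N             * ( (N + 2) / 2 * u + w)**3
      + N             * (-(N + 2) / 2 * u + w)**3)

poly = sp.Poly(sp.expand(A6 / 3), dAt, dA0, B)
K_tilde = sp.factor(poly.coeff_monomial(dAt * B**2))
K_0 = sp.factor(poly.coeff_monomial(dA0 * B**2))
print('K_tilde =', K_tilde)  # expect -N**3*(N + 3)/2, cf. (3.20)
print('K_0     =', K_0)      # expect  N**2*(N + 3),   cf. (3.21)
```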
Footnote 16: By taking the equivalent definition of \(C_{0}\) in footnote 13, one obtains \(K_{0}=-\frac{1}{2}N^{2}(N+2)(N+3)\), which signals a \(\mathbb{Z}_{2}\) anomaly only for \(N=0\bmod 4\). Exactly the same happens in the \(\psi\eta\) model. One might wonder how it is possible that the two constructions lead to different anomalies. The puzzle is only apparent, however: the system also has an \(A_{0}(dA_{0})^{2}\) anomaly, and, once it is taken into account as well, the anomalous phase under a \(\mathbb{Z}_{2}^{F}\) transformation depends only on the background and not on the sign convention chosen. Moreover, the choice of convention is totally irrelevant for the 't Hooft anomaly matching with the confining phase, as the \([\mathbb{Z}_{2}]^{3}\) anomaly is matched for every \(N\). #### 3.2.1 Remarks The anomalies found in Sec. 3.1 and Sec. 3.2 represent the main result of the present work. As in our earlier works [7, 8, 9], the nontrivial 't Hooft \(\mathbb{Z}_{N}\) fluxes (1.6),(3.22) mean that one is considering the 4D spacetime compactified on, e.g., a bi-torus, \(T^{2}\times T^{2}\). See Sec. 4 below for more remarks on the \(\mathbb{Z}_{2}\) vortices in such a spacetime implied by (3.7). ### Chirally symmetric vacuum versus dynamical Higgs phase Now, what is the implication of the mixed anomalies found in the \(X\)-ray model, (3.19), (3.22), for the physics in the infrared, that is, for the phase of the \(\psi\eta\) model? We consider here two particularly interesting dynamical possibilities, a confining, chirally symmetric vacuum and a dynamical Higgs phase, which are both known to be compatible with the conventional 't Hooft anomaly-matching constraints. If we assume that the infrared system were a confining, chirally symmetric one, with no bifermion condensates forming, then the conventional 't Hooft anomalies would be matched by a low-energy theory consisting of just a single color-singlet massless composite fermion, the baryon \({\cal B}_{11}\sim\psi\eta\eta\) (see Appendix B). Knowing its quantum numbers, we can construct the infrared anomaly functional, following the same procedure used at the beginning of this section. The answer is the expression (B.3), which does not contain the 1-form gauge field \(B_{\rm c}^{(2)}\): it reproduces neither of the mixed anomalies, (3.19) or (3.22). We must conclude that such a vacuum, with just \({\cal B}_{11}\sim\psi\eta\eta\) and nothing else, cannot represent the correct IR physics of the \(\psi\eta\) model, to which the \(X\)-ray model reduces in the infrared. On the other hand, the dynamical Higgs phase (analyzed in Appendix C) is characterized by bifermion condensates \[\langle\psi^{ij}\eta_{i}^{B}\rangle\,=\,c_{\psi\eta}\,\Lambda^{3}\delta^{jB}\neq 0\;,\qquad j,B=1,\ldots,N\;. \tag{3.23}\] Under this assumption, both \(U(1)_{0}\) and \(\tilde{U}(1)\) are broken by the condensate. Thus, if one requires the condensate (3.23) to be everywhere nonvanishing then, as it is charged under \(U(1)_{0}\) and \(\tilde{U}(1)\), one cannot allow any nonvanishing \(B_{\rm c}^{(2)}\) field. If, the other way around, one imposes a nonvanishing \(B_{\rm c}^{(2)}\) field, then \(\psi^{ij}\eta_{i}^{B}\) cannot condense everywhere and, similarly to what happens with \(\phi\) in the X-ray model in the UV, there must be vortices where the condensate (3.23) vanishes.
We leave a more in-depth description of the matching in this case for subsequent work but, disregarding the details, the matching must work, as one can arrive at the same phase perturbatively by substituting for the composite operator \(\psi^{ij}\eta_{i}^{B}\) a fundamental scalar field with the same quantum numbers and a suitable potential. This can be understood as a consistent way in which the infrared dynamics reflects the impossibility (an anomaly), (3.19), of gauging the color-flavor locked 1-form symmetry, (3.5), found in the UV theory.17 Footnote 17: The logic of this argument is somewhat similar to the one employed in [19] in the study of the vacuum of the pure \(SU(N)\) Yang-Mills theory at \(\theta=\pi\). ## 4 Reduction to the \(\psi\eta\) model, \(\mathbb{Z}_{2}\) vortex and the fermion zeromodes In order to make the argument of the present work watertight, let us discuss here a subtle question associated with the reduction of the \(X\)-ray theory to the \(\psi\eta\) model in the infrared. The basic statement is that the nonvanishing VEV \(\langle\phi\rangle\) gives mass to the extra Dirac pair of fermions, \(q,\tilde{q}\), and that the system indeed reduces in the infrared to the \(\psi\eta\) model (the simplest BY model), studied in [7, 8, 9]. The point is that the generalized, mixed anomalies (3.19) and (3.22) occur in the background of the external \(\tilde{U}(1)\) and \(U(1)_{0}\) gauge fields with fluxes, (3.7). In the case of the \(\tilde{U}(1)\) gauge field \(\tilde{A}\) this does not present a problem. On the other hand, \(U(1)_{0}\) is spontaneously broken to \(\mathbb{Z}_{2}\) by the \(\phi\) VEV, see Table 1. This means that the relevant background fields \((A_{0},\phi)\) correspond to a (regular) \(\mathbb{Z}_{2}\) vortex configuration. Again, this does not present any issue in itself: there is nothing wrong in considering such a particular (and convenient) background and asking whether the gauging of the color-flavor-locked 1-form \(\mathbb{Z}_{N}\) symmetry encounters a topological obstruction (a 't Hooft anomaly). This is what is studied in Sec. 3.1, Sec. 3.2 and Sec. 3.3. A (possible) problem is that the \(q,\tilde{q}\) fields are massive everywhere and decouple from the system, except along the vortex core, where \(\phi=0\) and \(m_{q,\tilde{q}}=0\). As is well known, such a system develops a chiral two-dimensional \(q,\tilde{q}\) zeromode, traveling along the vortex core at the speed of light. It will produce an anomaly in the \(\tilde{U}(1)\) gauge symmetry on the 2D vortex worldsheet, as discussed, e.g., by Callan and Harvey [41]. To make the parallelism with the problem discussed in [41] complete, let us for the moment forget about the contribution of the fermions \(\psi\) and \(\eta\) in Table 1. It will be taken care of later. In the 4D system considered in [41], a Dirac fermion \(\Psi\) with an electric charge is coupled to a complex scalar field \(\Phi\) via a Yukawa interaction, \[{\cal L}_{Y}=g_{Y}\bar{\Psi}\Phi\Psi\;, \tag{4.1}\] and \(\Phi\) is assumed to get a nonvanishing VEV, \(\langle\Phi\rangle=v\neq 0\). The axial \(U(1)_{A}\) is spontaneously broken by the condensate, whereas the vector (electromagnetic) symmetry \(U(1)_{\rm em}\) remains exact. Such a system can develop a solitonic vortex, \[\Phi(x)=f(\rho)\,e^{i\theta}\;,\quad f(0)=0\;,\quad f(\infty)=v\;,\qquad x_{2}+ix_{3}=\rho\,e^{i\theta}.
\tag{4.2}\] Now the zeromode for \(\Psi\) which develops on the string (vortex core) turns out to have a chiral nature in the vortex worldsheet \((x_{0},x_{1})\). As \(\Psi\) is charged, such a massless fermion causes a 2D chiral anomaly, \[D_{a}J_{a}=\frac{1}{2\pi}\epsilon_{ab}\,\partial^{a}A^{b}\;,\qquad a,b=0,1\;, \tag{4.3}\] where \(A^{\mu}\) and \(J^{\mu}\) are the \(U(1)_{\rm em}\) gauge field and its covariant current. As \(U(1)_{\rm em}\) is supposed to be an exactly conserved symmetry of the system, this appears to present a paradox. The solution to this puzzle [41] is the following. As the system suffers from the ABJ anomaly for the axial \(U(1)_{A}\) symmetry (the \(U(1)_{A}-[U(1)_{\rm em}]^{2}\) triangle), the spontaneous breaking of \(U(1)_{A}\) means that the low-energy (\(\mu\ll v\)) 4D effective action has an axion-like (or better, \(\pi^{0}\to 2\gamma\)-like) term, \[{\cal L}_{\pi_{0}\gamma\gamma}=\frac{e^{2}}{32\pi^{2}}\int d^{4}x\,\pi(x)\,\epsilon_{\mu\nu\rho\sigma}F^{\mu\nu}F^{\rho\sigma}\;, \tag{4.4}\] where \(\pi(x)\) is the pion field, \[\Phi(x)=v\,e^{i\pi(x)/v}\;. \tag{4.5}\] Now, in the presence of the soliton vortex, the pion field \(\pi(x)\) is ill-defined as one goes around the vortex string, see (4.2). As a result, the \(U(1)_{\rm em}\) variation \(\delta A_{\mu}=\partial_{\mu}\omega\) of \({\cal L}_{\pi_{0}\gamma\gamma}\) turns out to be nonvanishing. The nontrivial vorticity in \(\pi(x)\sim\theta(x)\), \[\partial^{\mu}\partial^{\nu}\theta(x)=-2\pi\epsilon^{\mu\nu}\delta(x_{2})\delta(x_{3})\;,\qquad\mu,\nu=2,3\;, \tag{4.6}\] indeed gives rise [41] to an "anomaly-inflow" contribution \(\delta{\cal L}_{\pi_{0}\gamma\gamma}\) on the vortex worldsheet \((x_{0},x_{1})\), which precisely cancels the 2D chiral anomaly (4.3) generated by the fermion zeromode. The Callan-Harvey argument applies exactly to our model, upon identifying (see Table 1) \[\Psi\equiv\left(\begin{array}{c}q\\ \tilde{q}^{c}\end{array}\right)\;,\qquad U(1)_{\rm em}\equiv\tilde{U}(1)\;,\qquad U(1)_{A}\equiv U(1)_{0}\;, \tag{4.7}\] as long as the effects of the other fermions \(\psi\) and \(\eta\) are not considered. In our model \(q^{i}\) and \(\tilde{q}_{i}\) form, in the bulk, a Dirac fermion in the fundamental of \(SU(N)_{\rm c}\), meaning that the 2D worldsheet fermion is also a fundamental of \(SU(N)_{\rm c}\). Because of that, the same mechanism (a local 2D anomaly, canceled by a bulk inflow) happens also for \(SU(N)_{\rm c}\), without any significant difference. More interestingly, the fact that the worldsheet fermions are coupled to the bulk gauge field means that, as we continue to follow the RG flow and approach \(\mu\sim\Lambda\), something should happen. In this work, we do not prescribe in detail what happens: we assume that what remains of the vortices in the IR does not contribute to the 't Hooft anomaly matching of the anomalies found above.18 Footnote 18: If we lift this hypothesis some other interesting possibilities might arise. We will discuss them in future work. As was recalled at the end of Sec. 3.2, the 't Hooft fluxes (3.7), (3.22) mean that one is working on a bi-torus, \(T_{1}\times T_{2}\), spacetime. The associated fractional flux of \(A_{0}\) (3.7), hence the \(\mathbb{Z}_{2}\) vortex, must accordingly be considered both in \(T_{1}\) and in \(T_{2}\). The Callan-Harvey resolution of the apparent puzzle associated with the vortex (a point on \(T_{1}\)) and the fermion zeromodes propagating in the vortex worldsheet \(T_{2}\) has been adapted to our problem as explained above.
Exactly the same argument eliminates any issue concerning the second vortex puncturing \(T_{2}\) and the chiral fermion zeromode generating an anomaly in \(T_{1}\). The details will appear elsewhere. As a final remark, we note that the questions discussed here (the fermion zeromodes traveling along the vortex core, etc.) concern perturbative, infinitesimal \(\tilde{U}(1)\) variations of the system. Regarding the \(\mathbb{Z}_{2}-\left[\mathbb{Z}_{N}^{(1)}\right]^{2}\) anomaly discussed in subsection 3.2, the analysis might appear to be more involved, and the 2D chiral fermions might, in principle, contribute to this anomaly. However, this is not the case: by explicit calculation both in the X-ray model (as shown in subsection 3.2) and in the \(\psi\eta\) model (as shown in Ref. [36]) we have found a nontrivial \(\mathbb{Z}_{2}-\left[\mathbb{Z}_{N}^{(1)}\right]^{2}\) anomaly; since both are \(\mathbb{Z}_{2}\) anomalies, they must agree, and the overall contribution of the vortex physics must vanish. ## 5 Discussion and Summary All Bars-Yankielowicz (BY) and generalized Georgi-Glashow (GG) models [1]-[6] possess a nonanomalous fermion parity symmetry \((\mathbb{Z}_{2})_{F}\),19 Footnote 19: \((\mathbb{Z}_{2})_{F}\) is equivalent to a subgroup of the proper Lorentz group. The point is whether or not, in the nontrivial 2-form gauge background \(B_{c}^{(2)}\), the symmetry is broken by a ('t Hooft) anomaly. \[\psi_{i}\rightarrow-\psi_{i}\;, \tag{5.1}\] where \(i\) labels the fermions present in the model. In the standard quantization, the instanton analysis tells us that (5.1) is a nonanomalous symmetry of the quantum theory. However, in some cases with even \(N\) (models of type I20), this statement holds because its anomaly is given by Footnote 20: Among the generalized \(SU(N)\) BY and GG models with \(p\) Dirac pairs of fermions in the fundamental representation, the models of type I are those with \(N\) and \(p\) both even. The other models are called type II in this note. \[\Delta S=\sum_{i}b_{i}\times\frac{1}{8\pi^{2}}\int_{\Sigma_{4}}\mathrm{tr}_{F}\left[F(a)^{2}\right]\,\times(\pm\pi)=2\pi\mathbb{Z}\;, \tag{5.2}\] with \[\sum_{i}b_{i}=\text{even integer}\neq 0\;, \tag{5.3}\] where \(\frac{1}{8\pi^{2}}\int_{\Sigma_{4}}\mathrm{tr}_{F}\left[F(a)^{2}\right]\) is the standard integer instanton number. It is essential to realize that the \((\mathbb{Z}_{2})_{F}\) anomaly is absent because the sum of the anomaly coefficients \(\sum_{i}b_{i}\) is a nonzero even number, _not_ because it vanishes. For the \(\psi\eta\) model, \[G=\frac{SU(N)_{\text{c}}\times SU(N+4)\times U(1)_{\psi\eta}\times(\mathbb{Z}_{2})_{F}}{\mathbb{Z}_{N}\times\mathbb{Z}_{N+4}}=\frac{\tilde{G}}{\mathbb{Z}_{N}\times\mathbb{Z}_{N+4}}\;, \tag{5.4}\] and the group \(\tilde{G}\) has two connected components (\(\Pi_{0}(\tilde{G})=\mathbb{Z}_{2}\)) [8]. This always happens in models of type I. Instead, in type II models, \((\mathbb{Z}_{2})_{F}\) is contained in a connected \(\tilde{G}\). In general, in a type I theory, the gauging of the 1-form \(\mathbb{Z}_{N}\) symmetry leads to the \((\mathbb{Z}_{2})_{F}\) anomaly, given by a master formula [9]21 Footnote 21: \(c_{i}\) is the \(\mathbb{Z}_{2}\) charge of the fermion representation \(R_{i}\); \(\mathcal{N}(R)\), \(d(R)\), \(D(R)\) are the associated \(N\)-ality, dimension, and Dynkin index, respectively.
\[\Delta S^{\text{(Mixed anomaly)}}=(\pm\pi)\cdot\sum_{i}c_{i}\left(d(R_{i})\mathcal{N}(R_{i})^{2}-N\cdot D(R_{i})\right)\frac{1}{8\pi^{2}}\int_{\Sigma_{4}}\left(B_{\text{c}}^{(2)}\right)^{2}\;. \tag{5.5}\] The calculation gives \[\sum_{i}c_{i}\left(d(R_{i})\mathcal{N}(R_{i})^{2}-N\cdot D(R_{i})\right)=N^{2}\;, \tag{5.6}\] but (see (3.16)) \[\frac{1}{8\pi^{2}}\int_{\Sigma_{4}}\left(B_{\text{c}}^{(2)}\right)^{2}=\frac{1}{N^{2}}\;, \tag{5.7}\] therefore \[\Delta S^{\text{(Mixed anomaly)}}=\pm\pi\;. \tag{5.8}\] The partition function changes sign under \((\mathbb{Z}_{2})_{F}\) in the \(\psi\eta\) model with \(N\) even, and in all other type I models: this is the mixed \((\mathbb{Z}_{2})_{F}-[\mathbb{Z}_{N}]^{2}\) anomaly. As the candidate massless baryons do not support this generalized anomaly (see (B.3) in the simplest, \(\psi\eta\), model), such a confining vacuum cannot represent a correct phase in type I models. The aim of the present work was to cure the defect of the original analysis [7], i.e., the use of a singular \((\mathbb{Z}_{2})_{F}\) gauge field. In a theory with a regulator Dirac pair of fields \(q,\tilde{q}\) (the \(X\)-ray theory), the singular \(\mathbb{Z}_{2}\) vortex background needed in [7] is replaced by a regular \(\mathbb{Z}_{2}\) vortex, without affecting the crucial holonomy, (1.1). The 1-form \(\mathbb{Z}_{N}\) symmetry now lies in the intersection of \(SU(N)\) and two nonanomalous \(U(1)\) symmetries, (3.1). In other words, the model is described by a well-defined principal bundle, (3.3). The generalized cocycle condition is met exactly as in [25]. In the \(X\)-ray theory the new anomalies are of the types \(\tilde{A}-\left[B_{\rm c}^{(2)}\right]^{2}\) and \(A_{0}-\left[B_{\rm c}^{(2)}\right]^{2}\). In particular, the \(\tilde{U}(1)-\left[\mathbb{Z}_{N}^{(1)}\right]^{2}\) mixed anomaly (3.19) and its UV-IR mismatch occur both for even and odd \(N\) (of the \(SU(N)\) color group). Therefore the statement in the \(X\)-ray model is somewhat stronger than in the \(\psi\eta\) model.22 As for the \(U(1)_{0}-\left[\mathbb{Z}_{N}^{(1)}\right]^{2}\) anomaly, (3.21), \(U(1)_{0}\) is spontaneously broken by the scalar VEV; therefore only the variations \(\mathbb{Z}_{2}\subset U(1)_{0}\) can be used in the UV-IR anomaly-matching algorithm. For \(N\) even, the anomaly found here reduces to the \(\mathbb{Z}_{2}\) anomaly found in [7]. Footnote 22: The argument based on the strong anomaly [10], which also favors the color-flavor locked dynamical Higgs phase, is equally valid for both even and odd \(N\). ## Acknowledgment This work is supported by the INFN special initiative grant "GAST" (Gauge and String Theories).
2308.14503
Time calibration studies for the Timepix3 hybrid pixel detector in electron microscopy
Direct electron detection is currently revolutionizing many fields of electron microscopy due to its lower noise, its reduced point-spread function, and its increased quantum efficiency. More specifically to this work, Timepix3 is a hybrid-pixel direct electron detector capable of outputting temporal information of individual hits in its pixel array. Its architecture results in a data-driven detector, also called event-based, in which individual hits trigger the data off the chip for readout as fast as possible. The presence of a pixel threshold value results in an almost readout-noise-free detector while also defining the hit time of arrival and the time the signal stays over the pixel threshold. In this work, we have performed various experiments to calibrate and correct the Timepix3 temporal information, specifically in the context of electron microscopy. These include the energy calibration and the time-walk and pixel-delay corrections, reaching an average temporal resolution throughout the entire pixel matrix of $1.37 \pm 0.04$ ns. Additionally, we have also studied cosmic-ray tracks to characterize the charge dynamics in the volume of the sensor layer, allowing us to estimate the limits of the detector's temporal response depending on the bias voltage, the sensor thickness, and the electron-beam ionization volume. We have estimated the uncertainty due to the ionization volume ranging from about 0.8 ns for 60 keV electrons to 8.8 ns for 300 keV electrons.
Yves Auad, Jassem Baaboura, Jean-Denis Blazit, Marcel Tencé, Odile Stéphan, Mathieu Kociak, Luiz H. G. Tizei
2023-08-28T11:23:27Z
http://arxiv.org/abs/2308.14503v1
# Time calibration studies for the Timepix3 hybrid pixel detector in electron microscopy ###### Abstract Direct electron detection is currently revolutionizing many fields of electron microscopy due to its lower noise, its reduced point-spread function, and its increased quantum efficiency. More specifically to this work, Timepix3 is a hybrid-pixel direct electron detector capable of outputting temporal information of individual hits in its pixel array. Its architecture results in a data-driven detector, also called event-based, in which individual hits trigger the data off the chip for readout as fast as possible. The presence of a pixel threshold value results in an almost readout-noise-free detector while also defining the hit time of arrival and the time the signal stays over the pixel threshold. In this work, we have performed various experiments to calibrate and correct the Timepix3 temporal information, specifically in the context of electron microscopy. These include the energy calibration and the time-walk and pixel-delay corrections, reaching an average temporal resolution throughout the entire pixel matrix of \(1.37\pm 0.04\) ns. Additionally, we have also studied cosmic-ray tracks to characterize the charge dynamics in the volume of the sensor layer, allowing us to estimate the limits of the detector's temporal response depending on the bias voltage, the sensor thickness, and the electron-beam ionization volume. We have estimated the uncertainty due to the ionization volume ranging from about 0.8 ns for 60 keV electrons to 8.8 ns for 300 keV electrons. electron microscope; electron energy-loss spectroscopy; event-based; hybrid pixel direct detector; timepix3; temporal resolution ## I Introduction In recent years, scanning transmission electron microscopy (STEM) has been profoundly transformed by improvements in multiple technologies, such as aberration correction and electron monochromators. Electron detection followed this revolution, mostly through the advent of direct electron detectors, which provide a reduced point-spread function and an increased quantum efficiency relative to their predecessors that used a scintillator layer. Today, the superiority of direct electron detectors is indisputable, confirmed by the extensive and fast-growing number of results concerning imaging [1], 4D STEM [2], and electron energy-loss spectroscopy (EELS) [3, 4, 5]. One kind of direct electron detector is the so-called hybrid pixel detector, so named because the semiconductor sensor layer and the application-specific integrated circuit (ASIC) are independently manufactured [6]. Of particular concern for this paper, the Timepix3 (TPX3) is an event-based detector, capable of outputting temporal and positional information of individual electron hits. Each pixel possesses its individual electronics, comprising analog and digital processing circuitry [7]. A threshold value defines the minimal input signal intensity a pulse must have to be considered a pixel hit, and it can be set, pixel-by-pixel, in the analog processing part of the pixel electronics, allowing a virtually complete suppression of the readout noise of the detector. The temporal information of the pixel hit is given by the instant the analog signal surpasses the pixel threshold value, called the time of arrival (ToA), and the time duration the analog signal stays over the pixel threshold value, called the time over threshold (ToT).
The ToA and ToT are products of the digital processing part, and are latched onto the clock distributed in the pixel array, as can be seen in Figure 1. While the ToT reaches a time bin of 25 ns from the 40 MHz clock frequency, the ToA is further refined by a 640 MHz voltage-controlled oscillator, thus reaching a 1.5625 ns time bin. Additionally, panel 1B exemplifies how the ToA value obtained is later than the actual charge arrival time, which can be properly corrected with the combined knowledge of both ToA and ToT, as discussed later. These properties have recently enabled readout-free, live-processing EELS data reconstruction at the speed of typical imaging detectors (\(\sim 40\) ns per pixel in our case) by synchronizing the scanning unit and the TPX3 clocks [5]. Such technology makes nanosecond-resolved temporal resolution in EELS possible, but can also provide a robust solution for sensitive samples, for which custom scan patterns have been suggested to help [8, 9]. Additionally, Timepix3 has also enabled the so-called cathodoluminescence excitation spectroscopy (CLE), in which the temporal correlation of electron and infrared/visible/ultraviolet photon pairs can circumvent the absence of resonant experiments with fast electrons due to their broadband excitation spectra [10, 11]. These techniques can be combined, providing hyperspectral imaging of correlated electrons and thus the spatial information of the excitation pathways [10]. Although coincidence experiments can also be performed with X-ray photons, X-ray detectors have a poor temporal response, typically two orders of magnitude larger than the minimal bin of Timepix3. For visible-range photons, as in CLE, on the contrary, photon counting with photomultiplier tubes can reach sub-nanosecond temporal resolution, which gives access to the dynamics of the process within the range of the TPX3 time bin [12]. Pushing the temporal resolution of TPX3 can also be interesting for performing electron energy-gain spectroscopy (EEGS) [13] in continuous-gun electron microscopes, where typical approaches rely on electrostatic beam blankers and, in some cases, high voltages are needed, complicating the design of high-repetition-rate switching circuits [14; 15]. With a properly calibrated TPX3, repetition rates of tens of MHz should be possible, and energy-gain experiments can be performed very similarly to CLE, with the distinction that the pairs are between the injected photons and the inelastically scattered electrons. Unfortunately, approaching the nominal TPX3 temporal resolution uniformly throughout the entire pixel matrix is not straightforward [16; 17; 18; 19; 20]. It requires a good understanding of both parts of HPDs. The fast electrons impinging on the silicon sensor create electron-hole pairs that drift towards the opposite side of the layer due to an applied bias. For fast electrons typically within the 30 - 200 keV energy range, these charges are often collected by distinct pixels, thus creating clusters: multiple hits originating from the same incident electron. This process can be readily identified during data processing by spatially and temporally comparing pixel hits. The drift time depends on the electric field profile inside the silicon slab, and hence on the bias voltage applied.
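As an illustration of this identification step, the sketch below shows the kind of spatio-temporal grouping just described. It is a minimal Python sketch of ours; the window values, names, and greedy strategy are illustrative assumptions, not the actual Rust implementation of Ref. [22]:

```python
# A minimal sketch (ours, not the Rust implementation of Ref. [22]) of the kind
# of cluster identification described above: hits are grouped when they are
# close both in space and in time. The window values here are illustrative only.
from dataclasses import dataclass

@dataclass
class Hit:
    x: int          # pixel column
    y: int          # pixel row
    toa_ns: float   # hit time of arrival
    e_kev: float    # deposited energy, after the energy calibration

SPACE_WIN = 2       # pixels (assumed window)
TIME_WIN_NS = 100   # ns (assumed window)

def clusterize(hits):
    """Greedy clustering of time-sorted hits by spatio-temporal proximity."""
    clusters = []
    for h in sorted(hits, key=lambda h: h.toa_ns):
        for c in clusters:
            last = c[-1]
            if (abs(h.toa_ns - last.toa_ns) <= TIME_WIN_NS
                    and abs(h.x - last.x) <= SPACE_WIN
                    and abs(h.y - last.y) <= SPACE_WIN):
                c.append(h)  # same incident electron, shared charge
                break
        else:
            clusters.append([h])  # a new incident electron
    return clusters
```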
Additionally, the charges are created in a spatial profile that depends on the electron energy (the so-called "ionization pear", or ionization volume, model), which consequently can result in slightly different charge-collection intervals. Upon arrival at the individual pixel readout electronics, in the ASIC part of the detector, the digital conversion of this time of arrival can reach the aforementioned nominal value of 1.5625 ns. Understanding all these steps, from the impinging electron to the digital conversion of the charge time of arrival, is important to reach the detector's best possible temporal response. In particular, one of the major time-calibration steps is the correction of the time-walk effect, a consequence of the roughly constant rise time \(\tau_{rise}\) of the analog part of the ASIC circuit, which produces a temporal shift \(\Delta T\) between the latched ToA and the actual charge arrival time in the ASIC, as illustrated in Figure 1B. Comparing the similar right triangles with heights \(Th\) (pixel threshold) and \(E\) (pixel deposited energy), the expected time interval is \(\Delta T=\tau_{rise}Th/E\). In a more generalized way, time-walk can be modeled with the following equation: \[\Delta T(x,y,E)=\frac{a(x,y)}{E-b(x,y)}+c(x,y) \tag{1}\] where \(a,b,c\) are constants that must be determined, and \(x\) and \(y\) are the pixel coordinates. Figure 1: **General schematic of the detector and the different measured quantities.** (**A**) The TPX3 HPD consists of a sensor layer and an ASIC. In an electron microscope, charges are created at one side of the detector and move towards the ASIC side under an applied bias (V). Different ionizing particles produce distinct mechanisms of charge creation. As an example, a light energetic particle, such as a muon, traverses the detector and creates charges throughout the sensor-layer thickness. The charge collected at the ASIC is detected by an analog circuit depending on the pixel threshold (Th), and is timestamped by the digital part of the circuit, in particular the ToA and ToT, using the clock (Clk) signal distributed through the pixels. (**B**) Schematic of the ToA and the ToT. The roughly constant rise time \(\tau_{rise}\) of the pulse in the analog circuit produces a ToA value dependent on the signal intensity (or, equivalently, on the ToT or the total energy (E) deposited by the hit). This effect, known as time-walk, produces major discrepancies between the recorded ToA and the actual electron arrival time. Finally, the clock distribution network along the pixel array is imperfect and not instantaneous; thus, spatially dependent relative pixel delays can also occur. The temporal resolution of the detector, roughly speaking, is thus obtained by propagating the uncertainty associated with the time-walk correction together with the uncertainty related to the time-delay estimate. For the former, the contribution is multi-factorial: it depends on the discriminator jitter (a temporal uncertainty linked to when the signal crosses the established threshold), on the bin size of the fine ToA, and, particularly for our study, on the uncertainty related to the ionization volume of the electron beam. To the best of our knowledge, there are no complete TPX3 temporal calibration studies in the context of electron microscopy, and all the aforementioned time-calibration works were performed with X-ray photons or highly energetic particles (\(>\) MeV).
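Before turning to the measurements, it may help to get a feel for the magnitude of the effect modeled by equation (1). The short sketch below compares the simple triangle estimate \(\Delta T=\tau_{rise}Th/E\) with the generalized model; all numbers in it are illustrative assumptions, not our measured calibration constants:

```python
# Illustrative evaluation (assumed numbers, not measured constants) of the
# triangle estimate dT = tau_rise * Th / E against the model of equation (1).
import numpy as np

tau_rise_ns, th_kev = 100.0, 4.0   # assumed rise time and pixel threshold
a, b, c = 400.0, 0.5, 0.0          # assumed per-pixel constants of eq. (1)

E = np.linspace(5.0, 60.0, 12)     # deposited energy in keV
dt_triangle = tau_rise_ns * th_kev / E
dt_model = a / (E - b) + c         # equation (1)

# both fall off hyperbolically: small deposited energies arrive "late"
for e, t1, t2 in zip(E, dt_triangle, dt_model):
    print(f'E = {e:5.1f} keV: triangle {t1:6.2f} ns, model {t2:6.2f} ns')
```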
In this work, we present a methodological study of the impact of the temporal calibration of the Timepix3 detector for electron microscopy, using fast electrons (20 - 100 keV) as the source of the charge creation in the sensor layer. Moreover, we stick with calibration procedures that primarily rely on data/cluster analyses from an electron-beam illumination dataset, without using more intricate methods such as test-pulsing calibration [19], which, although very precise, requires more hardware manipulation. We begin by analyzing a method for the energy calibration of the detector, i.e., the relation between the ToT and the deposited energy. Next, the time-walk is corrected using a flat-field electron illumination dataset. Finally, electron-photon pairs are used to compensate for the non-uniform clock distribution network and also to verify the calibration after the aforementioned steps. In the conclusion, we attempt to estimate the ultimate temporal resolution for the Timepix family of detectors in electron microscopy by analyzing cosmic-ray tracks of light, energetic particles and relating them to the ionization volume of fast electrons. A scheme of the complete experimental setup is shown in Figure 2. A finely focused electron beam of energy between 20 keV and 100 keV is transmitted through the sample and reaches a magnetic sector, which disperses the electron beam in energy onto the Timepix3 detector, a technique called electron energy-loss spectroscopy (EELS). When the electron crosses the sample, light may be emitted, a process known as cathodoluminescence (CL), and the photons can be guided either to one single-photon-counting photomultiplier tube (PMT) or to a beamsplitter and thus to two PMTs, with which Hanbury Brown-Twiss (HBT) interferometry can be performed. With HBT, photon bunching processes [21] can be used to extract the optical excitation's lifetime with a better temporal resolution than TPX3, thus providing a benchmark for the measured value. A home-made multichannel time-to-digital converter (TDC) with a temporal bin of 120 ps is interfaced between the Timepix3 and the two PMTs, allowing all these sources of events to be compared temporally. For this work, we have used the Timepix3 commercialized by Amsterdam Scientific Instruments (ASI), called CheeTah, an array of 4 chips arranged as 1024 x 256 pixels. The detector is mounted on a Vacuum Generators HB501, a dedicated STEM microscope with a cold field-emission gun and a typical spectral resolution of 300 meV in EELS. The CheeTah solution also has 2 TDC inputs capable of reaching a time bin of 260 ps. Note that for some calibration steps no sample is needed, and sometimes a single PMT may be used directly with the TPX3 TDC input. In any case, the exact experimental conditions for each step are detailed in the text. Unless otherwise stated, curves are fitted with Gaussians, and the temporal resolution referred to in this work is used as a synonym for the standard deviation of the fitting result. Finally, we have used our own software for cluster identification and raw-data processing. The MIT-licensed open-source software is entirely coded in the Rust programming language and can also be used for live data processing in several acquisition modes [22]. ## Results & Discussion ### Energy calibration To perform the energy calibration, the electron beam, in vacuum, is uniformly spread throughout the entire detector for electron energies of 20 keV, 60 keV, 80 keV, and 100 keV.
The cluster identification algorithm is used to sort hits with a unity cluster size, roughly assuring that the deposited energy has not been shared with nearby pixels, and thus allowing the correspondence, pixel-by-pixel, between the deposited energy and the ToT. This is exemplified by Cluster 1 in Figure 3A. The histogram of these hits in the pixel array matrix is fitted by Gaussians, and the average ToT per pixel per electron energy is extracted. Figure 3B shows the result of these means for three different pixels. In the 20 - 100 keV energy range, the relationship between the ToT and the electron energy is linear, and a linear fit is used to extract the angular and the offset component per pixel (additional information can be found in the supplementary material, SM). This energy-ToT relationship gives us a glimpse of the data processing of the ASIC, and ultimately allows us to make a better correspondence between the received digitized ToT and the expected signal amplitude received at the analog input. For even smaller deposited energies, this relationship is no longer linear [19], and the deposited energy approaches the pixel threshold value as the ToT approaches zero. Figure 2: **Schematic of the experimental set-up**. The fast electron (20 - 100 keV) is transmitted through the sample and the electron beam is sent to TPX3 after being deflected by an electron spectrometer (magnetic sector). The emitted light is collected by a parabolic reflector coupled to an optical fiber. Light can be sent to one or two single-photon-counting photomultiplier tubes. A multichannel time-to-digital converter unit allows us to temporally correlate electron and photon events. SPC-PMT: single-photon-counting photomultiplier tube. BS: beamsplitter. TDC: time-to-digital converter. ### Time-walk & pixel delay calibration To correct the time-walk, a uniformly illuminated detector dataset is used once more. The electron energy is fixed at 60 keV, which provides a good compromise between a sufficiently low electron energy, to reduce the ionization volume, and a sufficiently high energy, to produce clusters of 1 to 6 hits. To have a controlled dataset, clusters are then post-selected: they must have one pixel hit with exactly \(E_{ref}=30\) keV of deposited energy, and the cluster size must be 3 or 4 hits, as shown by Cluster 2 in Figure 3A. The 30 keV hit works as a reference time-of-arrival value (ToA\({}_{ref}\)) [17]. The other 2 or 3 pixels are used to create a histogram of the electron energy as a function of the time shift
between their own time of arrival and the reference value (\(\mathrm{ToA_{2,3,4}-ToA_{ref}}\)), as illustrated in Figure 3C for an entire chip array (256 x 256 pixels), promptly exposing the time-walk effect, i.e., the large time differences for low-energy electrons. The 2D histogram of Figure 3C is fitted with a Gaussian for every deposited energy; the values of the standard deviation (\(\sigma_{tw}\)) can be seen in Figure 3D, reaching a constant value of approximately 0.83 ns. The center of the Gaussians (\(t_{0,tw}\)) is shown in the inset, demonstrating a hyperbolic relationship. To calibrate our data, we have followed a similar procedure, but the fitting was also performed pixel-by-pixel, and we have used equation 1 over the interval \(5\leq E<30\) keV to extract the a, b, and c coefficients, which are later used to correct the raw data. Note that although the time shift can be easily corrected, the standard deviation represents an intrinsic uncertainty of the instrument under these experimental and data-processing conditions. There are already several hints that a relatively good time response can be obtained with TPX3 with little to no calibration effort. Figure 3C shows the average result for an entire chip array and, even without a pixel-by-pixel calibration, the deviation of the time-walk reaches \(\sim 0.83\) ns, almost half the fine ToA sampling, for \(E>15\) keV. As mentioned, the effects of time-walk are strongly mitigated by a large charge deposition. Increasing the microscope acceleration voltage is not directly a good option, considering that the number of hits per cluster will increase, with significant charge sharing between them; additionally, and relatedly, the ionization volume will grow accordingly, increasing the uncertainty of the charge-creation position. Reducing the TPX3 threshold is, on the contrary, a better option. For a given signal amplitude, the time-walk effect will be reduced as the threshold approaches zero, as seen in Figure 1B. By post-selecting hits with high deposited energies, e.g. higher than 15 keV, the standard deviation is smaller than the ToA sampling and the displacement of the Gaussian center is less pronounced, as can be seen in the inset of Figure 3D. Figure 3: **Energy, time-walk and delay calibration results.** **(A)** Energy calibration is done by analyzing clusters with a single hit for multiple incident electron energies (\(E_{e}\)) ranging from 20 keV to 100 keV, as shown by Cluster 1. For time-walk calibration, exemplified by Cluster 2, clusters of 3 and 4 hits necessarily containing an electron hit with energy \(E_{ref}=30\) keV are used, for an incident electron energy of 60 keV. **(B)** The relation between ToT and the hit deposited energy for three distinct pixels. Although the relationship is linear for all three pixels, the angular and the linear coefficients differ. **(C)** The time-walk effect integrated over an entire chip array (256 x 256). The hit arrival time at 30 keV is defined as 0, and the relative time is plotted as a function of the deposited energy. For each energy, a Gaussian fit is used to extract the central time (\(t_{0,tw}\)) and the standard deviation (\(\sigma_{tw}\)). **(D)** The values of \(t_{0,tw}\) and \(\sigma_{tw}\) extracted from the fits in (C). For deposited energies above 15 keV, the fitted standard deviation is approximately \(\sim 0.83\) ns, or roughly half a time bin of 1.5625 ns. **(E)** The time-delay (\(t_{0,delay}\)) calibration as a function of the detector pixel array, measured by performing temporal coincidences between electrons and photons. The time-walk calibration discussed above is a relative method, as it only uses ToA values from nearby pixels in the procedure, which leaves the pixel-by-pixel propagation delays of the clock distribution network unaccounted for. To calibrate these, a common reference signal must be used, allowing an indirect comparison of the propagation delays. In our case, we have used UV photon-electron correlations, performing CLE experiments on a hexagonal boron nitride (_h_-BN) sample. The photons are sent to a single-photon-counting PMT, coincidence histograms are plotted as a function of the pixel-matrix coordinate, and the center position (\(t_{0,delay}\)) of a Gaussian fit provides the propagation-delay values. The obtained delay calibration array is shown in Figure 3E. Further details are present in the SM of this work.
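In code, the per-pixel time-walk extraction just described amounts to fitting equation (1) to the measured \((E,\;\mathrm{ToA}-\mathrm{ToA_{ref}})\) pairs of each pixel and subtracting the fitted shift from the raw data. A minimal sketch of ours follows; the scipy-based fitting and all variable names are assumptions, the actual pipeline being the Rust software of Ref. [22]:

```python
# A minimal sketch of the per-pixel time-walk fit of equation (1) and of the
# resulting correction. The scipy-based fitting and the variable names are our
# assumptions; the actual processing pipeline is the Rust software of Ref. [22].
import numpy as np
from scipy.optimize import curve_fit

def time_walk(E, a, b, c):
    """Equation (1): expected shift (ns) of the latched ToA at energy E (keV)."""
    return a / (E - b) + c

def fit_pixel(E_kev, dtoa_ns):
    """Fit (a, b, c) from one pixel's (E, ToA - ToA_ref) numpy arrays,
    restricted to 5 <= E < 30 keV as described in the text."""
    sel = (E_kev >= 5.0) & (E_kev < 30.0)
    p0 = (100.0, 0.0, 0.0)  # rough starting values (assumed)
    popt, _ = curve_fit(time_walk, E_kev[sel], dtoa_ns[sel], p0=p0)
    return popt

def correct_toa(toa_ns, E_kev, abc):
    """Subtract the fitted time-walk shift, aligning hits to the 30 keV reference."""
    return toa_ns - time_walk(E_kev, *abc)
```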
### Impact of the calibration using electron-photon temporal correlations Figure 4A shows experimental results for the time delay between a photon and an electron as a function of the deposited energy, after the time-walk and delay calibrations, averaged over all the pixels. Measurements were taken on an _h_-BN flake in a region of approximately 125 x 125 nm\({}^{2}\), highlighted by the white rectangle in the annular dark-field image of the sample shown in Figure 4B. As we are interested in the temporal resolution averaged throughout the entire pixel matrix, the electron beam has been rastered over the Timepix3 detector in order to increase the pixel occupancy. Post-selecting high-energy hits (\(E\geq 30\) keV) after the time-walk and delay calibration produces the detector's best possible time response, showing a standard deviation of \(\sigma_{res}=1.37\pm 0.04\) ns, smaller than the bin width of the electron ToA fine timestamping. From Figure 4A, we can see that there are more hits with low deposited energy (\(E<15\) keV). However, they are usually associated with a high-energy hit within the same cluster, meaning that the data loss after hit post-selection is not too critical. Finally, it is important to note that the _h_-BN sample's luminescence lifetime is convolved into this result. To discern this contribution, we have performed HBT interferometry on the sample, temporally correlating two photons instead of one electron and one photon. The results are shown in Figure 4D, and the decay lifetime has been determined as \(\tau=0.8\pm 0.1\) ns. As recently demonstrated [12], the _h_-BN lifetime can be seen in our electron-photon correlations by fitting exponential decays on both sides of the time-delay curves, and further discussions can be found in the SM. ### Cosmic-ray tracks Multiple ionizing particles can hit the TPX3 detector during data acquisition. These produce a variety of shapes and sizes, as can be seen in Figure 5A. Large blobs are typically associated with heavy, short-range ionizing particles, such as \(\alpha\) particles. When these heavy tracks are elongated, they are typically associated with protons or atomic nuclei [23; 24]. More interesting for this work are highly energetic (\(\sim\) GeV) light particles such as muons. As these particles entirely cross the sensor layer, it is possible to identify their precise path from the initial and final pixel positions and the detector thickness (300 \(\mu\)m in our case), as illustrated in Figure 1A. The charge-collection dynamics can thus be studied through the obtained ToA values \(t_{track}\), in which the first detected charge is taken as a reference value \(t_{0,track}\). Two of these tracks, with cluster sizes greater than 150 hits, are shown in Figure 5B for detector bias voltages of 140 V and 50 V. Changing from 140 V to 50 V bias, charges created at the surface of the sensor layer arrive 21.20 ns later, as determined by fitting a charge drift model [18] to the experimental data. Additionally, this model can be confronted with photon-electron correlation measurements. Between the two measured biases, the time delay of the photon-electron correlation increases by roughly 20.12 ns for the 50 V bias, as shown in Figure 6.
The value is slightly smaller than the 21.20 ns measured from the muon track, and the observed difference is presumably due to the fact that charges created by fast electrons are not induced precisely at the surface of the sensor layer but rather a few microns inside the bulk Si material.

Figure 4: **The delay calibration and the impact of the described correction methods assessed by electron-photon coincidences**. (**A**) The 2D histogram of the time delay between electron-photon pairs as a function of the energy deposited in the pixel hit after the time-walk and the pixel delay calibration. (**B**) The high-angle annular dark-field image of the used _h_-BN flake. Data has been averaged by scanning in the white rectangle area of approximately 125 x 125 nm\({}^{2}\). (**C**) The time distributions for the non-corrected data, with a long tail towards the positive time delay direction. After the calibration and post-selection of hits with \(>30\) keV deposited energy, the curve approaches a Gaussian distribution with a fitted sigma of \(1.37\pm 0.04\) ns. (**D**) The photon-photon correlation curve obtained by HBT interferometry. The curve is fitted by an exponential decay symmetric with respect to zero time delay. The value obtained from the fit is \(\tau=0.8\pm 0.1\) ns.

Figure 5: **Cosmic-ray tracks can provide further insights into the charge dynamics of Timepix3**. (**A**) Example of different ionizing particles that can hit the TPX3 detector. Data is filtered by post-selecting clusters with more than 80 hits. Muon tracks are thin and can cross the 300 \(\mu m\) sensor thickness. The start- and end-point pixel values allow us to determine the trajectory inside the sensor layer. (**B**) Muon tracks analyzed for two different detector biases. A larger bias, and hence a larger electric field, permits the charges to be collected faster. The fitting uses a simple charge drift model [18].

To support this, we have performed experiments between 60 keV and 100 keV acceleration voltages. For both biases, the slower electron arrives later, as expected, since slower electrons are absorbed closer to the sensor surface. Equally interesting, the observed standard deviation of the Gaussian fitting of the curves in Figure 6 changes significantly. Due to the reduced ionization volume, lower acceleration voltages produce smaller standard deviations. Analogously, a reduced bias also degrades the temporal resolution due to the more pronounced skewness of the charge collection curve (Figure 5B). For 60 keV at 140 V bias, the standard deviation is \(1.37\pm 0.04\) ns. If the acceleration voltage is 100 keV, this value increases to \(1.56\pm 0.04\) ns. For the 50 V bias, these values are \(2.57\pm 0.06\) ns and \(1.61\pm 0.04\) ns for 100 keV and 60 keV, respectively.

Finally, the tools above allow us to estimate the achievable temporal resolution as a function of the electron energy, the sensor thickness, and the applied voltage, provided that the temporal calibration is correctly performed. For this, we have used the Monte Carlo simulation software CASINO [25] to study the spatial distribution of the deposited energy when fast electrons hit a silicon slab. The reference values were extracted from the silicon slab depth at which the cathodoluminescence probability is maximum.
These values are roughly 11 \(\mu\)m, 24 \(\mu\)m, 74 \(\mu\)m, and 140 \(\mu\)m for 60 keV, 100 keV, 200 keV, and 300 keV respectively, and the corresponding uncertainties are 0.8 ns, 1.7 ns, 5.0 ns, and 8.8 ns, all considering a 140 V detector bias voltage. Admittedly, this estimate is very simplistic, and further analysis must be performed to retrieve more accurate values. For this, better-suited Monte Carlo toolkits must be used, such as Geant4 [26], actively developed by CERN for particle-matter interaction simulation and detector development, for which recent frameworks provide a complete simulation of hybrid-pixel detectors, including the charge transport dynamics, the pre-amplifier response, and the expected values of ToA and ToT [27].

## IV Conclusions & Perspectives

In this work, we have applied well-known tools, and developed new ones, for the time calibration of the Timepix3 HPD in the context of electron microscopy. In particular, we have accounted for the energy calibration, the time-walk effect, and the time delay across the pixel array matrix. Additionally, we have shown how photon-electron coincidence events can aid the calibration and also verify the impact of the previous steps on the final processed data. Further, we have used highly energetic cosmic-ray tracks to experimentally unveil the charge deposition mechanism under different sensor biasing voltages. The obtained values were compared with the photon-electron coincidence experiments, showing remarkable agreement. With these experiments, we were able to show that more energetic electrons produce charges deeper in the sensor layer, and hence with reduced drift times, but also degrade the maximum attainable temporal resolution, probably due to the increased ionization volume. Unfortunately, the microscope used has a maximum acceleration voltage of 100 keV, precluding further investigation, and more systematic studies must be performed to confirm whether the ionization-volume contribution can indeed not be corrected for. Finally, we have deduced from the experiments mentioned above that the uncertainty related to the ionization volume is approximately 0.8 ns for 60 keV electrons, while this value increases to 1.7 ns for 100 keV electrons, to 5.0 ns for 200 keV, and to 8.8 ns for 300 keV electrons.

Although we have presented a relatively easy calibration method, it is far from ideal. Because our time-walk calibration depends on a uniform electron illumination, the obtained time intervals have contributions from both the charge dynamics in the sensor layer and the digital time conversion provided by the ASIC. A better way of calibrating is to rely on test pulsing [7; 19], which depends solely on the ASIC, and then afterward perform what has been described in our work to account for and assess the residual charge-dynamics contributions. Additionally, the time delay calibration procedure here requires a very large data acquisition and is prone to uncertainties if the lifetime of the material is comparable to the expected maximum attainable temporal resolution, which in our work has been measured by HBT interferometry [21].

Figure 6: **Electron-photon temporal correlation as a function of the electron energy and the Timepix3 voltage bias.** Photon-electron coincidences performed at 60 keV and 100 keV acceleration voltages for 50 V and 140 V detector bias. Electrons at 100 keV reach the ASIC earlier but have a worsened temporal resolution. Reducing the detector's bias delays the charge arrival time and degrades the detector's temporal resolution.
An interesting solution for the time delay calibration is using ultrafast electron microscopes, in which electron pulses with sub-picosecond temporal resolution are routinely achieved [28, 29]. A final but significant source of uncertainty is our energy calibration measurement, in which the low energy region (\(<20\) keV) has been treated as linear although this is not strictly correct.

Timepix4, the successor of Timepix3, capable of achieving sub-200 ps time binning, is already under test [30, 31, 32]. It will be operable in event-based or frame-based mode. In the latter, the 16-bit counter will provide the dynamic range needed for it to potentially establish itself as a standard electron detector in many electron microscopes. Although the expected \(\sim 100\) ps temporal resolution may not be directly achievable in electron microscopy, for reasons already discussed in this work, the much higher expected data flux will enable a plethora of experiments without concern about electron beam saturation [31]. The theoretical total readout bandwidth of such a detector can reach as high as 160 Gbps, which will certainly trigger not only new ways of storing the data but also different ways of interfacing such detectors with the electron microscope.

## Acknowledgements

The present project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 823717 (ESTEEM3) and 101017720 (EBEAM). Amsterdam Scientific Instruments (ASI) is acknowledged for many fruitful technical discussions.
2308.04710
Compact Petawatt-Class Laser Wakefield Acceleration with Plasma Telescope
The compactness of laser wakefield acceleration (LWFA) is limited by its long focal length for high power lasers, e.g., more than 10 meters for a 1-petawatt (PW) laser pulse and up to hundreds of meters for 10-100 PW lasers. The long focal length originates from the low damage threshold of the optical off-axial parabolic (OAP) mirror and the consequent large spot size. We propose implementing an OAP plasma mirror (PM) to form a telescope geometry, reducing the beam size and hence constraining the focal length to the meter range for LWFA driven by lasers beyond 1PW. Three-dimensional particle-in-cell simulations are performed to characterize the reflection of a 1-PW laser by the plasma OAP, and we find that the optimal condition is achieved within only a 1-m optical length. The new method successfully generates a 9GeV electron bunch in the subsequent LWFA stage, with acceleration gradients consistent with those of the 1-PW laser under ordinary focusing. The proposed geometry provides a solution for compact LWFAs available even for 100-PW laser systems.
Xuesong Geng, Liangliang Ji, Baifei Shen
2023-08-09T05:04:21Z
http://arxiv.org/abs/2308.04710v2
# Compact Petawatt-Class Laser Wakefield Acceleration with Plasma Telescope

###### Abstract

The compactness of laser wakefield acceleration (LWFA) is limited by its long focal length for high power lasers, e.g., more than 10 meters for a 1-petawatt (PW) laser pulse and up to hundreds of meters for 10-100 PW lasers. The long focal length originates from the low damage threshold of the optical off-axial parabolic (OAP) mirror and the consequent large spot size. We propose implementing an OAP plasma mirror (PM) to form a telescope geometry, reducing the beam size and hence constraining the focal length to the meter range for LWFA driven by lasers beyond 1PW. Three-dimensional particle-in-cell simulations are performed to characterize the reflection of a 1-PW laser by the plasma OAP, and we find that the optimal condition is achieved within only a 1-m optical length. The new method successfully generates a 9GeV electron bunch in the subsequent LWFA stage, with acceleration gradients consistent with those of the 1-PW laser under ordinary focusing. The proposed geometry provides a solution for compact LWFAs available even for 100-PW laser systems.

Introduction

Laser wakefield acceleration (LWFA) [1] is able to generate high-energy electrons within a short distance, promising the ability to build compact particle accelerators. The state-of-the-art laser wakefield accelerator is able to accelerate electrons to the order of 10GeV within tens of centimeters using an 850TW laser [2]. Recent 10-100PW laser systems worldwide [3] have enabled LWFA with higher laser powers. In general, higher laser power permits a longer accelerating distance and higher electron energy, i.e. \(\Delta E_{k}\sim P^{1/3}\)[4], where \(\Delta E_{k}\) is the energy gain and \(P\) the laser power. The 10-100PW laser facilities will raise the energy limit to the level of 100GeV within a single stage, as shown in Fig. 1(a), which can be boosted to the TeV level with the help of multistage LWFA [5]. However, LWFA in the blow-out regime [6; 7] requires the laser field strength to be only slightly above the relativistic threshold, so the spot size will be on the order of 100\(\mu\)m for 10-100PW laser pulses; this requires focusing mirrors of large f-numbers, resulting in focal lengths of several hundred meters, as shown by the dashed line in Fig. 1(b) and 1(c), where the laser spot is assumed to be focused to a fixed field strength above the relativistic threshold. The extremely long focal length hinders the application of high-power laser systems to multi-stage LWFA, since the actual size of the LWFA system is set not only by the acceleration length but also by the size of the whole optical system. It is therefore crucial to shorten the optical length for LWFA if the system is to remain compact and capable of multi-stage acceleration [8; 9]. Due to the damage threshold of the amplification media of the laser system, the diameter of the laser spot before the final off-axial parabolic (OAP) mirror is large for high-power lasers, prohibiting the reduction of the focal length. For a 1PW laser, the optimized acceleration length is about 20cm, while the focal length is on the order of 10m, as shown in Fig. 1(b). Plasma, as a medium free of a damage threshold, provides a promising approach to manipulating high-power lasers.
Plasma optics provides multiple tools to control intense laser pulses, e.g., reflection of laser pulses of ultra-high intensities [10; 11] and improvement of the beam contrast for laser-solid interactions [10] by plasma mirrors (PM), compression of intense laser pulses by plasma gratings [12; 13; 14], and plasma lenses for high power lasers [15; 16]. Here, we propose using a plasma telescope to transform a tightly focused pulse into a quasi-plane-wave beam with a spot size suitable for LWFA. By using a telescope system, the size of the focusing system can be reduced to a few meters even for 100PW lasers, which is one order of magnitude smaller than the ordinary focusing system. As shown in Fig. 1(d), the plasma telescope is basically composed of an ordinary OAP mirror of small f-number and an OAP plasma mirror that reflects the intense laser focused by the first OAP. Curved PMs have been experimentally employed to focus intense lasers to higher intensities [17; 18; 19; 20; 21; 22], demonstrating the practicality of focusing with curved PMs. By using the telescope system, the jittering of the laser spot on the target, e.g., the plasma channel for LWFA, can be stabilized better than in the hundreds-of-meter-long ordinary focusing system, since the distance between the channel and the OAP-PM is much shorter.

In the following, we carry out numerical validation of the reflection of a 1PW laser pulse by an OAP-PM via 3-dimensional (3D) particle-in-cell (PIC) simulations. The laser wavefront, reflection efficiency, presence of preplasma and high-order harmonics are investigated in the 3D simulations. The reflected pulse is then utilized as the driving pulse of the subsequent LWFA stage to qualify its suitability, which is simulated in quasi-cylindrical coordinates. The laser pulse reflected by the plasma telescope shows a consistent acceleration gradient compared to the pulse focused by an ordinary OAP, with no significant modification induced by the laser-plasma interaction, demonstrating 9GeV electron acceleration with 1PW laser power in a 1m optical length. The scenario can be extended to 10-100PW lasers since the field strengths during reflection and the interaction geometry are almost the same.

Figure 1: (a) Scaling of electron energy gain in LWFA driven by PW-class laser systems. The electron energy gain depends on both the laser power and the plasma density. (b) Scaling law of focal lengths of LWFA for the ordinary focusing system and the proposed plasma telescope. (c) PW-class laser focused by an OAP mirror with long focal length. (d) PW-class laser focused by a plasma telescope composed of an ordinary OAP mirror with short focal length and a plasma OAP mirror behind the focal spot. It compresses the size of the focusing system of PW-class lasers from 10-100 meters to a few meters.

## II Reflection by OAP-PM

In the following simulations, we assume the field strength of the circularly polarized driving pulse at the LWFA stage is about \(a_{0}=4\), i.e. \(a_{0}\approx 2.83\) in the y and z directions for a laser propagating along x, with a peak power of 1PW, where \(a_{0}=eE_{0}/mc\omega\) is the normalized field strength, with \(e\) the electron charge, \(E_{0}\) the peak electric field amplitude, \(m\) the electron mass, \(c\) the speed of light and \(\omega\) the angular frequency of the laser.
In order to achieve short focal lengths, the f-number of the first ordinary OAP used to focus the incident laser is fixed to \(f_{\#}\approx 6\), which generates a laser spot of \(w_{0}=4\mu\)m at the first focus for 800nm lasers, as shown in Fig. 2(a). The f-number of the first OAP mirror can be chosen flexibly, with the subsequent OAP-PM adjusted accordingly, as long as the combined plasma telescope transforms the incident laser to the desired spot size. In order to obtain a reflected pulse of \(a_{0}=4\), the OAP-PM is placed behind the laser focus at \(z_{0}\), where the field strength decreases to around \(a_{0}=4\) and the spot size increases to \(w_{0}\approx 43\mu\)m, corresponding to a curvature radius of the wavefront of \(R\approx 680\mu\)m. To transform the focused laser into a nearly-plane wave for the later LWFA, the distance between the laser focus and the OAP-PM satisfies \(z_{0}\approx f\approx 675\mu\)m, where the curvature radius of the OAP-PM is approximately twice that of the incident wavefront, which guarantees the transformation of the curved wavefront to a flat wavefront. The chosen off-axial angle is \(\theta=0.15\) rad for reasons of computational efficiency, as discussed in the Methods section. As a result, the f-number of the OAP-PM is about \(f_{\#}\approx 10.6\).

The reflection and propagation of the pulse are shown in Fig. 2(b-d). During the reflection, as shown in Fig. 2(b), the field strength is periodically modulated along the y-direction by the interference between the reflected and incident pulse due to the extra optical path induced by the curved surface. On the other hand, no visible distortion of the PM is observed. After the pulse is reflected and the simulation window starts to move, slight modifications and noise induced by the laser-plasma interaction can be observed behind the pulse in Fig. 2(c), which become absent after a propagation of \(ct=500\mu\)m, as shown in Fig. 2(d). It should be noted that the density distribution and laser pulse are rotated by the off-axial angle of \(\theta=0.15\) so that the reflected pulse propagates along the x-direction.

The reflected pulse and the transverse profile are shown in Fig. 3(a-b) and the modification to the pulse is quantified in Fig. 3(c) in terms of the transverse profile of the electric fields and the transverse phase relative to the pulse center. The relatively flat phase curve indicates that the wavefront is nearly planar. But the lowered field profile (solid red/blue) indicates that part of the pulse energy is lost upon reflection: the reflected pulse retains 83.6% of the incident energy, i.e., 24.0J, the loss being due to absorption and heating of the PM electrons. However, in more realistic situations, the pre-pulse or pedestals of the laser pulse may produce preplasma on the front surface of the PM, which will influence the energy absorption of the PM from the laser pulse. Thus, an exponentially distributed preplasma of \(\exp(-x^{\prime}/l_{\text{pre}})\) is added to the PM surface, where \(x^{\prime}\) is the coordinate perpendicular to the rotated OAP-PM and \(l_{\text{pre}}\) is the scale length of the preplasma [21]. In our modelling, we choose \(l_{\text{pre}}=0.1\mu\)m, which is a typical situation that can be realized by adjusting the laser pre-pulse. We notice that the reflectivity can be boosted to 93.4% in the presence of preplasma of \(l_{\text{pre}}=0.1\mu\)m, but at the expense of a modified pulse profile, as shown in Fig. 3(d-f), where the squeezed transverse profile and convex phase indicate that the laser is more focused than that without preplasma.
This is because the presence of preplasma amplifies the denting of the PM [23], which further shortens the focal length. In our modelling, larger \(l_{\text{pre}}\) will further amplify the denting effect and degrade the reflectivity. In fact, the formation and evolution of the preplasma should be well controlled by adjusting the strength and delay of the pre-pulse whenever a PM is utilized [23].

Figure 2: (a) Geometry of reflection of the incident pulse by the OAP-PM. (b-d) Electric fields (red and blue) and PM density (black) in the xy-plane at \(ct=60\mu\)m, \(ct=100\mu\)m and \(ct=500\mu\)m.

To mitigate the modification, we adjust the focal length of the OAP-PM to \(f=750\mu\)m from the designed \(f=675\mu\)m. The transverse profile and the relative phase are recovered, as shown in Fig. 3(i), where the pulse profile is closer to the expected Gaussian pulse profile (dashed-gray line) and the laser phase is as flat as in Fig. 3(c), indicating that the wavefront is almost planar and is suitable for the LWFA stage. As a result, by increasing the focal length in the presence of preplasma, the OAP-PM is able to reflect the incident laser pulse with high reflectivity and without significant distortion of the wavefront.

When the incident laser is linearly polarized (LP), the reflectivity, however, drops to around 73% for both s- and p-polarizations in our modelling, due to stronger ponderomotive oscillation of the electrons in the LP laser fields than in the CP laser fields. Therefore, focusing of LP PW-class lasers with the proposed plasma telescope comes with an energy loss of about 30%, which becomes a trade-off between long focal lengths and lowered laser energies when reflecting LP lasers.

Figure 3: (a) The reflected pulse at \(ct=500\mu\)m in the xy-plane for \(f=675\mu\)m without preplasma. (b) The Ey field in the yz-plane sliced at the black-dashed lines in (a-c). (c) The profile of Ey at the red-/blue-dashed lines in (d-e) (red-/blue-solid) and the corresponding phases (red-/blue-dashed) relative to the pulse center. The gray-dashed lines are the expected Ey profile without energy loss. (d-f) The results of \(f=675\mu\)m with preplasma. (g-i) The results of \(f=750\mu\)m with preplasma.

During the laser-plasma interaction, high-order harmonics could also be generated via the relativistic oscillating mirror mechanism [24; 25]. The normalized field strengths of the second and third order harmonics are \(a_{0}(2\omega)\approx 0.25\) and \(a_{0}(3\omega)\approx 0.15\) in our modelling, which are negligible compared to the main pulse of \(a_{0}(\omega)\approx 4\) in the context of LWFA. This is because the high-order harmonic efficiency is suppressed for CP lasers [26]. However, for s-/p-polarized lasers, the strength of the 3rd order harmonic can reach \(a_{0}(3\omega)\approx 1.2\), which could potentially affect the LWFA stage. Thus, the role of HHG in LWFA remains to be investigated for LP lasers. In the following LWFA stage, only a CP laser is considered.

## III Acceleration by reflected pulse

A plasma channel can guide a laser pulse without changing the spot size when the laser spot size matches the channel [27], which is an effective guiding method for long-distance LWFA [28; 2]. The density profile of the channel is expressed as \(n_{e}(r)=n_{0}+r^{2}/\pi r_{e}w_{m}^{4}\), where \(n_{0}\) is the central electron density, \(r_{e}\approx 2.8\times 10^{-15}\)m the classical electron radius and \(w_{m}\) the matched spot size.
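As a side note, the quoted telescope geometry and channel parameters can be cross-checked with textbook Gaussian-beam and channel-matching relations. The short sketch below is an idealization (the PIC-simulated pulse is only approximately Gaussian) and reproduces \(z_{0}\approx 675\mu\)m, \(R\approx 680\mu\)m and the channel depth at \(r=w_{m}\):

```python
# A numerical cross-check of the quoted parameters, assuming an ideal
# Gaussian beam (the PIC-simulated pulse is only approximately Gaussian).
import numpy as np

lam, w0, w_pm = 0.8e-6, 4e-6, 43e-6   # wavelength, first waist, spot on PM [m]

z_R = np.pi * w0**2 / lam                      # Rayleigh length
z0 = z_R * np.sqrt((w_pm / w0)**2 - 1.0)       # distance where w(z0) = w_pm
R = z0 * (1.0 + (z_R / z0)**2)                 # wavefront curvature at z0
print(f"z0 = {z0*1e6:.0f} um, R = {R*1e6:.0f} um")   # ~672 um, ~678 um

# Matched-channel check: the density rise of the quoted profile at r = w_m
# equals the standard channel depth 1/(pi r_e w_m^2).
r_e, w_m = 2.8e-15, 43e-6
delta_n = 1.0 / (np.pi * r_e * w_m**2)         # [m^-3]
print(f"delta_n(w_m) = {delta_n * 1e-6:.1e} cm^-3")  # ~6e16 cm^-3
```

Collimation then requires the PM focal length to match the distance to the focus, \(f\approx z_{0}\), consistent with the quoted design value.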
To demonstrate the acceleration capability of the reflected laser pulse, we carry out LWFA simulations for both the reflected pulse and a pulse injected from the simulation boundary, i.e., without reflection by the PM. The laser pulses are injected into the plasma channel with a central density of \(n_{0}\approx 2.4\times 10^{17}\)cm\({}^{-3}\) and a matched spot size of \(w_{m}\approx 43\mu\)m, so that \(w_{m}=w_{0}\). The LWFA stage is simulated via FBPIC [29] in r-z coordinates with the reflected pulse as the initial condition. Simulation details are given in the Methods section.

The accelerating field \(E_{z}\) and bubble structures are compared in Fig. 4(a-c). One can see that the strengths of the excited accelerating fields \(E_{z}\) are almost identical for the reflected and injected pulses, which remains true in the long-term evolution shown in Fig. 4(d) and 4(e), where the bubble and injected electrons are represented by the variation of the on-axis accelerating field \(E_{z}\) and the on-axis electron density. The yellow trajectories indicate the electrons of high density, i.e., the tail of the bubble and the injected electron bunch. It can be inferred from the electron bunch trajectory and the evolution of \(E_{z}\) that the electrons experience similar acceleration gradients. The consequent electron energy evolution and final spectrum are shown in Fig. 4(f) and 4(g). It can be seen that the electrons injected at similar positions are accelerated to similar energies despite the different bubble evolution. For example, electrons injected at \(ct\approx 60\)mm are accelerated to \(E_{k}\approx 6\)GeV and those injected at \(ct\approx 40\)mm to \(E_{k}\approx 8\)GeV in both cases, which can be inferred from the gray lines in Fig. 4(f) and 4(g).

As for the electron bunch emittance, both the reflected pulse and the injected pulse generate well-collimated electrons, as indicated by the curves in Fig. 5. The emittance is calculated by \(\varepsilon_{x}=\sqrt{\langle x^{2}\rangle\langle\theta_{x}^{2}\rangle-\langle x\theta_{x}\rangle^{2}}\), where \(\langle\cdot\rangle\) denotes the average over the bunch and \(\theta_{x}=\tan^{-1}(p_{x}/p_{z})\), and \(\varepsilon_{y}\) is calculated in the same way. In other words, for a bunch size of \(5\mu\)m the angular divergence is about \(0.2\)mrad when \(\varepsilon_{x}\approx\varepsilon_{y}\approx 1\mu\mathrm{m}\cdot\mathrm{mrad}\). In both situations the evolutions of the bunch emittance are almost identical after the electrons are significantly injected, i.e., after \(ct=50\)mm, except that the bunch emittance driven by the reflected pulse is slightly higher than that of the injected pulse due to a much higher bunch charge.

Figure 4: (a-c) Accelerating field \(E_{z}\) (red and blue) and electron density (gray) at (a) \(z\approx 20\)mm, (b) \(z\approx 50\)mm and (c) \(z\approx 80\)mm for the pulse reflected by the plasma OAP (top) and the pulse injected from the boundary (bottom). (d-e) Variation of the on-axis accelerating field \(E_{z}\) in the simulation window (red and blue) and the on-axis electron density (yellow) for (d) the reflected and (e) the injected pulse. (f-g) Evolution of the electron energy spectrum (gray) and the final spectrum (blue) for (f) the reflected and (g) the injected pulse.
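The emittance definition above translates directly into a few lines of analysis code; a minimal sketch, with the particle arrays as placeholders and the second moments taken as centred averages over the bunch:

```python
# Trace-space emittance from bunch particle data, transcribing the
# definition above with <.> the (centred) average over the bunch.
import numpy as np

def trace_emittance(x, px, pz, energy=None, e_min=None):
    """eps_x = sqrt(<x^2><theta^2> - <x theta>^2), theta = arctan(px/pz).
    Optionally exclude particles below e_min, as done for Fig. 5 (>1GeV)."""
    if e_min is not None:
        sel = energy >= e_min
        x, px, pz = x[sel], px[sel], pz[sel]
    theta = np.arctan2(px, pz)
    dx, dth = x - x.mean(), theta - theta.mean()
    return np.sqrt((dx**2).mean() * (dth**2).mean() - (dx * dth).mean()**2)
```

For \(\varepsilon_{x}\approx 1\mu\mathrm{m}\cdot\mathrm{mrad}\) and a \(5\mu\)m bunch this reproduces the \(\sim 0.2\)mrad divergence quoted above.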
However, since LWFA in the blowout regime is highly nonlinear [6; 7; 30] and is sensitive to initial conditions, any modification of the driving laser could induce disparate results, especially for long-distance acceleration [6; 31; 32]. The shot-to-shot instability could be even more significant than the distortion induced by reflection. In other words, considering the similar acceleration gradients in our modelling, reflection by the OAP-PM will not induce more significant instability than the shot-to-shot instability of the laser system itself. Indeed, a slight mismatch between the evolutions of the bubble structures can be observed in Fig. 4(a) and 4(b), where the bubble generated by the reflected pulse is larger than that of the injected pulse in Fig. 4(a) and smaller in Fig. 4(b). The inconsistent bubble oscillation during pulse propagation results in different self-injection moments, as indicated by the start of the yellow traces in Fig. 4(d) and 4(e). The reflected pulse induces a longer bubble oscillation time, whereas the injected pulse produces a continuously expanding bubble after a few oscillations. Therefore, the reflected pulse induces discontinuous self-injection, while the electrons driven by the injected pulse are continuously injected at the end of the bubble. This difference results in disparate electron bunches, as shown by the variation of the energy spectra in Fig. 4(f-g). The discontinuous injection produces several energy spikes with significantly higher beam charge but a lower cutoff energy of \(E_{k}\approx 9\)GeV, since a few electrons are accelerated to over 10GeV by the injected pulse in Fig. 4(g), as indicated by the gray lines.

Figure 5: Electron bunch emittance \(\varepsilon_{x}\) (red) and \(\varepsilon_{y}\) (blue) driven by the pulse reflected by the plasma OAP (solid) and the pulse injected from the boundary (dashed). Electrons below 1GeV are excluded.

## IV Discussion

The proposed plasma telescope aims to reduce the focal lengths of 1-100PW laser systems when driving LWFA, since higher laser power generates higher electron energies, as shown in Fig. 1(a) according to the scaling law in [4]. When using the ordinary OAP focusing geometry, for a given OAP damage threshold and a given focal intensity, the required spot size on the OAP scales as \(w_{0}\sim\sqrt{P}\) and the f-number as \(f_{\#}\sim\sqrt{P}\), resulting in a linear scaling of the focal length, \(f\sim P\), as shown by the black-dashed line in Fig. 1(b). This means that generating a 100GeV electron bunch requires a focal length of hundreds of meters. By using the proposed plasma telescope, the f-number of the OAP can be fixed to, for example, \(f_{\#}\approx 6\) in all scenarios in our modelling, and the scaling law becomes \(f\sim\sqrt{P}\), as shown by the black-solid line in Fig. 1(b), which reduces the focal length by two orders of magnitude when generating 100GeV electrons.

In terms of laser pointing stability, since the plasma channel can be situated much closer to the OAP-PM, typically just a few centimeters away, the positional jittering can be effectively managed. In contrast, conventional systems place the plasma channel tens to hundreds of meters away from the focusing mirror. For instance, a 1\(\mu\)rad angular jittering of the first OAP mirror results in only about 10\(\mu\)m of positional jittering when the plasma channel is 1cm away from the OAP-PM, as illustrated in Fig. 6. The positional jittering can be further reduced to 1\(\mu\)m if the plasma channel can be placed 1mm away from the OAP-PM. However, in conventional systems, the same angular jittering would cause positional jittering of tens to hundreds of micrometers at the plasma channel due to the longer focal length, which extends up to hundreds of meters for higher power lasers. The plasma telescope thus alleviates the stringent requirements on the pointing stability of the OAP mirror at the plasma channel entrance for high-power laser systems. It provides a more compact and efficient solution, enhancing the overall performance and precision of laser systems.
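Returning to the focal-length scaling at the start of this section, a small numeric illustration of the \(f\sim P\) versus \(f\sim\sqrt{P}\) curves of Fig. 1(b); the prefactors are normalized to the 1PW values quoted in the text and are illustrative assumptions rather than fitted values:

```python
# Focal-length scaling with laser power, normalized to ~10 m (ordinary
# focusing) and ~1 m (plasma telescope) at 1 PW as quoted in the text.
for P in (1, 10, 100):                      # laser power [PW]
    f_ordinary = 10.0 * P                   # f ~ P
    f_telescope = 1.0 * P ** 0.5            # f ~ sqrt(P)
    print(f"{P:4d} PW: ordinary ~ {f_ordinary:6.0f} m, "
          f"telescope ~ {f_telescope:4.1f} m")
```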
For manufacturing considerations, in the investigated geometry, the microscopic OAP-PM can be manufactured via 3D-printing techniques [33] or with a rotating liquid that forms a parabolic surface [34]. On the other hand, the OAP-PM can be replaced by an ellipsoidal PM (EPM) [17; 19]. The foci of the ellipsoid form a focus-to-focus imaging system where the spot size is magnified by \(\beta/\alpha\), where \(\beta\) and \(\alpha\) are the distances of the foci to the reflection point of the EPM [35]. The EPM can be manufactured on a macroscopic scale by tuning \(\alpha\), \(\beta\) and the ellipticity of the EPM.

## V Conclusion

The proposed OAP-PM effectively transforms a 1PW laser pulse focused by a short-focal-length OAP into a pulse with a long Rayleigh length, forming a plasma telescope. The pulse reflected by the OAP-PM successfully generates a 9GeV electron bunch in the subsequent LWFA stage. Though slight modifications are introduced to the reflected pulse, the acceleration gradient and bunch emittance are similar to those of the ordinary focusing system. The proposed method essentially provides a new option to reduce the focal lengths of 1-100PW laser systems when a large spot size is required, as in LWFA. Compact LWFA based on PW-class laser systems paves the way for multi-stage acceleration towards TeV electrons.

## VI Methods

Considering the laser-plasma interaction in strong fields, the reflection of the tightly focused laser pulse by the OAP-PM is simulated via the PIC method in 3D space, carried out using the EPOCH code [36]. Then the LWFA stage is simulated in quasi-cylindrical coordinates via the FBPIC code [29], based on the reflected pulse from the reflection stage, which significantly reduces the simulation time and makes it possible to simulate the 15-centimeter-long acceleration stage.

The 3-D simulations are carried out in an \(80\mu\mathrm{m}\times 200\mu\mathrm{m}\times 200\mu\mathrm{m}\) box with a cell size of \(0.05\lambda\times 0.2\lambda\times 0.2\lambda\). The OAP-PM is placed at the left boundary of the simulation box with 8 macro-electrons and 1 macro-proton in each cell, with a density of \(20n_{c}\), where \(n_{c}\) is the critical plasma density. For an off-axial angle \(\theta\), the parabolic PM is rotated by \(\theta\) about the z-axis, so that the reflected laser pulse propagates along the x-axis, as shown in Fig. 1.

Figure 6: Sketch of the positional jittering induced by angular jittering of the first OAP mirror. 1\(\mu\)rad of angular jittering will result in about 10\(\mu\)m of positional jittering at 1cm away from the OAP-PM.
The latter can be expressed by the summation of the Fourier components \[F(z,r,\theta)=\sum_{m}F^{(m)}(z,r)e^{im\theta}, \tag{1}\] where \(F^{(m)}(z,r)\) is the Fourier components and m is the mode number. The Fourier components are calculated via Fourier transformation \[F^{(m)}(z,r)=\frac{1}{2\pi}\int_{0}^{2\pi}F(z,r,\theta)e^{im\theta}d\theta. \tag{2}\] Such conversion of fields is possible when the symmetry of fields can be resolved by azimuthal Fourier expansion of \(e^{im\theta}\), which is accurate when the fields are highly symmetric along the \(\theta\)-axis. In our simulation, 3 modes is sufficient to model the reflected pulse. The quasi-cylindrical simulation is carried out in the \(4096\times 800\) window with cell size of \(0.05\lambda\times 0.2\lambda\) in z and r directions. Each cell is filled with \(1\times 1\times 12\) macro-particles in z, r and \(\theta\) directions. The above calculated Fourier components are loaded into the simulation window as the initial conditions. ###### Acknowledgements. The authors acknowledge insightful discussions with Prof. A. Pukhov and Prof. I. Kostyukov. The work is supported by the China Postdoctoral Science Foundation (2022M713258), the Shanghai Science and Technology Development Foundation (22YF1455100), CAS Project for Young Scientists in Basic Research (YSBR060), National Natural Science Foundation of China (11935008), National Key Research and Development Program of China (2022YFE0204800), and the International Partnership Program of Chinese Academy of Sciences (181231KYSB20200040). ## Data Availability Statement The data that support the findings of this study are available from the corresponding author upon reasonable request.
2307.14386
The effect of axisymmetric confinement on propulsion of a three-sphere microswimmer
Swimming at the microscale has recently garnered substantial attention due to the fundamental biological significance of swimming microorganisms and the wide range of biomedical applications for artificial microswimmers. These microswimmers invariably find themselves surrounded by different confining boundaries, which can impact their locomotion in significant and diverse ways. In this work, we employ a widely used three-sphere swimmer model to investigate the effect of confinement on swimming at low Reynolds numbers. We conduct theoretical analysis via the point-particle approximation and numerical simulations based on the finite element method to examine the motion of the swimmer along the centerline in a capillary tube. The axisymmetric configuration reduces the motion to one-dimensional movement, which allows us to quantify how the degree of confinement affects the propulsion speed in a simple manner. Our results show that the confinement does not significantly affect the propulsion speed until the ratio of the radius of the tube to the radius of the sphere is in the range of $\mathcal{O}(1)-\mathcal{O}(10)$, where the swimmer undergoes substantial reduction in its propulsion speed as the radius of the tube decreases. We provide some physical insights into how reduced hydrodynamic interactions between moving spheres under confinement may hinder the propulsion of the three-sphere swimmer. We also remark that the reduced propulsion performance stands in stark contrast to the enhanced helical propulsion observed in a capillary tube, highlighting how the manifestation of confinement effects can vary qualitatively depending on the propulsion mechanisms employed by the swimmers.
Ali Gürbüz, Andrew Lemus, Ebru Demir, On Shun Pak, Abdallah Daddi-Moussa-Ider
2023-07-26T06:22:53Z
http://arxiv.org/abs/2307.14386v1
# The effect of axisymmetric confinement on propulsion of a three-sphere microswimmer ###### Abstract Swimming at the microscale has recently garnered substantial attention due to the fundamental biological significance of swimming microorganisms and the wide range of biomedical applications for artificial microswimmers. These microswimmers invariably find themselves surrounded by different confining boundaries, which can impact their locomotion in significant and diverse ways. In this work, we employ a widely used three-sphere swimmer model to investigate the effect of confinement on swimming at low Reynolds numbers. We conduct theoretical analysis via the point-particle approximation and numerical simulations based on the finite element method to examine the motion of the swimmer along the centerline in a capillary tube. The axisymmetric configuration reduces the motion to one-dimensional movement, which allows us to quantify how the degree of confinement affects the propulsion speed in a simple manner. Our results show that the confinement does not significantly affect the propulsion speed until the ratio of the radius of the tube to the radius of the sphere is in the range of \(\mathcal{O}(1)-\mathcal{O}(10)\), where the swimmer undergoes substantial reduction in its propulsion speed as the radius of the tube decreases. We provide some physical insights into how reduced hydrodynamic interactions between moving spheres under confinement may hinder the propulsion of the three-sphere swimmer. We also remark that the reduced propulsion performance stands in stark contrast to the enhanced helical propulsion observed in a capillary tube, highlighting how the manifestation of confinement effects can vary qualitatively depending on the propulsion mechanisms employed by the swimmers. ## I Introduction The study of locomotion in fluids at the microscopic scale has attracted significant attention in recent decades. This growing interest is not only driven by the motivation to better understand the motility of swimming microorganisms [1; 2; 3] but also the potential biomedical applications of artificial microswimmers such as targeted drug delivery and minimally invasive microsurgery [4; 5; 6; 7; 8; 9]. Locomotion of biological and artificial microswimmers occurs at negligibly small Reynolds numbers (Re), where viscous forces largely dominate inertial forces. In the inertialess regime, the ability to self-propel is severely constrained owing to kinematic reversibility. In particular, Purcell's scallop theorem [10] states that in the absence of inertia, deformations exhibiting time-reversal symmetry (e.g., the motion of a single-hinged scallop opening and closing its shell), also known as reciprocal motion, are unable to produce any net self-propulsion. Common macroscopic swimming strategies such as rigid flapping motion hence become largely ineffective at low Re. Microorganisms such as bacteria and spermatozoa have evolved strategies that utilize biological appendages called flagella with the action of molecular motors to swim in their microscopic world. Extensive studies in the past decades have elucidated the physical principles underlying their motility [11; 12; 13; 14; 15; 16]. In parallel efforts, researchers have sought simple and effective mechanisms to develop artificial microswimmers [17; 18; 19; 20]. 
In his pioneering work, Purcell demonstrated how a three-link swimmer [10], now known as Purcell's swimmer [21; 22; 23; 24; 25], can generate net translation with kinematically irreversible cyclic motions. This elegant example has inspired subsequent development of mechanisms that can overcome the fundamental challenge of generating self-propulsion in the inertialess regime. In particular, Najafi and Golestanian [26] developed a swimmer consisting of three spheres connected by two extensible rods, which adjust their lengths in a cyclic manner to ingeniously exploit hydrodynamic interactions between the spheres for self-propulsion. The mechanism has also engendered a variety of variants [27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37] and their experimental realizations [38; 39; 40; 41]. For its simplicity, the three-sphere swimmer has gained popularity as a useful model for examining different fundamental aspects of locomotion at low Re, including the effect of complex rheology [42; 43], optimized locomotion [44; 35], interactions of swimmers [43; 44; 45], and swimming near walls [46; 47; 48; 49]. The three-sphere model has further been used to investigate the reorientation dynamics of microswimmers with respect to flow gradients (rheotaxis) [50], finding that payloads can be exploited to enhance their motion against flows. More recently, the model has also been employed to explore the integration with machine learning in realizing smart microswimmers [51; 52; 53; 54; 55]. Here we utilize the three-sphere swimmer model to probe the effect of confinement on swimming at low Re. Microswimmers invariably find themselves surrounded by different confining boundaries. Extensive studies have demonstrated how swimming near planar boundaries can impact locomotion in significant and diverse manners [46; 47; 48; 49; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66; 67; 68]. Microorganisms also encounter more complex confinements than planar boundaries, such as spermatozoa swimming through
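As a minimal illustration of the point-particle approach adopted here, the unconfined Najafi-Golestanian swimmer can be integrated numerically: at each instant the three axial forces follow from the prescribed arm rates and the force-free condition, with the spheres coupled through the unbounded-fluid Oseen tensor (the confined analysis replaces this coupling with the Green's function appropriate to the tube). All parameter values below are illustrative placeholders:

```python
# Collinear three-sphere swimmer in the point-particle (Oseen) limit.
import numpy as np

mu, a = 1.0, 0.1                   # viscosity and sphere radius
D, eps = 1.0, 0.1                  # mean arm length and stroke amplitude
omega, phi = 2 * np.pi, np.pi / 2  # stroke frequency and phase lag

def step_velocities(x, dL1, dL2):
    """Solve for the three axial forces given the prescribed arm rates
    and the force-free condition, then return the sphere velocities."""
    M = np.empty((3, 3))           # mobility: v_i = sum_j M[i, j] F_j
    for i in range(3):
        for j in range(3):
            M[i, j] = (1.0 / (6 * np.pi * mu * a) if i == j
                       else 1.0 / (4 * np.pi * mu * abs(x[i] - x[j])))
    A = np.vstack([M[1] - M[0], M[2] - M[1], np.ones(3)])
    F = np.linalg.solve(A, [dL1, dL2, 0.0])
    return M @ F

T, n = 2 * np.pi / omega, 4000     # one stroke period, forward Euler steps
dt = T / n
x = np.array([-(D + eps), 0.0, D])  # consistent with L1(0), L2(0)
x0 = x.mean()
for k in range(n):
    t = k * dt
    dL1 = -eps * omega * np.sin(omega * t)        # L1 = D + eps cos(wt)
    dL2 = -eps * omega * np.sin(omega * t - phi)  # L2 = D + eps cos(wt-phi)
    x = x + step_velocities(x, dL1, dL2) * dt
print("net displacement per cycle:", x.mean() - x0)
```

The net displacement per cycle is nonzero because the stroke is kinematically irreversible; weakening the inter-sphere hydrodynamic coupling, as confinement does, shrinks it, consistent with the reduced propulsion speed reported in this work.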
2303.09146
Sets with dependent elements: A formalization of Castoriadis' notion of magma
We present a formalization of collections that Cornelius Castoriadis calls ``magmas'', especially the property which mainly characterizes them and distinguishes them from the usual cantorian sets. It is the property of their elements to {\em depend} on other elements, either in a one-way or a two-way manner, so that one cannot occur in a collection without the occurrence of those dependent on it. Such a dependence relation can be represented by a pre-order relation $\preccurlyeq$, with the extra condition that it has no minimal elements. Then, working in a mild strengthening of the theory ${\rm ZFA}$, where $A$ is an infinite set of atoms equipped with a primitive pre-ordering $\preccurlyeq$, the class of magmas over $A$ is represented by the class $LO(A,\preccurlyeq)$ of nonempty open subsets of $A$ with respect to the lower topology of $\langle A,\preccurlyeq\rangle$. Next the pre-ordering $\preccurlyeq$ is shifted (by a kind of simulation) to a pre-ordering $\preccurlyeq^+$ on ${\cal P}(A)$, which turns out to satisfy the same non-minimality condition as well, and which, happily, when restricted to $LO(A,\preccurlyeq)$ coincides with $\subseteq$. This allows us to define a hierarchy $M_\alpha(A)$, along all ordinals $\alpha\geq 1$, the ``magmatic hierarchy'', such that $M_1(A)=LO(A,\preccurlyeq)$, $M_{\alpha+1}(A)=LO(M_\alpha(A),\subseteq)$, and $M_\alpha(A)=\bigcup_{\beta<\alpha}M_\beta(A)$, for a limit ordinal $\alpha$. For every $\alpha\geq 1$, $M_\alpha(A)\subseteq V_\alpha(A)$, where $V_\alpha(A)$ are the levels of the universe $V(A)$ of ${\rm ZFA}$. The class $M(A)=\bigcup_{\alpha\geq 1}M_\alpha(A)$ is the ``magmatic universe above $A$.''
Athanassios Tzouvaras
2023-03-16T08:21:54Z
http://arxiv.org/abs/2303.09146v1
# Sets with dependent elements: A formalization of Castoriadis' notion of magma ###### Abstract We present a formalization of collections that Cornelius Castoriadis calls "magmas", especially the property which mainly characterizes them and distinguishes them from the usual cantorian sets. It is the property of their elements to _depend_ on other elements, either in a one-way or a two-way manner, so that one cannot occur in a collection without the occurrence of those dependent on it. Such a dependence relation on a set \(A\) of atoms can be naturally represented by a pre-order relation \(\preccurlyeq\) of \(A\) with the extra condition that it contains no minimal elements. Then, working in a mild strengthening of the theory ZFA, where \(A\) is an infinite set of atoms equipped with a primitive pre-ordering \(\preccurlyeq\), the class of magmas over \(A\) is represented by the class \(LO(A,\preccurlyeq)\) of nonempty open subsets of \(A\) with respect to the lower topology of \(\langle A,\preccurlyeq\rangle\). The non-minimality condition for \(\preccurlyeq\) implies that all sets of \(LO(A,\preccurlyeq)\) are infinite and none of them is \(\subseteq\)-minimal. Next the pre-ordering \(\preccurlyeq\) is shifted (by a kind of simulation) to a pre-ordering \(\preccurlyeq^{+}\) on \(\mathcal{P}(A)\), which turns out to satisfy the same non-minimality condition as well, and which, happily, when restricted to \(LO(A,\preccurlyeq)\) coincides with \(\subseteq\). This allows us to define a hierarchy \(M_{\alpha}(A)\), along all ordinals \(\alpha\geq 1\), the"magmatic hierarchy", such that \(M_{1}(A)=LO(A,\preccurlyeq)\), \(M_{\alpha+1}(A)=LO(M_{\alpha}(A),\subseteq)\), and \(M_{\alpha}(A)=\bigcup_{\beta<\alpha}M_{\beta}(A)\), for a limit ordinal \(\alpha\). For every \(\alpha\geq 1\), \(M_{\alpha}(A)\subseteq V_{\alpha}(A)\), where \(V_{\alpha}(A)\) are the levels of the universe \(V(A)\) of ZFA. The class \(M(A)=\bigcup_{\alpha\geq 1}M_{\alpha}(A)\) is the "magmatic universe above \(A\)." The axioms of Powerset and Union (the latter in a restricted version) turn out to be true in \(\langle M(A),\in\rangle\). Besides it is shown that three of the five principles about magmas that Castoriadis proposed in his writings are true of \(M(A)\). A selection of excerpts from these writings, in which the concept of magma was first introduced and elaborated, is presented in the Introduction. _Mathematics Subject Classification (2020)_: 00A69, 06A06 _Keywords:_ Cornelius Castoriadis' notion of magma, dependence relation, pre-ordering, lower topology of a pre-ordered set, exponential shifting of pre-ordering. ## 1 Introduction In his seminal book [2] Cornelius Castoriadis1 is concerned, among many other things, with the way collections of things are presented to (or created by) human mind. In particular he aims to make us rethink of the "self-evident" belief of western rationalistic tradition that any collection of things needs to be a _cantorian_ collection, that is a totality of distinct, definite and ontologically independent elements. His favorite (counter) examples are the "totality of meanings" of a natural language, the "totality of one's memories", and the like. The elements of such totalities are not quite definite, and not fully differentiated and independent from one another, so one could hardly call them "sets" in the ordinary sense of the word, and thus include them in the cantorian universe. 
If, for instance, \(a\) is a particular meaning or memory, one could not fully separate it from related meanings or memories, respectively, in the sense that whenever we think of \(a\) as an element of some collection, \(a\) inevitably brings to mind other similar meanings or memories as members of the same collection, and as a result \(a\) cannot exist in isolation. A typical consequence of this is that the one-element set \(\{a\}\) can hardly make sense, as the cantorian tradition requires. Since such collections abound around us, it would be natural to try to accommodate and comprehend them in the framework of a theory that differs from that of Cantor. Footnote 1: Cornelius Castoriadis (1922-1997) was a prominent social and political thinker of 20th century. He spent most of his life in Paris. From 1979 until his death he was Director of Studies at the École des Hautes Études en Sciences Sociales (EHESS). His monograph [2] is widely considered his main work. Although a humanitarian philosopher by training, he had got an impressive solid background in science, especially in economics, mathematics and theoretical physics. C. Castoriadis seems to be the first thinker who felt the need for some kind of theory that would embrace, even without full rigor, the study of such collections. He believes that the specific cantorian tradition, which requires collections to comply with the rules of standard set theory and rejects those that do not as non-existent, is rather accidental and due to the early adoption by the western thought of what he calls "identifary ensemblistic" logic (roughly, the two-valued classical logic interacting with naive set theory of predicate extensions). "For the past 25 centuries, Greco-Western thinking has constituted, developed, amplified and refined itself on the basis of this thesis: being is being something determined (einai ti), speaking is saying something determined (ti legein). And, of course, speaking the truth is determining speaking and what is said by the determinations of being or else determining being by the determinations of speaking, and, finally, observing that both are but one and the same. This evolution, instigated by the requirements of one dimension of speaking and amounting to the domination or the autonomization of this dimension, was neither accidental nor inexorable; it corresponded to the institution by the West of thinking as Reason. I call the logic described above identity logic and also, aware of the anachronism and the stretching of words involved here, set-theoretical logic, for reasons that will soon be apparent. (...) The logical rudiments of set-theory are important in this respect for, regardless of what may happen in the future from the perspective of mathematics, they condense, clarify, and exemplify in a pure manner what, all the while, was underlying identity logic, and what, long before this logic was sketched out, constituted an essential and unexpungible dimension of all activity and all social life. These rudiments, indeed, posit and constitute explicitly both the type of logic, in its greatest generality, required by identity logic and the relations necessary and almost sufficient for this logic to function unhumpered and without limit." ([2], pp. 222-223) Castoriadis believes that identity-ensemblist logic affects greatly our grasping of the reality through the "creation" of sets out of something pre-existent and rather undifferentiated. 
This undifferentiated reality out of which the identity-ensemblist logic generates sets, classes, objects and properties, is, roughly, what he calls _magma_. "What we seek to understand is the mode of being of what gives itself before identitary or ensemblist logic is imposed; what gives itself in this way in this mode of being, we are calling a _magma_. It is obviously not a question of giving a formal defi nition of it in received language or in any language whatsoever. The following statement, however, may not be unhelpful: A magma is that from which one can extract (or in which one can construct) an indefinite number of ensemblist organizations but which can never be reconstituted (ideally) by a (finite or infinite) ensemblist composition of these organizations." ([2], p. 343) As mentioned above, Castoriadis' favorite examples of magmas are the "multiplicity of meanings/significations" of a natural language and the "multiplicity of one's representations." "Let us try then, by means of an accumulation of contradictory metaphors, to give an intuitive description of what we mean by magma (the best intuitive support the reader can present to himself is to think of 'all the significations of the English language' or 'all the representations of his life'). We have to think of a multiplicity which is not one in the received sense of the term but which we mark out as such, and which is not a multiplicity in the sense that we could actually or virtually enumerate what it 'contains' but in which we could mark out in each case terms which are not absolutely jumbled together. (...) And we have to think of the operations of identity logic as simultaneous, multiple dissections which transform or actualize these virtual singularities, these components, these terms into distinct and definite elements, solidifying the pre-relation of referral into relation as such, organizing the holding together, the being-in, the being-on, the being-proximate into a system of determined and determining relations (identity, difference, belonging, inclusion), differentiating what they distinguish in this way into 'entities' and 'properties', using this differentiation to constitute'sets' and 'classes."' ([2], p. 344) Objects and properties of the world seem to be the outcome of the ability of our mind for separation, partitioning and individuation. Sets and classes are also products of this very mental mechanism. Castoriadis mentions times and again Cantor's well-known "definition" of set: "A set is a collection into a whole of definite and separate objects of our intuition or our thought. These objects are called 'elements' of the set." Any specification of a set is clearly an act of separation and individuation. When we say "let \(X\) be the set of all \(x\) such that...," we focus on a specific part of the reality, we individualize it and cut it off as a separate object by an act of saying, that is, a linguistic construct (a formula). On the other hand, the basic quality of magma, which differentiates it from ordinary sets and classes, is the fact that its "elements" are neither fully determined nor fully distinguishable and separable from one other. "As a magma, the significations of a language are not the elements of an ensemble subject to determinacy as their mode and their criterion of being. A signification is indefinitely determinable (and the 'indefinitely' is obviously essential) without thereby being determined. 
It can always be marked out, provisionally assigned as an identity element to an identity relation with another identity element (this is the case in designation), and as such be 'a something' as the starting point for an open series of successive determinations. These determinations, however, in principle never exhaust it. What is more, they can, and always do, force us to reconsider the initial'something' and lead us to posit it as'something else,' overturning by this very fact, or in order to bring it about, the relations by means of which the initial determination had been made." ([2], p. 346) It is remarkable that a very similar position is expressed by John Searle about mental states and the content of our consciousness in general: "One has conscious states such as pains and thoughts only as a part of living a conscious life, and each state has the identity it has only in relation to other such states. My thought, for example, about a ski race I ran long ago, is only that very thought because of its position in a complex network of other thoughts, experiences, and memories. My mental states are internally related to each other in the sense that in order for a mental state to be that state with that character it has to stand in certain relation to the real world." ([5], p. 42) The above speculations about magma are clearly vague. However in [3] Castoriadis devotes a whole chapter to the subject entitled "The logic of magmas and the problem of autonomy." He starts the chapter with a quotation from a letter of G. Cantor to R. Dedekind: "Every multiplicity is either a shaky multiplicity or a set." On this Castoriadis comments: "To say of a multiplicity that it is inconsistent obviously implies that this multiplicity _is,_ it is in a certain fashion that remains to be specified and that Cantor does not specify. Clearly, we are not dealing here with an empty set, which is a set in full right, with its place in set theory. It is toward these inconsistent multiplicities - inconsistent from the standpoint of a logic that claims to be consistent or rigorous - that I turned, starting from the moment, in 1964-1965, when the importance of what I have called the radical imaginary in the human world became apparent to me. Noting that the human psychism cannot be 'explained' by biological factors or considered as a logical automaton of no-matter-what richness and complexity. (...) After various terminological peregrinations - cluster, conglomerate, and others - for this mode of being, as well as the logico-ontological organization it bears, I have ended up with the term _magma_. I was later to discover that from 1970 on the editions of Nicolas Bourbaki's _Algebre_ utilized the term with an acceptation that bears no relation at all to the one I have tried to give it and that is, of course, strictly ensemblistic-identiary in character. As the term, by its connotations, admirably lends itself to what I want to express, and as, dare I say, its utilization by Bourbaki seems to me both rare and superfluous, I have decided to retain it." ([3], pp. 366-368) A little later, after recalling the definition of magma, Castoriadis says: "I note in passing that Jean-Pierre Dupuy remarked to me that the 'definition' cited above is unsatisfactory, for it would cover just as well what, to avoid Russell's Paradox, has been called in mathematics a 'class.' The objection is formally correct. 
It does not trouble me much, for I have always thought, and still think, that the 'class,' in this acceptation of the word, is a logical artifact constructed _ad hoc_ to get around Russell's Paradox, and that it succeeds in doing so only by means of an infinite regress. Rather than comment on this 'definition,' however, we are going to try here to illuminate other aspects of the idea of magma by exploring the paths (and the impasses) of a more 'formal' language. For this, one must introduce a primitive (indefinable and undecomposable) term/relation: the marking (repérer) term/relation, whose valence is at once unary and binary. So, let us suppose that the reader unambiguously understands the expressions: 'to mark X;' 'X marks Y;' 'to mark X in Y' (to mark a dog; the collar marks the dog; to mark or locate the dog in the field). In using this term/relation, I 'define' a magma by the following properties:

_M1_: If \(M\) is a magma, one can mark, in \(M\), an indefinite number of ensembles.

_M2_: If \(M\) is a magma, one can mark, in \(M\), magmas other than \(M\).

_M3_: If \(M\) is a magma, \(M\) cannot be partitioned into magmas.

_M4_: If \(M\) is a magma, every decomposition of \(M\) into ensembles leaves a magma as residue.

_M5_: What is not a magma is an ensemble or is nothing." ([3], pp. 379-380)

Unfortunately, the meaning of "marking" in M1 and M2 is unclear. However we guess that M1 most likely means: for every magma \(M\) there is an indefinite number of sets \(x\) such that \(x\subseteq M\). While M2 means: for every magma \(M\) there is a magma \(N\neq M\) such that \(N\subseteq M\). If we interpret M1 this way, then we easily see that M1 and M4 are contradictory. This is pointed out by Castoriadis himself ([3], p. 383): given a magma \(M\), let \(X\) be the union of all sets contained in \(M\). By M4, \(M\backslash X\) is a magma. But then, by M1, there is a set \(x\) such that \(x\subseteq M\backslash X\). This contradicts the fact that \(X\) is the union of all sets contained in \(M\). I think that the problem with the principles M1 and M4 arises from the fact that they both relate magmas to _sets_, which by definition are collections of a different kind, and in doing so they contradict each other. In contrast M2 and M3 describe how magmas relate to other magmas _alone_, specifically their submagmas. As for M5, I construe it as saying: "What is not a magma is an ensemble and nothing but an ensemble." In other words, it suggests that the classes of magmas and sets exhaust the content of the universe and are complementary. However in the real world, as well as in that of ZFA, which will be used below, there exist objects/atoms that are non-collections. Therefore M5 could more realistically be reformulated as follows: "What is not a magma is a set or an atom." Castoriadis thinks that M3 is the most crucial of the above properties of magmas. He says: "The third property (M3) is undoubtedly the most decisive. It expresses the impossibility of applying here the schema/operator of separation - and, above all, its irrelevance in this domain. In the magma of my representations, I cannot rigorously separate out those that 'refer to my family' from the others. (In other words, in the representations that at first sight 'do not refer to my family,' there always originates at least one associative chain that, itself, leads to 'my family.' This amounts to saying that a representation is not a 'distinct and well-defined being,' but is everything that it brings along with it.) 
In the significations conveyed by contemporary English, I cannot rigorously separate out those that (not in my representation, but in this tongue [langue] itself) refer in any way at all to mathematics from the others." ([3], p. 381) Let me sum up. The description of magma by Castoriadis through the principles M1-M5 is not sufficiently clear and, most important, two of these principles, namely M1 and M4, are straightforwardly contradictory. Nevertheless there is an aspect of the idea that deserves further elaboration. This is the real fact that we often come across collections that differ considerably from those e.g. containing mathematical entities. The primary point of deviation of these collections is that their members come up not in full separation from one another, but rather as unbreakable _chains_ or _bunches_ of _dependent_ objects, so that one cannot add or subtract one element without adding or subtracting the elements depending on it. This dependence is vividly described in the last excerpt above through the example of things "that refer to my family", on the one hand, and "those that do not", on the other, and our inability to completely separate one kind from the other. It is exactly this type of _collections with dependent elements_ that we are going to consider and formalize in this paper. As for M1-M5, I propose that, firstly, M1 and M4 be left out of consideration because of their inconsistency, and, secondly, that the remaining principles M2, M3, M5 be slightly reformulated as follows:

_M2*_: If \(M\) is a magma, there is a magma \(N\neq M\) such that \(N\subseteq M\).

_M3*_: If \(M\) is a magma, there is no partition of \(M\) into submagmas \(M_{1}\), \(M_{2}\).

_M5*_: What is not a magma is a set or an atom.

It turns out that the formalization of magmas that we develop below succeeds in capturing M2*, M3* and M5*.

## 2 Formalizing dependence of objects in an extension of ZFA

In all Castoriadis' examples of magmas (the collection of meanings of a natural language, the collection of one's mental representations, etc.), every one of their members shows a clear "ontological" dependence on other members: every one of them cannot occur in one's mind without the simultaneous occurrence of others. We find this notion of dependence interesting and challenging, and it is our purpose in this paper to try and capture it mathematically. The idea, very roughly, is to work in the theory ZFA, which consists of the axioms of ZF plus an infinite set \(A\) of non-sets, which throughout will be referred to as _atoms_ (see [4, p. 250] for the formal treatment of this theory). We shall equip \(A\) with a binary relation which can adequately capture the most basic properties of dependence, which are just two: reflexivity (every object \(a\) depends on \(a\)) and transitivity (if \(a\) depends on \(b\) and \(b\) depends on \(c\), then \(a\) depends on \(c\)). A binary relation with these two properties is a very familiar mathematical object, is called a _pre-order relation_, or just a _pre-ordering_, and is usually denoted \(\preccurlyeq\).2 So we assume that \(A\) comes up with such a relation \(\preccurlyeq\). The intended meaning of \(a\preccurlyeq b\) is: "\(a\) depends on \(b\)", or "\(b\) points to \(a\)," or "\(b\) reminds one of \(a\)," all of which practically mean that every occurrence of \(b\) is followed by the occurrence of \(a\). 
Footnote 2: Of course a more general relation of dependence is sensible, where an object \(a\) depends not on a single element \(b\), but rather on a group of elements \(\{b_{1},\ldots,b_{n}\}\), but such a relation does not fit our context. It follows from the preceding discussion that, given the relation \(\preccurlyeq\) on \(A\), "magmas over \(A\)" (with respect to \(\preccurlyeq\)) are just the collections \(x\subseteq A\) that are _downward closed under_ \(\preccurlyeq\), i.e., have the property:

\[(\forall a,b)(a\preccurlyeq b\wedge b\in x\Rightarrow a\in x). \tag{1}\]

**Definition 2.1**: Given a pre-ordered set \(\langle A,\preccurlyeq\rangle\), the class \(m(A)\) of _magmas over_ \(A\) (with respect to \(\preccurlyeq\)) consists of the nonempty subsets of \(A\) having property (1), namely,

\[m(A)=\{x\subseteq A:x\neq\emptyset\wedge(\forall a,b\in A)(a\in x\wedge b\preccurlyeq a\to b\in x)\}.\]

In my view, the following three conditions should be met in the treatment of magmas. Firstly, magmas must coexist together with ordinary sets, as well as with atoms, in a "mixed" universe. (This after all was explicitly stated as property M5* above.) Secondly, the magmas of the bottom level of the universe must consist exclusively of _atoms_, not sets. (Nevertheless, magmas of higher ranks can be constructed inductively, having as elements magmas of lower ranks.) Thirdly, the dependence relation \(\preccurlyeq\) of atoms should be a _primitive_ one, that is, not definable from or reducible to other relations of the ground theory.3 Footnote 3: The third condition, concerning non-definability of \(\preccurlyeq\), could possibly be skipped if we assumed that \(\preccurlyeq\) can be constructed on \(A\) by _choice_, but in this case we should work in ZFCA rather than ZFA. However we think that working in ZFA, even mildly augmented, is a simpler and more natural option. Below we shall treat magmas along the lines of the above three conditions. So starting with an _infinite_ set of atoms \(A\) and a primitive pre-ordering \(\preccurlyeq\) on it, we shall build the _class of magmas \(M(A)\) above \(\langle A,\preccurlyeq\rangle\),_ as a subclass of the universe \(V(A)\) of the theory ZF with atoms, ZFA. Recall that the language of ZFA is \(L=\{\in,S(\cdot),A(\cdot)\}\), where \(S(\cdot)\) and \(A(\cdot)\) are the unary predicates (sorts) for sets and atoms, respectively. Given the set \(A\) of atoms, the universe \(V(A)\) of ZFA is the class \(V(A)=\bigcup_{\alpha\in Ord}V_{\alpha}(A)\), where: \(V_{0}(A)=A\), \(V_{\alpha+1}(A)=V_{\alpha}(A)\cup{\cal P}(V_{\alpha}(A))\), and \(V_{\alpha}(A)=\bigcup_{\beta<\alpha}V_{\beta}(A)\), for limit \(\alpha\). Here, in addition, we need \(A\) to carry a pre-ordering, so we introduce \(\preccurlyeq\) as a new primitive binary relation symbol, besides \(\in\), and extend \(L\) to \(L(\preccurlyeq)=L\cup\{\preccurlyeq\}\). As usual, in order to avoid using the sorts \(S(\cdot)\) and \(A(\cdot)\), we use variables \(a,b,c,\ldots\) for atoms, variables \(x,y,z,\ldots\) for sets, and also variables \(u\), \(v\), \(w,\ldots\) that range over both sets and atoms. The atomic formulas of \(L(\preccurlyeq)\) are those of \(L\), plus the formulas \(a\preccurlyeq b\). For every \(a\in A\), let \(pr(a)\) denote the set of its _predecessors,_

\[pr(a)=\{b\in A:b\preccurlyeq a\}.\]

In view of the preceding discussion, \(pr(a)\) represents the set of elements of \(A\) which _depend_ on \(a\). 
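A concrete toy example, ours rather than Castoriadis', may help fix ideas before the formal development. Index the atoms by the integers and stipulate that every occurrence of \(a_{n}\) brings with it all \(a_{m}\) with \(m\geq n\):

\[A=\{a_{n}:n\in\mathbb{Z}\},\qquad a_{m}\preccurlyeq a_{n}\ :\Leftrightarrow\ m\geq n.\]

Then \(pr(a_{n})=\{a_{m}:m\geq n\}\) is infinite for every \(n\), and the downward closed collections of Definition 2.1 are exactly the final segments together with \(A\) itself:

\[m(A)=\{\,\{a_{m}:m\geq n\}:n\in\mathbb{Z}\,\}\cup\{A\},\]

a chain under \(\subseteq\) with no minimal member. This example also satisfies the axioms \(D_{1}\)-\(D_{3}\) introduced below, so it can serve as a running sanity check for what follows.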
It is well-known that over every pre-ordered set \(\langle A,\preccurlyeq\rangle\), the sets \(pr(a)\), \(a\in A\), form the basis of one of the natural topologies induced by \(\preccurlyeq\), usually called "lower topology" for obvious reasons (the corresponding "upper topology" has as basis the sets \(suc(a)=\{b:a\preccurlyeq b\}\)). A set \(x\subseteq A\) is said to be _open_ w.r.t. the lower topology, or _lower open_, if for every \(a\in x\), \(pr(a)\subseteq x\). (It is easy to check that \(x\subseteq A\) is a lower open set if and only if \(A\backslash x\) is an upper open set.) By the transitivity of \(\preccurlyeq\), for every \(b\in pr(a)\), \(pr(b)\subseteq pr(a)\), so every \(pr(a)\) is open, and we refer to them as "basic open" sets, or b.o. sets for short. Let \(LO(A,\preccurlyeq)\) denote the set of all _nonempty_ lower open subsets of \(A\). It is obvious that for every family \((x_{i})_{i\in I}\) of elements of \(LO(A,\preccurlyeq)\), \(\cup_{i\in I}x_{i}\) belongs to \(LO(A,\preccurlyeq)\), and so does also \(\cap_{i\in I}x_{i}\) whenever it is nonempty. In particular \(A\in LO(A,\preccurlyeq)\). A moment's inspection shows that the sets in \(LO(A,\preccurlyeq)\) are exactly those of the class \(m(A)\) defined in Definition 2.1, that is,

\[m(A)=LO(A,\preccurlyeq)=\{x\subseteq A:x\neq\emptyset\wedge(\forall a\in x)(pr(a)\subseteq x)\}.\]

Note that there can be \(a\neq b\) in \(A\) such that \(a\preccurlyeq b\) and \(b\preccurlyeq a\). This is in accordance with the intuitive meaning of \(\preccurlyeq\), and expresses the fact that \(a,b\) are _mutually dependent_. We then write \(a\sim b\); \(\sim\) is an equivalence relation on \(A\) and we denote by \([a]_{\sim}\), or just \([a]\), the equivalence class of \(a\). Obviously, \([a]=[b]\) if and only if \(pr(a)=pr(b)\). A set \(x\in LO(A,\preccurlyeq)\) is said to be \(\subseteq\)_-minimal_ or just _minimal_, if there is no \(y\in LO(A,\preccurlyeq)\) such that \(y\varsubsetneq x\). **Lemma 2.2**: _Let \(x\in LO(A,\preccurlyeq)\). The following are equivalent._ _(i) \(x\) is minimal._ _(ii) \((\forall a\in x)(x=pr(a))\)._ _(iii) \((\forall a\in x)(x=[a])\)._ _Proof._ (i) \(\Rightarrow\) (ii) Let \(x\) be open and minimal and assume that for some \(a\in x\), \(x\neq pr(a)\). Since, by openness, \(pr(a)\subseteq x\), it means that \(pr(a)\varsubsetneq x\), which contradicts the minimality of \(x\). (ii) \(\Rightarrow\) (iii). Assume (ii) is true, and let \(a\in x\). Since \([a]\subseteq pr(a)\subseteq x\), it is always true that \([a]\subseteq x\). For the converse inclusion, pick any \(c\in x\). By (ii), \(pr(c)=x=pr(a)\), hence \(c\sim a\), i.e., \(c\in[a]\). Therefore \(x\subseteq[a]\), and so \(x=[a]\). (iii) \(\Rightarrow\) (i). We show the contrapositive. Suppose (i) is false, that is, there is an open \(y\) such that \(y\varsubsetneq x\). Picking \(a\in y\), we have \([a]\subseteq pr(a)\subseteq y\varsubsetneq x\), therefore \([a]\neq x\), so (iii) is false. \(\dashv\)
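Lemma 2.2 can also be checked mechanically. The following Python sketch is our own illustration: the five atoms and the generating pairs are arbitrary choices, and a finite set of atoms necessarily admits minimal open sets (it violates the condition (*) imposed below), which is precisely the situation the lemma describes. The sketch closes the relation under reflexivity and transitivity, enumerates \(LO(A,\preccurlyeq)\), and confirms that the minimal open sets are exactly the classes \([a]\).

```python
from itertools import combinations

# A hypothetical five-atom toy relation (our choice, purely illustrative):
# read (x, y) as "x depends on y", then close under reflexivity and transitivity.
A = {"a", "b", "c", "d", "e"}
leq = {("a", "b"), ("b", "a"), ("a", "c"), ("d", "c"), ("e", "d")}
leq |= {(x, x) for x in A}
for _ in A:  # crude transitive closure by repeated composition
    leq |= {(x, v) for (x, y) in leq for (u, v) in leq if y == u}

pr = {a: frozenset(b for b in A if (b, a) in leq) for a in A}

def is_open(x):
    """Nonempty and downward closed, i.e. a member of LO(A, <=)."""
    return bool(x) and all(pr[a] <= x for a in x)

LO = [frozenset(s) for k in range(1, len(A) + 1)
      for s in combinations(sorted(A), k) if is_open(frozenset(s))]

minimal = [x for x in LO if not any(y < x for y in LO)]
classes = {frozenset(b for b in A if (a, b) in leq and (b, a) in leq) for a in A}

# Lemma 2.2: each minimal open set is pr(a) = [a] for every one of its elements.
assert all(all(x == pr[a] for a in x) and x in classes for x in minimal)
print("minimal open sets:", sorted(map(sorted, minimal)))
```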
Now for an arbitrary pre-ordering \(\preccurlyeq\) it is clear that we may have \(pr(a)=\{a\}\), for some \(a\). In that case \(\{a\}\) should be included in the class of magmas over \(A\), a fact which is counterintuitive according to our previous discussion. But even if \(pr(a)\neq\{a\}\), as long as \(pr(a)\) is _minimal_, and hence \(pr(a)=[a]\) according to Lemma 2.2, we shall have the same problem later, with respect to the pre-ordering \(\preccurlyeq^{+}\), which will be defined in the next section on \(LO(A,\preccurlyeq)\). Namely, in that case the singleton \(\{pr(a)\}\) will be open in the lower topology induced by \(\preccurlyeq^{+}\). So it is necessary to avoid the existence of minimal open sets not only in \(LO(A,\preccurlyeq)\), but also in all topologies induced by the shiftings of \(\preccurlyeq\) to the higher levels of the magmatic hierarchy. **Lemma 2.3**: _If \(pr(a)\) is finite, then it contains minimal open subsets._ _Proof._ Let \(pr(a)=\{b_{1},\ldots,b_{n}\}\) for some \(n\geq 1\). For each \(i=1,\ldots,n\), \(pr(b_{i})\subseteq pr(a)\) is finite, so we can pick one \(pr(b_{k})\) with the _least number_ of elements. Then \(pr(b_{k})\) is minimal. For otherwise \(pr(b_{k})\) would contain a \(b_{j}\) such that \(pr(b_{j})\varsubsetneq pr(b_{k})\). But then \(|pr(b_{j})|<|pr(b_{k})|\), a contradiction. \(\dashv\) It follows from the preceding lemma that in order to avoid the existence of minimal open sets, it is necessary to impose a condition on \(\preccurlyeq\) which implies that every b.o. set \(pr(a)\) is infinite. Given a pre-ordering \(\preccurlyeq\) on \(A\), let us define

\[a\prec b\ \mbox{iff}\ (a\preccurlyeq b\ \wedge\ b\not\preccurlyeq a)\ \Leftrightarrow\ (a\preccurlyeq b\ \wedge\ a\not\sim b).\]

A reasonable condition which will guarantee the absence of minimal open sets in \(LO(A,\preccurlyeq)\) is the following: (*) \((\forall a\in A)(\exists b\in A)(b\prec a).\) **Proposition 2.4**: _(i) If \(\preccurlyeq\) satisfies (*), \(LO(A,\preccurlyeq)\) does not contain minimal sets. A fortiori, in view of Lemma 2.3, all \(pr(a)\) are infinite, and hence all sets in \(LO(A,\preccurlyeq)\) are infinite._ _(ii) The converse is also true. That is, if (*) fails, then \(LO(A,\preccurlyeq)\) has minimal open sets._ _Proof._ (i) Suppose \(\preccurlyeq\) satisfies (*). It suffices to show that no b.o. set \(pr(a)\) is minimal (indeed, every minimal open set is necessarily a b.o. set: if \(x\) is minimal open and \(a\in x\), then \(pr(a)\subseteq x\) forces \(pr(a)=x\)). Take a set \(pr(a)\). By (*) there is \(b\) such that \(b\prec a\). Then \(b\in pr(a)\), so \(pr(b)\subseteq pr(a)\), while \(a\notin pr(b)\), because \(a\not\preccurlyeq b\), so \(a\in pr(a)\backslash pr(b)\), and therefore \(pr(b)\varsubsetneq pr(a)\). (ii) Assume that (*) fails, i.e. there is \(a\) such that \((\forall b)(b\not\prec a)\), or \[(\forall b)(b\preccurlyeq a\to a\preccurlyeq b).\] The latter means that \((\forall b)(b\in pr(a)\to b\in[a])\), or that \(pr(a)\subseteq[a]\). Since always \([a]\subseteq pr(a)\), it means that for this specific \(a\), \(pr(a)=[a]\). Thus \((\forall c\in pr(a))(pr(a)=[c])\), i.e., condition (iii) of Lemma 2.2 holds, and therefore \(pr(a)\) is minimal. \(\dashv\) It follows that condition (*) for \(\preccurlyeq\) is necessary and sufficient in order for \(LO(A,\preccurlyeq)\) not to contain minimal open sets. It is exactly such pre-orderings, satisfying (*), that we are going to deal with below. Moreover this property will be incorporated in the formal system we shall adopt for the treatment of magmas. Namely, the formal system in which we work below differs from ZFA in the following points: (a) We strengthen the schemes of Separation and Replacement of ZFA so that they hold for the formulas of \(L(\preccurlyeq)\) rather than just \(L\). We denote this system ZFA\({}_{\preccurlyeq}\). (Notice that without this strengthening, we could not guarantee, for example, that the collections \(pr(a)=\{b\in A:b\preccurlyeq a\}\) and \(LO(A,\preccurlyeq)\) are sets.) (b) We add to the axioms of ZFA\({}_{\preccurlyeq}\) the following statements about \(\preccurlyeq\): \((D_{1})\) \((\forall a)(a\preccurlyeq a)\) (reflexivity). 
\((D_{2})\) \((\forall a,b,c)(a\preccurlyeq b\wedge b\preccurlyeq c\to a\preccurlyeq c)\) (transitivity). \((D_{3})\) \((\forall a)(\exists b)(b\prec a)\) (no minimal elements).4 Footnote 4: Axiom \(D_{3}\) suggests that _every_ element of \(A\) depends on other elements, whereas one may object that even in magmas, e.g. in collections like those of meanings, memories, etc., there may exist _independent_ elements, i.e., \(a\in A\) such that \((\forall b\neq a)(a\not\preccurlyeq b\wedge b\not\preccurlyeq a)\). This is a reasonable objection, provided we agree that such elements are rather exceptions to the rule, and that the subset of \(A\) that consists of the _dependent_ elements is still infinite. But then it suffices simply to take as set of atoms the set \(A^{\prime}=\{a\in A:(\exists b\neq a)(a\preccurlyeq b\lor b\preccurlyeq a)\}\) instead of \(A\), and work with it as before. So the theory we shall be working in below is

\[\mbox{ZFA}_{D}=\mbox{ZFA}_{\preccurlyeq}+\{D_{1},D_{2},D_{3}\}.\]

In this theory we shall define the class of magmas \(M(A)\) as a subclass of the universe \(V(A)\). And as the universe of ZFA is made of levels \(V_{\alpha}(A)\), for \(\alpha\in Ord\), the class \(M(A)\) will also be made of levels \(M_{\alpha}(A)\subseteq V_{\alpha}(A)\). As the first level we take the set

\[M_{1}(A)=LO(A,\preccurlyeq), \tag{2}\]

so indeed \(M_{1}(A)\subseteq V_{1}(A)\). **Remarks 2.5**: (i) Since \(A\in M_{1}(A)\), \(A\) itself is a magma. If \((x_{i})_{i\in I}\) is any family of magmas, then so is \(\bigcup_{i}x_{i}\), as well as \(\bigcap_{i}x_{i}\) if it is nonempty. (ii) In view of axiom \(D_{3}\), which is identical to condition (*), and Proposition 2.4 (i), all sets of \(M_{1}(A)\) are infinite. (iii) Every open set \(x\) is _saturated_ with respect to the equivalence relation \(\sim\) induced by \(\preccurlyeq\). That is, for every \(a\in x\), \([a]_{\sim}\subseteq x\). This follows from the fact that the b.o. sets \(pr(a)\) are saturated. Our next step is to define higher levels \(M_{\alpha}(A)\), for \(\alpha\geq 1\), of the magmatic hierarchy. For this task we need to shift the pre-ordering \(\preccurlyeq\) of \(A\) to the levels \({\cal P}^{\alpha}(A)\) appropriately.

## 3 Shiftings of pre-orderings to powersets

Given a pre-ordered set \(\langle A,\preccurlyeq\rangle\), a natural way to shift \(\preccurlyeq\) to the set \({\cal P}(A)\) is by means of the relation \(\preccurlyeq^{+}\) of "simulation" defined as follows: For \(x,y\subseteq A\), let

\[x\preccurlyeq^{+}y:\Leftrightarrow(\forall a\in x)(\exists b\in y)(a\preccurlyeq b). \tag{3}\]

We call \(\preccurlyeq^{+}\) the _exponential shifting_, or just _shifting_, of \(\preccurlyeq\).5 Footnote 5: I borrowed the notation \(\preccurlyeq^{+}\) from [1] although, given a relation \(R\subseteq X\times X\), Aczel denotes by \(R^{+}\) the relation on \({\cal P}(X)\) defined by: \[xR^{+}y\Leftrightarrow(\forall u\in x)(\exists v\in y)(uRv)\ \&\ (\forall v\in y)(\exists u\in x)(uRv).\] If in addition \(X\) is a transitive set and \(R\subseteq R^{+}\), \(R^{+}\) is said to be a "bisimulation." So our definition of \(\preccurlyeq^{+}\) is "half" of that of \(R^{+}\). This is because the above definition of \(R^{+}\) is appropriate for symmetric relations \(R\), especially equivalences, while \(\preccurlyeq\) is a nonsymmetric relation. **Lemma 3.1**: _(i) If \(\preccurlyeq\) is a pre-ordering on \(A\), \(\preccurlyeq^{+}\) is a pre-ordering on \({\cal P}(A)\). 
(However, if \(\preccurlyeq\) is an order, \(\preccurlyeq^{+}\) need not be so.)_ _(ii) If \(\preccurlyeq\) is a total pre-order, i.e., \(a\preccurlyeq b\) or \(b\preccurlyeq a\) for all \(a,b\in A\), so is \(\preccurlyeq^{+}\)._ _Proof._ (i) It is straightforward that \(\preccurlyeq^{+}\) is reflexive and transitive. (As a counterexample for the case of orderings, take \(\preccurlyeq\) to be a total order on \(A\), and let \(x_{1}\), \(x_{2}\) be distinct cofinal subsets of \((A,\preccurlyeq)\), i.e., \((\forall a\in A)(\exists b\in x_{i})(a\preccurlyeq b)\), for \(i=1,2\). Then clearly \(x_{1}\preccurlyeq^{+}x_{2}\) and \(x_{2}\preccurlyeq^{+}x_{1}\), while \(x_{1}\neq x_{2}\).) (ii) Assume \(\preccurlyeq\) is total and that for \(x,y\in{\cal P}(A)\), \(x\not\preccurlyeq^{+}y\). Then \((\exists a\in x)(\forall b\in y)(a\not\preccurlyeq b)\). Since \(\preccurlyeq\) is total, it follows that \((\exists a\in x)(\forall b\in y)(b\preccurlyeq a)\). But then, by logic alone, \((\forall b\in y)(\exists a\in x)(b\preccurlyeq a)\), so \(y\preccurlyeq^{+}x\). \(\dashv\) Since \(\preccurlyeq^{+}\) is a pre-ordering on \({\cal P}(A)\), the sets

\[pr^{+}(x)=\{y\subseteq A:y\preccurlyeq^{+}x\}\]

form the b.o. sets of the lower topology \(LO({\cal P}(A),\preccurlyeq^{+})\). Recall that \(LO(A,\preccurlyeq)\subseteq{\cal P}(A)\), so the pre-ordering \(\preccurlyeq^{+}\) applies also to the elements of \(LO(A,\preccurlyeq)\). Then the following remarkable relation holds between the sets \(pr^{+}(x)\) and the powersets of \(x\). **Proposition 3.2**: _(i) For any \(x,y\in{\cal P}(A)\), \(y\subseteq x\Rightarrow y\preccurlyeq^{+}x\), so \({\cal P}(x)\subseteq pr^{+}(x)\)._ _(ii) If \(x\in LO(A,\preccurlyeq)\), then the converse of (i) holds, i.e., for every \(y\in{\cal P}(A)\), \(y\preccurlyeq^{+}x\Rightarrow y\subseteq x\), so \(pr^{+}(x)\subseteq{\cal P}(x)\)._ _(iii) Therefore for every \(x\in LO(A,\preccurlyeq)\), \(pr^{+}(x)={\cal P}(x)\)._ _(iv) In particular, \(\preccurlyeq^{+}\upharpoonright LO(A,\preccurlyeq)=\subseteq\), and_ \[LO(LO(A,\preccurlyeq),\preccurlyeq^{+})=LO(LO(A,\preccurlyeq),\subseteq).\] _Proof._ (i). Let \(y\subseteq x\) and pick \(a\in y\). Then \(a\in x\) and since \(a\preccurlyeq a\), it means that there is \(b\in x\) such that \(a\preccurlyeq b\). Therefore \(y\preccurlyeq^{+}x\). (ii) Suppose \(x\) is open, let \(y\preccurlyeq^{+}x\) and pick \(a\in y\). Since \(y\preccurlyeq^{+}x\), there is \(b\in x\) such that \(a\preccurlyeq b\), i.e., \(a\in pr(b)\). But \(pr(b)\subseteq x\), since \(x\) is open, so \(a\in x\) and therefore \(y\subseteq x\). (iii) and (iv) follow immediately from (i) and (ii). \(\dashv\) **Corollary 3.3**: _If \(\preccurlyeq\) satisfies \(D_{3}\), i.e., does not contain minimal elements, then so does \(\preccurlyeq^{+}\) on \(LO(A,\preccurlyeq)\)._ _Proof._ Let \(x\in LO(A,\preccurlyeq)\). We have to show that there is \(y\in LO(A,\preccurlyeq)\) such that \(y\prec^{+}x\), i.e., \(y\preccurlyeq^{+}x\) and \(x\not\preccurlyeq^{+}y\). By Proposition 3.2 (iv), this amounts to finding an open \(y\) such that \(y\subseteq x\) and \(x\not\subseteq y\), or equivalently \(y\varsubsetneq x\). Pick an \(a\in x\). Then \(pr(a)\subseteq x\). By \(D_{3}\), there is a \(b\) such that \(b\prec a\), hence \(pr(b)\varsubsetneq pr(a)\subseteq x\). Therefore \(pr(b)\varsubsetneq x\) and \(pr(b)\) is open. \(\dashv\)
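Proposition 3.2 lends itself to the same kind of brute-force check. The sketch below (again a toy of our own; the four-atom relation is arbitrary, and the proposition does not depend on condition (*)) implements the shifting of definition (3) and verifies that \(pr^{+}(x)={\cal P}(x)\) for every lower open \(x\).

```python
from itertools import combinations

# A hypothetical four-atom chain-like pre-order (ours), written out already
# reflexive and transitive: a <= b <= c <= d.
A = {"a", "b", "c", "d"}
leq = {(x, x) for x in A} | {("a", "b"), ("a", "c"), ("b", "c"),
                             ("a", "d"), ("b", "d"), ("c", "d")}

pr = {a: frozenset(b for b in A if (b, a) in leq) for a in A}

def subsets(s):
    return [frozenset(c) for k in range(len(s) + 1)
            for c in combinations(sorted(s), k)]

def shifted(x, y):
    """x <=+ y in the sense of definition (3)."""
    return all(any((a, b) in leq for b in y) for a in x)

LO = [x for x in subsets(A) if x and all(pr[a] <= x for a in x)]

# Proposition 3.2 (iii): for every lower open x, pr+(x) = P(x).
for x in LO:
    pr_plus = {y for y in subsets(A) if shifted(y, x)}
    assert pr_plus == set(subsets(x))
print("pr+(x) = P(x) verified for all", len(LO), "open sets")
```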
Proposition 3.2 is crucial for the construction of the levels \(M_{\alpha}(A)\) of the magmatic hierarchy, for every \(\alpha\geq 1\). It says that, whatever the starting pre-ordering \(\preccurlyeq\) of \(A\) is, the restriction of \(\preccurlyeq^{+}\) to \(LO(A,\preccurlyeq)\) is \(\subseteq\). That is, if we set \(M_{1}=LO(A,\preccurlyeq)\), then \(LO(M_{1},\preccurlyeq^{+})=LO(M_{1},\subseteq)\). And for the same reason, if we set \(M_{2}=LO(M_{1},\subseteq)\), the shifting \(\subseteq^{+}\) of the relation \(\subseteq\) of \(M_{1}\) to the sets of \(M_{2}\) is \(\subseteq\) again. And so on with \(\subseteq^{++}\), \(\subseteq^{+++}\) etc, for all subsequent levels \(M_{3}\), \(M_{4},\ldots\), which are defined similarly. As a result, every such level consists of infinite sets only, ordered by \(\subseteq\) with no minimal element. Moreover, when we reach a limit ordinal \(\alpha\), we can take as \(M_{\alpha}\) just the union of all previous levels, ordered again by \(\subseteq\). And then we can proceed further by setting \(M_{\alpha+1}=LO(M_{\alpha},\subseteq)\). This way \(M_{\alpha}\) is defined for every ordinal \(\alpha\geq 1\).

## 4 The magmatic universe

Fix a set \(A\) of atoms and a pre-ordering \(\preccurlyeq\) of it which satisfies the axioms of ZFA\({}_{D}\); in particular \(\preccurlyeq\) has no minimal element. In view of Proposition 3.2 and the remarks at the end of the last section, we can define the levels \(M_{\alpha}(A)\) of the _magmatic hierarchy above_ \(A\), by induction on \(\alpha\) as follows. (For simplicity we write \(M_{\alpha}\) instead of \(M_{\alpha}(A)\).)

**Definition 4.1**:

\(M_{1}=LO(A,\preccurlyeq)\).

\(M_{\alpha+1}=LO(M_{\alpha},\subseteq)\), for every \(\alpha\geq 1\).

\(M_{\alpha}=\bigcup_{1\leq\beta<\alpha}M_{\beta}\), if \(\alpha\) is a limit ordinal.

\(M=M(A)=\bigcup_{\alpha\geq 1}M_{\alpha}\).

\(M\) is said to be the _magmatic universe above_ \(A\) (with respect to the pre-ordering \(\preccurlyeq\)). Here are some basic facts about the \(M_{\alpha}\)'s and \(M\). **Lemma 4.2**: _(i) \(M_{\alpha}\subseteq V_{\alpha}(A)\), \(M_{1}\subseteq{\cal P}(A)\backslash\{\emptyset\}\), \(M_{\alpha+1}\subseteq{\cal P}(M_{\alpha})\backslash\{\emptyset\}\) and \(M_{\alpha}\in M_{\alpha+1}\), for every \(\alpha\geq 1\)._ _(ii) The b.o. sets of the space \(M_{\alpha+1}=LO(M_{\alpha},\subseteq)\) are the sets_ \[pr_{\alpha}(x)=\{y\in M_{\alpha}:y\subseteq x\}={\cal P}(x)\cap M_{\alpha}.\] _Therefore:_ \[x\in M_{\alpha+1}\ \Leftrightarrow\ x\subseteq M_{\alpha}\wedge x\neq\emptyset\wedge(\forall y\in x)({\cal P}(y)\cap M_{\alpha}\subseteq x).\] _(iii) The class \(M\) is "almost" transitive, in the sense that for every \(x\in M_{\alpha}\) for \(\alpha\geq 2\), \(x\subseteq M\). However for \(x\in M_{1}\), \(x\subseteq A\). So \(M\cup A\) is transitive._ _(iv) \(M\) is a proper subclass of \(V(A)\), while \(M\cap V=\emptyset\) where \(V\) is the subclass of pure sets of \(V(A)\) (that is of \(x\)'s such that \(TC(x)\cap A=\emptyset\))._ _(v) The inductive definition of \(M_{\alpha}\)'s does not reach any fixed point, that is for every \(\alpha\geq 1\), \(M_{\alpha+1}\neq M_{\alpha}\)._ _(vi) All sets of \(M\) are infinite._ _(vii) There is no \(\subseteq\)-minimal set in \(M\)._ _Proof._ (i), (ii) and (iii) follow immediately from the definitions. (iv) Let \(rank(x)\) be the usual rank of sets in \(V(A)\), i.e. \(rank(x)=\min\{\alpha:x\in V_{\alpha+1}(A)\}\). It is easy to see by induction that for every \(\alpha\geq 1\), \(rank(M_{\alpha})=\alpha\). Concerning the other claim, let \(x\in V\cap M\) be a pure set of least rank \(\alpha\). Then \(x\in V_{\alpha+1}\cap M\). 
If \(\alpha\geq 1\), \(x\subseteq V_{\alpha}\cap M\), and \(x\neq\emptyset\). But then there is \(y\in x\), \(y\in V\cap M\) and \(rank(y)<rank(x)\), a contradiction. So \(rank(x)=0\), which means that \(x=\emptyset\) and \(x\in V_{1}\cap M\), a contradiction again since \(\emptyset\notin M\). (v) If \(M_{\alpha+1}=M_{\alpha}\), then \(M_{\alpha}=LO(M_{\alpha},\subseteq)\), hence by (i) \(M_{\alpha}\in M_{\alpha}\), a contradiction. (A bit differently: if \(M_{\alpha+1}=M_{\alpha}\), then \(M_{\beta}=M_{\alpha}\) for every \(\beta>\alpha\), and hence \(M=M_{\alpha}\), so \(M\) would be a set, which contradicts (iv).) (vi) and (vii) follow from the fact that the initial pre-ordering \(\preccurlyeq\) on \(A\) does not contain minimal elements (by \(D_{3}\)); hence, by Corollary 3.3, neither does the relation \(\preccurlyeq^{+}=\subseteq\) on \(M_{1}\), and, by induction, neither does \(\subseteq\) on any \(M_{\alpha}\). Therefore, by Proposition 2.4, for every \(\alpha\geq 0\), (a) all sets of \(M_{\alpha+1}\) are infinite, and (b) \(M_{\alpha+1}\) does not contain \(\subseteq\)-minimal sets. \(\dashv\) **Remark 4.3**: _Notice that as follows from Lemma 4.2 (i), \(M_{\alpha}\subseteq V_{\alpha}(A)\cap M\). However in general \(M_{\alpha}\neq M\cap V_{\alpha}(A)\)._ _Proof._ Consider, for example, the levels \(M_{1}\), \(M_{2}\) of \(M\) which are disjoint (see Lemma 4.8 below). Then \(M_{1}\subseteq{\cal P}(A)\) and \(M_{2}\subseteq{\cal P}(M_{1})\subseteq{\cal P}^{2}(A)\). Since \({\cal P}(A)\cup{\cal P}^{2}(A)\subseteq V_{2}(A)\), we have \(M_{1}\cup M_{2}\subseteq M\cap V_{2}(A)\). However \(M_{1}\cup M_{2}\not\subseteq M_{2}\) because \(M_{1}\cap M_{2}=\emptyset\). Therefore \(M_{2}\neq M\cap V_{2}(A)\). \(\dashv\) By the next three results it is shown that the collection \({\cal P}(x)\cap M\) of submagmas of a magma \(x\) is a magma too, one that occurs exactly at the next level above that of \(x\). **Lemma 4.4**: _(i) If \({\cal P}(x)\cap M_{\alpha}\neq\emptyset\), then \({\cal P}(x)\cap M_{\alpha}\in M_{\alpha+1}\)._ _(ii) For every \(x\in M\), there is a limit ordinal \(\beta\) such that \({\cal P}(x)\cap M={\cal P}(x)\cap M_{\beta}\)._ _Proof._ (i) Notice that if \(x\in M_{\alpha}\), then, by Lemma 4.2 (ii), \({\cal P}(x)\cap M_{\alpha}=pr_{\alpha}(x)\), which is a b.o. set of \(LO(M_{\alpha},\subseteq)=M_{\alpha+1}\). However we can prove the claim without this assumption. So let \(u={\cal P}(x)\cap M_{\alpha}\neq\emptyset\). Then \(u\subseteq M_{\alpha}\), so it suffices to show that \(u\in LO(M_{\alpha},\subseteq)\), i.e., \((\forall z\in u)({\cal P}(z)\cap M_{\alpha}\subseteq u)\). Pick a \(z\in u\). Then \(z\subseteq x\), so \({\cal P}(z)\subseteq{\cal P}(x)\), and therefore \({\cal P}(z)\cap M_{\alpha}\subseteq{\cal P}(x)\cap M_{\alpha}=u\), as required. (ii) \(M\) is a (definable) subclass of \(V(A)\) and \({\cal P}(x)\) is a set, so by the Separation scheme of ZFA\({}_{D}\), \({\cal P}(x)\cap M\) is a set. For each \(y\in{\cal P}(x)\cap M\), let \(\beta_{y}=\min\{\gamma\in Ord:y\in M_{\gamma}\}\). If \(X=\{\beta_{y}:y\in{\cal P}(x)\cap M\}\), \(X\) is a set of ordinals (by the Replacement axiom), so \(\sup X\) exists, and let \(\beta\) be the first limit ordinal such that \(\sup X\leq\beta\). Then \(M_{\beta}=\bigcup_{1\leq\gamma<\beta}M_{\gamma}\), and therefore \({\cal P}(x)\cap M\subseteq M_{\beta}\). But then also \({\cal P}(x)\cap M\subseteq{\cal P}(x)\cap M_{\beta}\), so finally \({\cal P}(x)\cap M={\cal P}(x)\cap M_{\beta}\) since the reverse inclusion holds trivially. 
\(\dashv\) **Proposition 4.5**: _(i) If \(x\in M_{1}\), then \({\cal P}(x)\cap M\subseteq M_{1}\), and hence \({\cal P}(x)\cap M={\cal P}(x)\cap M_{1}\). In particular \({\cal P}(A)\cap M=M_{1}\)._ _(ii) If \(x\in M_{\alpha+1}\), then \({\cal P}(x)\cap M\subseteq M_{\alpha+1}\), for every \(\alpha\geq 1\), and hence \({\cal P}(x)\cap M={\cal P}(x)\cap M_{\alpha+1}\). In particular \({\cal P}(M_{\alpha})\cap M=M_{\alpha+1}\)._ _Proof._ (i) Let \(x\in M_{1}\) and pick a \(y\in{\cal P}(x)\cap M\). Then \(y\subseteq x\subseteq A\). If \(y\notin M_{1}\), then \(y\in M_{\alpha+1}\) for some \(\alpha\geq 1\). So \(y\subseteq M_{\alpha}\), and hence \(A\cap M_{\alpha}\neq\emptyset\), which contradicts the fact that \(A\cap M=\emptyset\). For the case of \(x=A\), we have \(A\in M_{1}\), so \({\cal P}(A)\cap M\subseteq M_{1}\), but also \(M_{1}=LO(A,\preccurlyeq)\subseteq{\cal P}(A)\); therefore \({\cal P}(A)\cap M=M_{1}\). (ii) Let \(x\in M_{\alpha+1}\), and fix a \(y_{0}\in{\cal P}(x)\cap M\). We have to show that \(y_{0}\in M_{\alpha+1}\), i.e., \(y_{0}\subseteq M_{\alpha}\) and \(y_{0}\) is open in \(M_{\alpha}\). Since \(y_{0}\subseteq x\subseteq M_{\alpha}\), already \(y_{0}\subseteq M_{\alpha}\). Towards a contradiction, suppose that \(y_{0}\) is not open in \(M_{\alpha}\). It means that the following holds: (a) \((\exists z\in y_{0})({\cal P}(z)\cap M_{\alpha}\not\subseteq y_{0})\). Now by Lemma 4.4 (ii), there is a limit \(\beta>\alpha+1\) such that \({\cal P}(x)\cap M={\cal P}(x)\cap M_{\beta}\), so by Lemma 4.4 (i), \({\cal P}(x)\cap M\in M_{\beta+1}\). It follows that \(y_{0}\in M_{\beta+1}\), i.e. \(y_{0}\) is an open subset of \(M_{\beta}\). So (b) \((\forall z\in y_{0})({\cal P}(z)\cap M_{\beta}\subseteq y_{0})\). But since \(\beta>\alpha+1\) and \(\beta\) is limit, \(M_{\alpha}\subseteq M_{\beta}\), so (c) \((\forall z\in y_{0})({\cal P}(z)\cap M_{\alpha}\subseteq{\cal P}(z)\cap M_{\beta})\). Obviously (b) and (c) contradict (a), and this proves the claim. In particular for \(x=M_{\alpha}\), \({\cal P}(M_{\alpha})\cap M\subseteq M_{\alpha+1}\), since \(M_{\alpha}\in M_{\alpha+1}\), but also \(M_{\alpha+1}=LO(M_{\alpha},\subseteq)\subseteq{\cal P}(M_{\alpha})\), so \({\cal P}(M_{\alpha})\cap M=M_{\alpha+1}\). \(\dashv\) **Corollary 4.6**: _(i) For every \(\alpha\geq 0\), if \(x\in M_{\alpha+1}\) then \({\cal P}(x)\cap M\in M_{\alpha+2}\)._ _(ii) If \(x\in M\) and \(x\subseteq M_{\alpha}\), then \(x\in M_{\alpha+1}\) (that is, every subset of \(M_{\alpha}\) that belongs to \(M\), is an open subset of \(M_{\alpha}\))._ _Proof._ (i) By Proposition 4.5 (ii), for every \(\alpha\geq 0\), if \(x\in M_{\alpha+1}\), then \({\cal P}(x)\cap M={\cal P}(x)\cap M_{\alpha+1}\). In addition, by Lemma 4.4 (i), \({\cal P}(x)\cap M_{\alpha+1}\in M_{\alpha+2}\). (ii) It follows again from clause (ii) of Proposition 4.5, in particular from the equality \({\cal P}(M_{\alpha})\cap M=M_{\alpha+1}\). \(\dashv\) Proposition 4.5 helps us also to compare the levels of the form \(M_{\alpha+n}\), where \(\alpha\) is a limit ordinal and \(n\geq 0\). **Lemma 4.7**: _For all limit ordinals \(\alpha\), \(\beta\) such that \(\alpha<\beta\), and all \(n\geq 0\), \(M_{\alpha+n}\subseteq M_{\beta+n}\)._ _Proof._ That \(M_{\alpha}\subseteq M_{\beta}\) follows from the definition of \(M_{\alpha}\) for limit \(\alpha\). We prove the claim for \(n=1\), i.e., \(M_{\alpha+1}\subseteq M_{\beta+1}\). Since \(M_{\alpha+1}=LO(M_{\alpha},\subseteq)\) and \(M_{\beta+1}=LO(M_{\beta},\subseteq)\), it suffices to show that every b.o. 
subset of \(M_{\alpha}\), \(pr_{\alpha}(x)\) for some \(x\in M_{\alpha}\), is also a b.o. subset of \(M_{\beta}\). Pick an \(x\in M_{\alpha}\), so \(x\in M_{\beta}\) too. Since \(\alpha\) is a limit ordinal, \(x\in M_{\gamma+1}\) for some \(\gamma<\alpha\), and by Proposition 4.5 (ii), \({\cal P}(x)\cap M={\cal P}(x)\cap M_{\gamma+1}\). Then \(M_{\gamma+1}\subseteq M_{\alpha}\subseteq M_{\beta}\), which implies that \({\cal P}(x)\cap M={\cal P}(x)\cap M_{\alpha}={\cal P}(x)\cap M_{\beta}\), so

\[pr_{\alpha}(x)={\cal P}(x)\cap M_{\alpha}={\cal P}(x)\cap M_{\beta}=pr_{\beta}(x). \tag{4}\]

This proves that every set \(pr_{\alpha}(x)\) is also an open subset of \(M_{\beta}\), and proves that \(M_{\alpha+1}\subseteq M_{\beta+1}\). That \(M_{\alpha+n}\subseteq M_{\beta+n}\), for every \(n\geq 1\), is shown by an easy induction with the help of Proposition 4.5 (ii) as before. \(\dashv\) We give next some results concerning the non-limit levels of \(M\). First notice that the levels \(M_{n}\), for finite \(n\), have the peculiarity of being pairwise disjoint. **Lemma 4.8**: \(M_{n}\cap M_{m}=\emptyset\)_, for all \(n\neq m\)._ _Proof._ Let us set \({\cal P}^{*}(X)={\cal P}(X)\backslash\{\emptyset\}\), for any set \(X\). By Lemma 4.2 (i), \(M_{1}\subseteq{\cal P}^{*}(A)\), and \(M_{2}\subseteq{\cal P}^{*}(M_{1})\subseteq{\cal P}^{*}({\cal P}^{*}(A))={\cal P}^{*2}(A)\). So by induction, for every \(n\geq 1\), \[M_{n}\subseteq{\cal P}^{*n}(A). \tag{5}\] On the other hand, \(A\cap{\cal P}^{*k}(A)=\emptyset\), for every \(k>0\), since \(A\) contains non-sets, therefore \({\cal P}^{*n}(A)\cap{\cal P}^{*n+k}(A)=\emptyset\), for every \(n\) and every \(k>0\), or for all \(n\neq m\), \[{\cal P}^{*n}(A)\cap{\cal P}^{*m}(A)=\emptyset. \tag{6}\] By (5) and (6), \(M_{n}\cap M_{m}=\emptyset\) for all \(n\neq m\). \(\dashv\) We can give now a uniform characterization of the sets of \(M_{\alpha+1}=LO(M_{\alpha},\subseteq)\), for any limit ordinal \(\alpha\). First observe that if \(\alpha\) is limit and \(x\subseteq M_{\alpha}=\bigcup_{1\leq\beta<\alpha}M_{\beta}\), then \(x\) can be written \(x=\bigcup_{\beta<\alpha}(x\cap M_{\beta+1})\), because every \(y\in x\) belongs to some level of the form \(M_{\beta+1}\), for a \(\beta<\alpha\). **Proposition 4.9**: _Let \(\alpha\) be a limit ordinal. Let \(\emptyset\neq x\subseteq M_{\alpha}\), \(I=\{\beta<\alpha:x\cap M_{\beta+1}\neq\emptyset\}\), and \(x_{\beta+1}=x\cap M_{\beta+1}\), for every \(\beta\in I\). Then \(x\in M_{\alpha+1}\) iff_ \[I\neq\emptyset\ \wedge\ x=\bigcup_{\beta\in I}x_{\beta+1}\ \wedge\ (\forall\beta\in I)(x_{\beta+1}\in M_{\beta+2}). \tag{7}\] _Proof._ Given \(\emptyset\neq x\subseteq M_{\alpha}\), clearly \(I\neq\emptyset\) and \(x=\bigcup_{\beta\in I}x_{\beta+1}\). So in order to prove (7) it suffices to prove the equivalence \[x\in M_{\alpha+1}\Leftrightarrow(\forall\beta\in I)(x_{\beta+1}\in M_{\beta+2}). \tag{8}\] \(\Rightarrow\) of (8): Assume \(x\in M_{\alpha+1}\), let \(\beta\in I\), and pick \(y\in x_{\beta+1}\). We have to show that \({\cal P}(y)\cap M_{\beta+1}\subseteq x_{\beta+1}\). Now \(y\in x\) and the assumption \(x\in M_{\alpha+1}\) implies \({\cal P}(y)\cap M_{\alpha}\subseteq x\). The last inclusion yields \({\cal P}(y)\cap M_{\alpha}\cap M_{\beta+1}\subseteq x\cap M_{\beta+1}\), which is identical to the required inclusion \({\cal P}(y)\cap M_{\beta+1}\subseteq x_{\beta+1}\). \(\Leftarrow\) of (8): Assume \((\forall\beta\in I)(x_{\beta+1}\in M_{\beta+2})\), where \(x_{\beta+1}=x\cap M_{\beta+1}\). 
It means that for every \(\beta\in I\), \((\forall y\in x_{\beta+1})({\cal P}(y)\cap M_{\beta+1}\subseteq x_{\beta+1})\). Pick a \(y\in x\). We have to show that \({\cal P}(y)\cap M_{\alpha}\subseteq x\). Now \(y\in x_{\beta+1}\), for some \(\beta\in I\), hence \(y\in M_{\beta+1}\). By Proposition 4.5 (ii), \(y\in M_{\beta+1}\) implies that \({\cal P}(y)\cap M\subseteq M_{\beta+1}\), so \({\cal P}(y)\cap M_{\alpha}={\cal P}(y)\cap M_{\beta+1}\), and since by assumption \({\cal P}(y)\cap M_{\beta+1}\subseteq x_{\beta+1}\), and \(x_{\beta+1}\subseteq x\), it follows \({\cal P}(y)\cap M_{\alpha}\subseteq x\) as required. This completes the proof of (8) and the Proposition. \(\dashv\) **Corollary 4.10**: _Let \(\alpha\) be a limit ordinal. Then:_ _(i) For all \(1\leq i\leq n\), \(M_{i}\cap M_{\alpha+n}=\emptyset\). That is, \((\bigcup_{1\leq i\leq n}M_{i})\cap M_{\alpha+n}=\emptyset\)._ _(ii) For every \(n\geq 1\), \(M_{\alpha}\setminus(\bigcup_{1\leq i\leq n}M_{i})=\bigcup_{n+1\leq\beta<\alpha}M_{\beta}\subseteq M_{\alpha+n}\)._ _(iii) For all \(n\geq 0\), \(M_{\alpha+n}\not\subseteq M_{\alpha+n+1}\)._ _Proof._ (i) It suffices to show that given limit \(\alpha\), for all \(k\geq 1\) and \(n\geq 0\), \(M_{k}\cap M_{\alpha+n+k}=\emptyset\). By induction on \(k\). Clearly \(A\cap M_{\alpha+n}=\emptyset\), so \({\cal P}^{*}(A)\cap{\cal P}^{*}(M_{\alpha+n})=\emptyset\). Since \(M_{1}\subseteq{\cal P}^{*}(A)\) and \(M_{\alpha+n+1}\subseteq{\cal P}^{*}(M_{\alpha+n})\), we have \(M_{1}\cap M_{\alpha+n+1}=\emptyset\), so the claim holds for \(k=1\). Assume \(M_{k}\cap M_{\alpha+n+k}=\emptyset\). Then \(M_{k+1}\subseteq{\cal P}^{*}(M_{k})\) and \(M_{\alpha+n+k+1}\subseteq{\cal P}^{*}(M_{\alpha+n+k})\). By the assumption the larger sets in these two relations are disjoint, thus so are the smaller ones. (ii) The claim holds for \(n=1\), because every \(x\in M_{\beta}\), for some \(2\leq\beta<\alpha\), is a subset of \(M_{\alpha}\) that satisfies condition (7) of Proposition 4.9, so \(x\in M_{\alpha+1}\). Then we can continue using induction on \(n\). For simplicity, for \(n\geq 1\), assume \(M_{n+1}\subseteq M_{\alpha+n}\), and prove that \(M_{n+2}\subseteq M_{\alpha+n+1}\). Let \(x\in M_{n+2}\). Then \(x\subseteq M_{n+1}\), and \((\forall y\in x)({\cal P}(y)\cap M_{n+1}\subseteq x)\). Now \(y\in x\) implies \(y\in M_{n+1}\), so by Proposition 4.5 (ii), \({\cal P}(y)\cap M\subseteq M_{n+1}\). By this fact and the induction assumption \(x\subseteq M_{n+1}\subseteq M_{\alpha+n}\), it follows that \(x\in M_{\alpha+n+1}\). (iii) By (i) and (ii) above, \(M_{\alpha}\not\subseteq M_{\alpha+1}\) because \(M_{1}\subseteq M_{\alpha}\) while \(M_{1}\cap M_{\alpha+1}=\emptyset\), and \(M_{\alpha+n}\not\subseteq M_{\alpha+n+1}\) because \(M_{n+1}\subseteq M_{\alpha+n}\), while \(M_{n+1}\cap M_{\alpha+n+1}=\emptyset\). \(\dashv\) It follows from clause (iii) of the preceding corollary that, in contrast to the limit levels \(M_{\alpha}\), which, by definition, contain the elements of all lower levels, the successor levels do not behave cumulatively. Each level \(M_{\alpha+n}\), for a limit \(\alpha\) and \(n\geq 1\), always omits elements of lower levels. One may infer from the comparison given above that \(M_{\alpha+n}\) and \(M_{\alpha+n+1}\) differ only with respect to elements of the initial levels \(M_{n}\) of the hierarchy, i.e. elements of \(M_{\omega}\). But this is not true. For example pick an \(x\in M_{2}\) and a \(y\in M_{3}\). 
Then clearly \(x\cup y\in M_{\omega+1}\), according to the characterization given in Proposition 4.9, while \(x\cup y\notin M_{\omega}\), so \(x\cup y\in M_{\omega+1}\backslash M_{\omega}\). But \(x\cup y\notin M_{\omega+2}\) either, because otherwise \(x\cup y\subseteq M_{\omega+1}\). Since \(x\subseteq M_{1}\), that would mean that \(M_{1}\cap M_{\omega+1}\neq\emptyset\), which is false according to clause (i) of the preceding Corollary. This shows that \(M_{\omega+1}\backslash M_{\omega}\not\subseteq M_{\omega+2}\). We turn now to another more standard point of view from which the class \(M\) could be looked at: the point of view from which \(M\) is seen as an \(\in\)-structure and so questions are raised as to what set-theoretic properties of the language \(L_{0}=\{\in\}\) could be satisfied in \(\langle M,\in\rangle\). Such a question about the truth of some of the axioms of ZF in \(\langle M,\in\rangle\) is reasonable. There is however a technical difficulty with sentences of \(L_{0}\) in \(M\), because of the lack of transitivity due to the bottom level \(M_{1}\), since for every \(x\in M_{1}\), \(x\cap M=\emptyset\). In view of this, given \(x,y\in M_{1}\), the truth of simple properties like \(x=y\) and \(x\subseteq y\) cannot be expressed inside \(M\) by the usual formulas. The problem is fixed if we add to \(M\) the atoms and work in \(M^{*}=M\cup A\) rather than \(M\), with language \(L=\{\in,S(\cdot),A(\cdot)\}\), or with sorted variables. Still, of course, we do not expect \(M^{*}\) to satisfy many of the closure properties expressed through the axioms of ZFA. Of these axioms Extensionality and Foundation do hold in \(\langle M^{*},\in\rangle\) because the latter is a transitive substructure of \(\langle V(A),\in\rangle\). But since by construction \(\emptyset\notin M\), the Emptyset axiom fails. So does the Pairing axiom, because every \(x\in M\) is an infinite set. The Infinity axiom fails too, since the "measure" by which we decide infinity of a set is \(\omega=\{0,1,\ldots\}\), and this is not a resident of \(M^{*}\). Finally the truth of Separation and Replacement in \(M^{*}\) is obviously out of the question. Nevertheless, the remaining two axioms, Powerset \((Pow)\) and Union \((Un)\), are indeed true (the second one not quite). We begin with \(Pow\), which is a direct consequence of a result established previously. **Proposition 4.11**: \(M^{*}\models Pow\). _Intuitively, the collection of submagmas of a magma is again a magma (of the next higher rank)._ _Proof._ We have to show that \(M^{*}\models(\forall x)(\exists y)(y={\cal P}(x))\), which, with the powerset relativized to \(M^{*}\), amounts to \((\forall x\in M)(\exists y\in M)(y={\cal P}(x)\cap M)\). However this follows immediately from Corollary 4.6 (i): if \(x\in M_{\alpha+1}\), then \({\cal P}(x)\cap M\in M_{\alpha+2}\). \(\dashv\) In contrast to the powerset operation, which "goes upward", the union operation "goes downward" and may lead out of \(M\) if \(\cup x\) hits the bottom level that consists of atoms. For example if \(x\in M_{1}\) then \(x\subseteq A\), so according to the formal definition of \(\cup x\), \(\cup x=\emptyset\notin M\). Perhaps one may guess that this concerns the elements of \(M_{1}\) only and that \((\forall x\in M\backslash M_{1})(\cup x\in M)\). But still this is not true. For example, as follows from Proposition 4.9, \(M_{1}\cup M_{2}\in M_{\omega+1}\) while \(\cup(M_{1}\cup M_{2})=(\cup M_{1})\cup(\cup M_{2})=A\cup M_{1}\). This does not belong to \(M\) either, because on the one hand clearly \(A\cup M_{1}\notin M_{1}\), and on the other if we assume \(A\cup M_{1}\in M_{\alpha+1}\), for some \(\alpha\geq 1\), then \(A\cup M_{1}\subseteq M_{\alpha}\subseteq M\), which is false because \(A\cap M=\emptyset\). In fact, since the elements of every \(x\in M\backslash M_{1}\) are open sets, and every union of open sets of the _same space_ \(LO(M_{\alpha},\subseteq)\) is open again, it follows that if \(x\subseteq LO(A,\preccurlyeq)\), or \(x\subseteq LO(M_{\alpha},\subseteq)\), for some \(\alpha\geq 1\), then \(\cup x\in LO(A,\preccurlyeq)\), or \(\cup x\in LO(M_{\alpha},\subseteq)\), respectively. This situation occurs exactly when \(x\in M_{\alpha+2}\), for some \(\alpha\geq 0\), and gives a sufficient condition in order for \(\cup x\) to belong to \(M\). It turns out that this condition is also necessary. In fact the next proposition describes precisely, with two equivalent conditions, the elements of \(M\) whose unions belong to \(M\). **Proposition 4.12**: _Let \(x\in M\). The following conditions are equivalent._ _(i) \(x\in M_{\alpha+2}\), for some \(\alpha\geq 0\)._ _(ii) \(\cup x\in M\)._ _(iii) \(x\subseteq M_{1}\) or \(x\cap M_{1}=\emptyset\)._ _Proof._ (i)\(\Rightarrow\)(ii) Assume first that \(\alpha\neq 0\) and \(x\in M_{\alpha+2}\). Then \(x\subseteq M_{\alpha+1}=LO(M_{\alpha},\subseteq)\), so \(x\) is a family of open subsets of \(M_{\alpha}\). The union \(\cup x\) of this family is open too, so \(\cup x\in M_{\alpha+1}\). If \(\alpha=0\), then \(x\in M_{2}\), or \(x\subseteq M_{1}=LO(A,\preccurlyeq)\), so \(\cup x\in M_{1}\). In both cases \(\cup x\in M\). (ii)\(\Rightarrow\) (iii): We prove the contrapositive. Suppose \(x\not\subseteq M_{1}\) and \(x\cap M_{1}\neq\emptyset\). 
If \(x_{1}=x\cap M_{1}\) and \(x_{2}=x\backslash x_{1}\), then \(x=x_{1}\cup x_{2}\), where \(x_{1},x_{2}\neq\emptyset\), \(x_{1}\subseteq M_{1}\) and \(x_{2}\subseteq M\backslash M_{1}\). Therefore \(\cup x=(\cup x_{1})\cup(\cup x_{2})\), where \(\cup x_{1}\subseteq\cup M_{1}=A\) and \(\cup x_{2}\cap A=\emptyset\). So if \(\cup x\in M\), necessarily \(\cup x\subseteq M_{\alpha}\) for some \(\alpha\geq 1\), and hence \(\cup x_{1}\subseteq M_{\alpha}\). But then \(A\cap M_{\alpha}\neq\emptyset\), a contradiction. (iii)\(\Rightarrow\) (i): Let \(x\subseteq M_{1}\). By Corollary 4.6 (ii), \(x\in M\) and \(x\subseteq M_{1}\) imply \(x\in M_{2}\), so we are done. Let now \(x\cap M_{1}=\emptyset\). Then \(x\subseteq M\backslash M_{1}\), and if \(x\in M_{\alpha+1}\), then \(x\subseteq M_{\alpha}\backslash M_{1}\). Without loss of generality we may take \(\alpha\) to be a limit ordinal. Then by Corollary 4.10 (ii), \(M_{\alpha}\backslash M_{1}\subseteq M_{\alpha+1}\), so \(x\subseteq M_{\alpha+1}\). By Corollary 4.6 (ii) again, \(x\in M_{\alpha+2}\). \(\dashv\) On the other hand every level of the form \(M_{\alpha+1}\), for limit \(\alpha\), contains sets \(x\) such that \(\cup x\notin M\). **Fact 4.13**: _If \(\alpha=0\), or \(\alpha\) is a limit ordinal, there exists \(x\in M_{\alpha+1}\) such that \(\cup x\notin M\)._ _Proof_. We saw above that for every \(x\in M_{1}\), \(\cup x\notin M\). Also if, for example, \(x=M_{1}\cup M_{2}\), then \(x\in M_{\omega+1}\) and \(\cup x\notin M\). By Lemma 4.7, \(M_{\omega+1}\subseteq M_{\alpha+1}\) for every limit \(\alpha\), so \(M_{1}\cup M_{2}\in M_{\alpha+1}\) too. \(\dashv\) We close here the description of the technical features of the magmatic universe and come to the question about the extent to which the class \(M\) actually satisfies some, or all, of Castoriadis' intuitive principles M1-M5, which he proposed as main properties of magmas. Recall however (see Introduction) that we decided to leave out M1 and M4 as inconsistent, and to reformulate slightly the rest of them into M2*, M3*, M5*. Given this adjustment, the answer to the question is yes: M2*, (a weak form of) M3* and M5* are true in \(M\). In our formalization some magmas can be called _basic_, in the sense that they generate all the rest of the same level. They are just the b.o. sets \(pr(a)\in M_{1}\), for \(a\in A\), and \(pr_{\alpha}(x)\in M_{\alpha+1}\), for \(x\in M_{\alpha}\) and \(\alpha\geq 1\). **Proposition 4.14**: _The following statements are true about \(M\):_ _(i) (M2*): For every magma \(x\) there is a magma \(y\neq x\) such that \(y\subseteq x\)._ _(ii) (weak M3*): If \(x\) is a basic magma, there is no finite partitioning of \(x\) into submagmas._ _(iii) (M5*): What is not a magma is a set or an atom._ _Proof_. (i) Let \(x\in M\) be a magma, and let \(x\in M_{\alpha+1}=LO(M_{\alpha},\subseteq)\), for some \(\alpha\geq 1\), or \(x\in M_{1}=LO(A,\preccurlyeq)\). We know (see Lemma 4.2) that none of these topologies contains minimal open sets, so there is always a \(y\in M_{\alpha+1}\) such that \(y\varsubsetneq x\). Such a \(y\) is a proper submagma of \(x\). (ii) Given a basic magma \(pr_{\alpha}(x)\), assume for simplicity that \(pr_{\alpha}(x)=y_{1}\cup y_{2}\), where \(y_{1}\), \(y_{2}\) are disjoint submagmas, that is, disjoint open sets \(y_{1}\), \(y_{2}\). But then either \(x\in y_{1}\) or \(x\in y_{2}\), hence either \(pr_{\alpha}(x)\subseteq y_{1}\), or \(pr_{\alpha}(x)\subseteq y_{2}\), both of which contradict the fact that \(pr_{\alpha}(x)=y_{1}\cup y_{2}\) and \(y_{1}\cap y_{2}=\emptyset\). 
(iii) This claim follows simply from the fact that the class \(M=M(A)\) of magmas above \(A\) is a subclass of the set-theoretic universe with atoms \(V(A)\). So if \(u\notin M(A)\), then necessarily either \(u\) is a classical set of \(V(A)\), or \(u\in A\). \(\dashv\)
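Clause (ii) of Proposition 4.14 can be illustrated on the same five-atom toy used earlier (ours, not part of the formal development): no basic open set splits into two disjoint nonempty open sets.

```python
from itertools import combinations

# The same hypothetical five-atom toy as before; again only the mechanics
# of the first level LO(A, <=) are exercised.
A = {"a", "b", "c", "d", "e"}
leq = {("a", "b"), ("b", "a"), ("a", "c"), ("d", "c"), ("e", "d")}
leq |= {(x, x) for x in A}
for _ in A:  # transitive closure by repeated composition
    leq |= {(x, v) for (x, y) in leq for (u, v) in leq if y == u}

pr = {a: frozenset(b for b in A if (b, a) in leq) for a in A}
opens = [frozenset(s)
         for k in range(1, len(A) + 1)
         for s in combinations(sorted(A), k)
         if all(pr[x] <= frozenset(s) for x in s)]

# Weak M3* (Proposition 4.14 (ii)): no basic magma pr(a) is the union of
# two disjoint open sets.
for a in A:
    for y1 in opens:
        for y2 in opens:
            assert not (y1 | y2 == pr[a] and not (y1 & y2))
print("no basic open set admits a partition into two open sets")
```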
2303.11709
Excited electronic states of Sr$_2$: ab initio predictions and experimental observation of the $2^1Σ^{+}_{u}$ state
Despite its apparently simple nature with four valence electrons, the strontium dimer constitutes a challenge for modern electronic structure theory. Here we focus on excited electronic states of Sr$_2$, which we investigate theoretically up to 25000 cm$^{-1}$ above the ground state, to guide and explain new spectroscopic measurements. In particular, we focus on potential energy curves for the $1^1\Sigma^{+}_{u}$, $2^1\Sigma^{+}_{u}$, $1^1\Pi_{u}$, $2^1\Pi_{u}$, and $1^1\Delta_{u}$ states computed using several variants of advanced \textit{ab initio} methods to benchmark them. In addition, a new experimental study of the excited $2^1\Sigma^{+}_{u}$ state using polarisation labelling spectroscopy is presented, which extends knowledge of this state to high vibrational levels, where perturbation by higher electronic states is observed. The available experimental observations are compared with the theoretical predictions and help to assess the accuracy and limitations of employed theoretical models. The present results pave the way for future more accurate theoretical and experimental spectroscopic studies.
Jacek Szczepkowski, Marcin Gronowski, Anna Grochola, Włodzimierz Jastrzebski, Michał Tomza, Paweł Kowalczyk
2023-03-21T10:01:55Z
http://arxiv.org/abs/2303.11709v1
Excited electronic states of Sr\({}_{2}\): _ab initio_ predictions and experimental observation of the \(2^{1}\Sigma_{u}^{+}\) state ###### Abstract Despite its apparently simple nature with four valence electrons, the strontium dimer constitutes a challenge for modern electronic structure theory. Here we focus on excited electronic states of Sr\({}_{2}\), which we investigate theoretically up to 25000 cm\({}^{-1}\) above the ground state, to guide and explain new spectroscopic measurements. In particular, we focus on potential energy curves for the \(1^{1}\Sigma_{u}^{+}\), \(2^{1}\Sigma_{u}^{+}\), \(1^{1}\Pi_{u}\), \(2^{1}\Pi_{u}\), and \(1^{1}\Delta_{u}\) states computed using several variants of advanced _ab initio_ methods to benchmark them. In addition, a new experimental study of the excited \(2^{1}\Sigma_{u}^{+}\) state using polarisation labelling spectroscopy is presented, which extends knowledge of this state to high vibrational levels, where perturbation by higher electronic states is observed. The available experimental observations are compared with the theoretical predictions and help to assess the accuracy and limitations of employed theoretical models. The present results pave the way for future more accurate theoretical and experimental spectroscopic studies. ## 1 Introduction Diatomic molecules at ultralow temperatures are a perfect platform for research touching upon the very fundamentals of quantum physics and chemistry [1]. Ultracold polar molecules have been proposed and employed for a plethora of ground-breaking experiments ranging from quantum-controlled collisions and chemical reactions [2] to quantum simulations [3] and precision measurements of fundamental constants and their spatiotemporal variation [4]. After spectacular successes with alkali-metal molecules, which can be efficiently formed from ultracold atoms using magnetoassociation [5] followed by optical stabilization [6], the production of ultracold molecules containing alkaline-earth-metal atoms has emerged as another important research goal. Recently, an ultracold gas of Sr\({}_{2}\) dimers in their absolute ground state was obtained using all-optical methods, where weakly bound singlet-state molecules were formed in an optical lattice by narrow-line photoassociation and transferred to the ground rovibrational level by the stimulated Raman adiabatic passage (STIRAP) [7]. Fast chemical reactions between such dimers were observed close to the universal limit. Nevertheless, ultracold Sr\({}_{2}\) molecules have already been employed in a series of exciting experiments ranging from studying asymptotic physics in subradiant states [8] to photodissociation with quantum state control [9]. Very recently, a new type of molecular lattice clock based on ultracold Sr\({}_{2}\) dimers with long vibrational coherence has also been established [10; 11]. This paves the way for upcoming applications of these molecules in quantum simulation [12], quantum metrology [13], and precision measurements probing the fundamental laws of nature [14; 15]. Exciting developments and applications of ultracold molecules described above would not have been feasible without thorough experimental spectroscopic analysis and substantial theoretical _ab initio_ electronic structure evaluations of the underlying molecular structure. The required level of accuracy varies for each application. Generally, precise measurements can provide more accurate outcomes than theoretical calculations. 
However, _ab initio_ quantum-chemical calculations of potential energy curves, permanent and transition electric dipole moments, and other molecular couplings are frequently necessary to propose, guide, and explain experimental endeavors. Alkaline-earth-metal diatomic molecules, despite their apparently simple nature with four valence electrons and a closed-shell ground electronic state, have constituted a challenge for modern electronic structure theory. Already the simplest Be\({}_{2}\) dimer presents unusually strong bonding and a unique shape of the ground-state potential energy curve [16], whose accurate theoretical description required highly correlated methods [17]. Confirming the existence of elusive vibrational states of the ground-state Mg\({}_{2}\) dimer also required state-of-the-art quantum-chemical calculations [18]. Thus, it is not surprising that the accurate theoretical description of the Sr\({}_{2}\) dimer in the ground and excited electronic states may require careful treatment, similar to lighter neutral dimers or the charged Sr\({}_{2}^{+}\) molecular ion [19]. The ground \(X^{1}\Sigma_{g}^{+}\) and excited \(2^{1}\Sigma_{u}^{+}\) and \(3^{1}\Pi_{u}\) electronic states of Sr\({}_{2}\) were initially investigated experimentally with absorption and laser-induced fluorescence spectroscopy [20; 21; 22], followed by high-resolution Fourier-transform laser-induced fluorescence spectroscopy of the \(X^{1}\Sigma_{g}^{+}\) state [23; 24] and of the minimum region of the excited \(1^{1}\Sigma_{u}^{+}\), \(1^{1}\Pi_{u}\), and \(2^{1}\Sigma_{u}^{+}\) states [25]. Recently, highly accurate measurements with ultracold Sr\({}_{2}\) allowed for improving the accuracy of rovibrational spectra of the \(X^{1}\Sigma_{g}^{+}\) and \(1^{1}\Sigma_{u}^{+}\) states [7]. The ground and excited electronic states of Sr\({}_{2}\) were also investigated theoretically using different computational approaches, including large-core semiempirical pseudopotentials [26; 27], small-core relativistic pseudopotentials [28], and an all-electron relativistic Hamiltonian [29]. The challenging character of calculations for excited molecular electronic states can be seen in the contradictory dissociation energies reported for the lowest excited \(1^{1}\Sigma_{u}^{+}\) and \(1^{1}\Pi_{u}\) states, without detailed estimates of the computational uncertainties. Several other studies focused solely on the ground \(X^{1}\Sigma_{g}^{+}\) electronic state [30; 31; 32; 33; 34; 35], which is already well understood. In this work, we investigate the excited electronic states of the Sr\({}_{2}\) molecule. We start with the computational evaluation of the complete molecular electronic spectrum up to the excitation energy of around \(25000\,\mathrm{cm}^{-1}\). Next, we compute potential energy curves for the \(1^{1}\Sigma_{u}^{+}\), \(2^{1}\Sigma_{u}^{+}\), \(1^{1}\Pi_{u}\), \(2^{1}\Pi_{u}\), and \(1^{1}\Delta_{u}\) states using several variants of advanced _ab initio_ methods. New experimental measurements of the excited \(2^{1}\Sigma_{u}^{+}\) state using polarisation labelling spectroscopy are presented, extending the range of observed vibrational levels to higher energies. The corresponding Dunham coefficients and experimental potential energy curve are reported. The observed perturbations in the recorded spectrum give preliminary information on higher-excited electronic states.
The comparison of the experimental observations with several theoretical predictions helps to assess and benchmark the accuracy and limitations of the employed theoretical models. The structure of the paper is as follows. In Section 2, we describe the employed theoretical methods and the obtained computational results. In Section 3, we present the experimental technique used and the spectroscopic results. In Section 4, we conclude and provide an outlook.

## 2 Electronic structure calculations

### Computational methods

Several computational approaches were used in the electronic structure calculations to assess and benchmark their accuracy. The all-electron computations employed the eXact-2-Component Hamiltonian [36] and the equation-of-motion coupled cluster method [37] with single and double excitations (EOM-CCSD) [38] with the relativistic correlation-consistent core-valence quadruple-zeta basis sets (aug-cc-pwCVQZ-X2C) [39] in the version implemented in the Molpro 2022.1 program [40]. We explored the impact of the core-electron correlation on the results by correlating only valence electrons (denoted as x2cCCSDv), valence and \(4s4p\) electrons (denoted as x2cCCSDbc), as well as valence, \(4s4p\), and \(3s3p3d\) electrons (denoted as x2cCCSDsc). Other equation-of-motion coupled cluster computations used the small-core relativistic energy-consistent ECP28MDF pseudopotential [41; 42] with the quadruple- and quintuple-zeta pseudopotential-based correlation-consistent polarised core-valence basis sets (aug-cc-pCVQZ-PP and aug-cc-pCV5Z-PP, denoted as QZ and 5Z, respectively) [39] in the Cfour 2.1 software [43]. We obtained the complete basis set (CBS) limit with the two-point \(1/X^{3}\) extrapolation [44]. To estimate the role of the higher excitations, we compared EOM-CCSD (denoted as ecpCCSD) and EOM-CCSDT-3 [45; 46] (denoted as ecpCCSDT3). We also performed multireference computations with the standard Davidson correction [47]. We described the valence-electron correlation by the multiconfiguration reference internally contracted configuration interaction method [48; 50; 49] with the active space composed of 20 (sMRCI+Q) or 24 (MRCI+Q) orbitals. Such a sizeable active space is necessary to correctly describe the \(2^{1}\Pi_{u}\) state of Sr\({}_{2}\). The orbitals were optimised at the complete active space self-consistent field (CASSCF) level [51]. Additionally, we employed the hybrid CIPT2+Q method [52], which adds the core-electron correlation to MRCI+Q by the multireference Rayleigh-Schrödinger second-order perturbation theory. All multireference computations used the aug-cc-pwCV5Z-PP basis set [39] and were performed with the Molpro 2022.1 program. Since we encountered convergence problems in the calculations for monomers in the dimer basis set, we assumed that the basis set superposition error for the multireference methods is the same as for ecpCCSD. This assumption is well justified as the total basis set superposition error is relatively small for Sr\({}_{2}\). We shifted the computed interaction energies by the sum of the appropriate experimentally measured atomic excitation energies from the NIST database [53] and \(1081.64\,\mathrm{cm}^{-1}\), corresponding to the depth of the molecular ground state [24]. This procedure guarantees that the reported energies are relative to the ground state's minimum and tend to the corresponding atomic values in their asymptotes.
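For illustration, the two-point \(1/X^{3}\) CBS extrapolation used above can be sketched in a few lines of Python. The formula is the standard one for energies obtained in basis sets with cardinal numbers \(X\) and \(Y\); the function name and sample energies below are illustrative, not taken from the paper's workflow:

```python
def cbs_extrapolate(e_x: float, e_y: float, x: int = 4, y: int = 5) -> float:
    """Two-point 1/X^3 extrapolation: E_CBS = (x^3 E_x - y^3 E_y) / (x^3 - y^3).

    e_x, e_y : energies computed in basis sets of cardinal numbers x and y
               (here x = 4 for QZ and y = 5 for 5Z).
    """
    return (x**3 * e_x - y**3 * e_y) / (x**3 - y**3)

# Hypothetical interaction energies (cm^-1) at one internuclear distance:
e_qz, e_5z = -3922.0, -3950.0
print(f"E(CBS) = {cbs_extrapolate(e_qz, e_5z):.1f} cm^-1")
```

The extrapolated value always lies beyond the 5Z result in the direction of the QZ-to-5Z change, consistent with the monotonic basis-set convergence assumed by the \(1/X^{3}\) form.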
### Theoretical results

Figure 1 shows an overview of potential energy curves (PECs) computed with the sMRCI+Q/5Z approach, which is often the method of choice to study the excited electronic states of diatomic molecules. However, in the case of Sr\({}_{2}\), this approach has some shortcomings, as we shall discuss later. The main conclusion from the overview of excited states is that the \(2^{1}\Sigma_{u}^{+}\) state is fairly well separated from other states, and, thus, a relatively small number of perturbations from other states should be expected. The strontium dimer contains 76 electrons. Therefore, a precise quantum-mechanical description is challenging. Additionally, the large charge of its nuclei limits the applicability of non-relativistic quantum mechanics. Thus, an accurate description of Sr\({}_{2}\) has to include: _i_) an extensive orbital basis set, _ii_) the valence-electron correlation, _iii_) the core-electron correlation, _iv_) the scalar relativistic contribution, _v_) spin-related relativistic effects, like fine and hyperfine couplings, and _vi_) the leading quantum electrodynamic corrections. However, it is currently only feasible to simultaneously account for some of these effects for many-electron molecules. Here, we neglect the spin-related and quantum electrodynamic contributions (_v_-_vi_) and explore the sensitivity of the potential energy curves to the remaining contributions (_i_-_iv_), which are usually the most crucial for reaching a quantitative description of any molecule. The spin-orbit coupling, the largest neglected contribution, can be perturbatively added in the next steps [54; 28]. Figure 2 presents the PECs for the \(1^{1}\Sigma_{u}^{+}\), \(2^{1}\Sigma_{u}^{+}\), \(1^{1}\Pi_{u}\), \(2^{1}\Pi_{u}\), and \(1^{1}\Delta_{u}\) electronic states obtained at several different levels of theory. The corresponding spectroscopic parameters are collected in Tab. 1. We selected these singlet ungerade states for detailed computational tests because they are the most relevant for the parallel spectroscopic measurements. The first observation is that the PECs exhibit relatively low sensitivity to the orbital basis set size, meaning that calculations in the quadruple- and quintuple-zeta basis sets are already close enough to the complete basis set limit. This can be demonstrated by comparing the results in the quintuple-zeta basis set with the estimated CBS limit (ecpCCSDT3/5Z vs. ecpCCSDT3/CBS). The difference in the dissociation energy is of the order of 30 cm\({}^{-1}\) for the \(1^{1}\Sigma_{u}^{+}\) and \(1^{1}\Delta_{u}\) states and of the order of 200 cm\({}^{-1}\) for the \(1^{1}\Pi_{u}\) and \(2^{1}\Sigma_{u}^{+}\) states. Next, we observe that correlating only the valence electrons is insufficient. We systematically investigate the effect of the core-electron correlation by changing the size of the frozen core in the all-electron equation-of-motion coupled-cluster computations with the eXact-2-Component Hamiltonian (x2cCCSDv vs. x2cCCSDbc vs. x2cCCSDsc). The lack of the core correlation not only alters the depths of the PECs by hundreds of cm\({}^{-1}\) but, more importantly, elongates the equilibrium distance. Surprisingly, the core correlation also affects the asymptotic region of the \(1^{1}\Sigma_{u}^{+}\) and \(2^{1}\Sigma_{u}^{+}\) states. On the other hand, the correlation of the valence and \(4s4p\) electrons is sufficient, and the correlation of the electrons occupying the lower orbitals (replaced by the small-core pseudopotential) is not necessary. Finally, we address the relativistic effects.
Our calculations include the scalar relativistic effects only. We do not observe substantial differences between the all-electron x2cCCSDsc and small-core-pseudopotential ecpCCSD results. Therefore, we conclude that using the small-core ECP28MDF pseudopotential to account for the scalar relativistic effects is justified and sufficient, which is in agreement with other studies [28; 55].

Figure 1: Potential energy curves for the ground and excited electronic states of Sr\({}_{2}\) obtained in the non-relativistic spin-free sMRCI+Q/5Z computations with the scalar-relativistic small-core pseudopotential. The states are labeled by symmetry and asymptote, or only by symmetry in the case of states involving asymptotes for which numerical difficulties prevent obtaining whole PECs.

Our calculations do not account for the spin-related part of the Dirac-Coulomb-Breit Hamiltonian. It is necessary to go beyond this approximation to correctly describe the crossings between states of different spin multiplicity coupled by the spin-orbit interaction, which can be added to our curves perturbatively. For example, this coupling is important for the \(1^{1}\Sigma_{u}^{+}\) state at distances larger than 4.5 Å. Kotochigova [29] included the spin-orbit part in her computations directly, but her results for \(1^{1}\Sigma_{u}^{+}\) significantly deviate from modern experimental results and our calculations due to her approximate treatment of the electron correlation by the configuration interaction valence bond self-consistent-field approach. In contrast, Skomorowski _et al._ [28] included the electron correlation directly and the spin-orbit interaction perturbatively for \(1^{1}\Sigma_{u}^{+}\) with the coupled cluster method and a small-core pseudopotential, and obtained a much better agreement with the experiment. Overall, the primary factor determining the accuracy of the calculations for Sr\({}_{2}\) is the inclusion of high excitations in the description of the valence-electron correlation. For the analyzed electronic states, the difference between the ecpCCSDT3 and ecpCCSD results, that is, the inclusion of triple excitations, is more significant than the effect of the core-electron correlation. Among the methods used, the ecpCCSDT3 and CIPT2+Q approaches include the core-electron correlation and a substantial part of the excitations higher than doubles. The CIPT2+Q method is a multireference approach that includes all possible excitations within the active space. Thus, CIPT2+Q accounts well for the static correlation but gives only an approximate description of the core-electron dynamic correlation. This approximation was necessary since an alternative approach, based on the multireference configuration interaction method with a large active space that correlates the core electrons, goes beyond the technical capabilities of modern quantum-chemical programs. On the other hand, the single-reference ecpCCSDT3 approach accounts for the dynamic correlation of core and valence electrons but only includes a fraction of the triple and higher excitations. The importance of higher excitations and of the multireference nature of the states can be seen by analyzing the wavefunctions. We inspected the squares of the reference coefficients obtained with the sMRCI+Q method at \(R=4.13\) Å, close to the respective equilibrium distances.
We found that the electronic wavefunction of the \(1^{1}\Sigma_{u}^{+}\) state consists mostly of single-excited determinants (72%), and the role of excitations higher than double is negligible (3%). Therefore, it is not surprising that for this state we observe the smallest difference between the energies obtained with the ecpCCSDT3 and CIPT2+Q approaches, which also agree well with another single-reference calculation reported by Skomorowski _et al._ [28]. A slightly smaller role of single-excited determinants is visible for the \(2^{1}\Sigma_{u}^{+}\) and \(1^{1}\Delta_{u}\) states (about 66%), where the difference between the PECs calculated with the ecpCCSDT3 and CIPT2+Q methods is larger. Still, the role of triple excitations for these states is below 5%, and higher excitations are an order of magnitude less important. For the \(1^{1}\Pi_{u}\) state, we observe similar contributions from single- and double-excited determinants (49% and 39%), which explains the significant difference between the ecpCCSD and ecpCCSDT3 results. The role of triple and quadruple excitations for this state is relatively low, accounting for about 5% and 0.65%, respectively. Therefore, we can conclude that the application of the full configuration interaction method may be unnecessary to obtain accurate results for the \(1^{1}\Sigma_{u}^{+}\), \(2^{1}\Sigma_{u}^{+}\), \(1^{1}\Delta_{u}\), and \(1^{1}\Pi_{u}\) states. We can also assume that the ecpCCSDT3 and CIPT2+Q approaches properly set the boundaries for the PEC shapes. Indeed, near the minima, the experimental PECs for the \(1^{1}\Sigma_{u}^{+}\), \(2^{1}\Sigma_{u}^{+}\), and \(1^{1}\Pi_{u}\) states lie between the curves obtained with the ecpCCSDT3 and CIPT2+Q methods.

\begin{table}
\begin{tabular}{l l l c c c c c}
\hline \hline
state & method & \(T_{e}\) & \(E_{e}\) & \(R_{e}\) & \(\omega_{e}\) & \(B_{e}\) & \(D_{e}\) \\
 & & [cm\({}^{-1}\)] & [cm\({}^{-1}\)] & [Å] & [cm\({}^{-1}\)] & [cm\({}^{-1}\)] & [10\({}^{-9}\) cm\({}^{-1}\)] \\
\hline
\(1^{1}\Sigma_{u}^{+}\) & ecpCCSDT3/QZ & 13013 & 8219 & 4.035 & 74.94 & 0.02356 & 9.32 \\
\(1^{1}\Sigma_{u}^{+}\) & ecpCCSDT3/5Z & 13010 & 8221 & 3.982 & 78.17 & 0.02419 & 9.27 \\
\(1^{1}\Sigma_{u}^{+}\) & ecpCCSDT3/CBS & 12984 & 8247 & 3.928 & 81.54 & 0.02486 & 9.24 \\
\(1^{1}\Sigma_{u}^{+}\) & ecpCCSD/5Z & 13693 & 7538 & 4.017 & 76.41 & 0.02376 & 9.20 \\
\(1^{1}\Sigma_{u}^{+}\) & sMRCI+Q/5Z & 12649 & 8582 & 4.209 & 71.25 & 0.02164 & 7.99 \\
\(1^{1}\Sigma_{u}^{+}\) & MRCI+Q/5Z & 12910 & 8321 & 4.181 & 71.12 & 0.02194 & 8.36 \\
\(1^{1}\Sigma_{u}^{+}\) & CIPT2+Q/5Z & 12502 & 8729 & 3.93 & 77.82 & 0.02484 & 10.0 \\
\(1^{1}\Sigma_{u}^{+}\) & x2cCCSDv & 13373 & 7858 & 4.272 & 69.86 & 0.02102 & 7.61 \\
\(1^{1}\Sigma_{u}^{+}\) & x2cCCSDbc & 13731 & 7500 & 4.037 & 75.28 & 0.02354 & 9.20 \\
\(1^{1}\Sigma_{u}^{+}\) & x2cCCSDsc & 13673 & 7559 & 4.044 & 74.81 & 0.02345 & 9.22 \\
\(1^{1}\Sigma_{u}^{+}\) & theory [26] & 12363 & & 3.850 & 79 & 0.0259 & \\
\(1^{1}\Sigma_{u}^{+}\) & theory [27] & & 5490 & 4.01 & 80.21 & & \\
\(1^{1}\Sigma_{u}^{+}\) & theory [29] & 17269 & 5475 & 4.02 & 88 & & \\
\(1^{1}\Sigma_{u}^{+}\) & theory [28] & & 8433 & 3.99 & & & \\
\(1^{1}\Sigma_{u}^{+}\) & exp. [25] & 12796(2) & & 3.95(1) & 80.71(3) & 0.024794(2) & \\
\hline
\(1^{1}\Delta_{u}\) & ecpCCSDT3/QZ & 16605 & 4627 & 3.939 & 85.34 & 0.02472 & 8.30 \\
\(1^{1}\Delta_{u}\) & ecpCCSDT3/5Z & 16580 & 4652 & 3.921 & 85.74 & 0.02494 & 8.44 \\
\(1^{1}\Delta_{u}\) & ecpCCSDT3/CBS & 16550 & 4681 & 3.903 & 86.34 & 0.02518 & 8.56 \\
\(1^{1}\Delta_{u}\) & ecpCCSD/5Z & 17626 & 3605 & 3.974 & 80.18 & 0.02429 & 8.92 \\
\(1^{1}\Delta_{u}\) & sMRCI+Q/5Z & 15864 & 5367 & 4.041 & 85.22 & 0.02349 & 7.14 \\
\(1^{1}\Delta_{u}\) & MRCI+Q/5Z & 15646 & 5585 & 4.031 & 84.99 & 0.02361 & 7.28 \\
\(1^{1}\Delta_{u}\) & CIPT2+Q/5Z & 15786 & 5446 & 3.902 & 113.1 & 0.02519 & 5.00 \\
\(1^{1}\Delta_{u}\) & x2cCCSDv & 17393 & 3839 & 4.106 & 78.01 & 0.02275 & 7.74 \\
\(1^{1}\Delta_{u}\) & x2cCCSDbc & 17635 & 3596 & 3.982 & 79.82 & 0.02418 & 8.88 \\
\(1^{1}\Delta_{u}\) & x2cCCSDsc & 17595 & 3636 & 3.98 & 80.09 & 0.02421 & 8.84 \\
\(1^{1}\Delta_{u}\) & theory [26] & 16158 & & 3.868 & 82 & 0.0257 & \\
\hline
\(1^{1}\Pi_{u}\) & ecpCCSDT3/QZ & 17331 & 3901 & 4.123 & 84.6 & 0.02256 & 6.42 \\
\(1^{1}\Pi_{u}\) & ecpCCSDT3/5Z & 17510 & 3722 & 4.107 & 84.67 & 0.02274 & 6.56 \\
\(1^{1}\Pi_{u}\) & ecpCCSDT3/CBS & 17695 & 3536 & 4.089 & 84.79 & 0.02293 & 6.71 \\
\(1^{1}\Pi_{u}\) & ecpCCSD/5Z & 19324 & 1907 & 4.217 & 75.01 & 0.02157 & 7.13 \\
\(1^{1}\Pi_{u}\) & sMRCI+Q/5Z & 14741 & 6490 & 4.173 & 85.77 & 0.02203 & 5.81 \\
\(1^{1}\Pi_{u}\) & MRCI+Q/5Z & 14530 & 6701 & 4.159 & 85.96 & 0.02217 & 5.90 \\
\(1^{1}\Pi_{u}\) & CIPT2+Q/5Z & 15933 & 5298 & 3.992 & 89.6 & 0.02407 & 6.94 \\
\(1^{1}\Pi_{u}\) & x2cCCSDv & 17530 & 3701 & 4.278 & 80.66 & 0.02096 & 5.66 \\
\(1^{1}\Pi_{u}\) & x2cCCSDbc & 19299 & 1932 & 4.226 & 75.57 & 0.02148 & 6.94 \\
\(1^{1}\Pi_{u}\) & x2cCCSDsc & 19219 & 2012 & 4.223 & 75.88 & 0.0215 & 6.91 \\
\(1^{1}\Pi_{u}\) & theory [26] & 16243 & & 3.952 & 96 & 0.0246 & \\
\(1^{1}\Pi_{u}\) & theory [29] & 18658 & 4081 & 3.93 & 72 & & \\
\(1^{1}\Pi_{u}\) & exp. [25] & 16617.86(2) & & 4.0473(2) & 86.300(3) & 0.023415(2) & 6.943 \\
\hline
\(2^{1}\Sigma_{u}^{+}\) & ecpCCSDT3/QZ & 18193 & 4587 & 4.192 & 82.78 & 0.02183 & 6.07 \\
\(2^{1}\Sigma_{u}^{+}\) & ecpCCSDT3/5Z & 18000 & 4780 & 4.19 & 82.17 & 0.02185 & 6.18 \\
\(2^{1}\Sigma_{u}^{+}\) & ecpCCSDT3/CBS & 17797 & 4983 & 4.187 & 81.52 & 0.02187 & 6.30 \\
\(2^{1}\Sigma_{u}^{+}\) & ecpCCSD/5Z & 18858 & 3922 & 4.276 & 71.96 & 0.02097 & 7.13 \\
\(2^{1}\Sigma_{u}^{+}\) & sMRCI+Q/5Z & 17674 & 5106 & 4.273 & 82.66 & 0.021 & 5.42 \\
\(2^{1}\Sigma_{u}^{+}\) & MRCI+Q/5Z & 17182 & 5599 & 4.277 & 84.23 & 0.02096 & \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Spectroscopic parameters of the \(1^{1}\Sigma_{u}^{+}\), \(1^{1}\Delta_{u}\), \(1^{1}\Pi_{u}\), and \(2^{1}\Sigma_{u}^{+}\) electronic states of Sr\({}_{2}\) obtained with different quantum chemistry methods, compared with previous theoretical and experimental results.
The \(2^{1}\Pi_{u}\) state deserves special attention and particular comment because the variation of the PECs obtained for this state with different methods is the largest, and this is the only analyzed state in which double excitations are dominant (75%). Additionally, it exhibits the highest contribution from the quadruple excitations (2%). It corresponds to the \({}^{1}P\)+\({}^{1}S\) asymptote, and for large internuclear distances, it is repulsive. We can observe its two avoided crossings with other states of the same symmetry. The one at a larger interatomic separation involves a state from the \({}^{3}P\)+\({}^{3}P\) asymptote. The assignment of the second crossing is far more complex, as numerical difficulties prevent obtaining whole PECs for all states from the \({}^{1}D(4d5p)\)+\({}^{1}S(5s^{2})\), \({}^{1}P(5s5p)\)+\({}^{1}S(5s^{2})\), \({}^{1}D(5s5d)\)+\({}^{1}S(5s^{2})\) and \({}^{3}D(5s4d)\)+\({}^{3}P(5s5p)\) asymptotes. We suppose that the doubly excited states, \({}^{1}D(4d5p)\)+\({}^{1}S(5s^{2})\) and \({}^{3}D(5s4d)\)+\({}^{3}P(5s5p)\), play a crucial role here. The equation-of-motion coupled cluster method with single and double excitations does not describe them accurately, so the PEC predicted at that level is mostly repulsive. The inclusion of some higher excitations by the ecpCCSDT3 method yields only a poor description of the \({}^{1}D(4d5p)\) state of Sr, whose excitation energy is overestimated by nearly 2000 cm\({}^{-1}\) (see Tab. 2). The MRCI+Q/5Z method predicts the energy of \({}^{1}D(4d5p)\) in reasonably good agreement with the experiment. On the other hand, MRCI+Q/5Z tends to overestimate the depth of the potential for other states of Sr\({}_{2}\). Additionally, our active space is too small to fully account for the \({}^{1}P(5s5p)\)+\({}^{1}S(5s^{2})\) and \({}^{1}D(5s5d)\)+\({}^{1}S(5s^{2})\) asymptotes. Overall, the expected position of the minimum in the \(2^{1}\Pi_{u}\) potential lies in the wide range between 19000 cm\({}^{-1}\) given by the CIPT2+Q/5Z method and 24000 cm\({}^{-1}\) from the ecpCCSDT3/CBS computation above the minimum of the ground electronic state (see Tab. 1). This range covers the value \(T_{e}>19100\) cm\({}^{-1}\) estimated based on our present experimental observations (_vide infra_). Our computations do not confirm the existence of the avoided crossing between \(1^{1}\Pi_{u}\) and \(2^{1}\Pi_{u}\) predicted by Boutassetta _et al._ [26]. We indeed observe that \(1^{1}\Pi_{u}\) and \(2^{1}\Pi_{u}\) approach each other in the repulsive part of the PECs, but a possible crossing may occur only in a region that is experimentally insignificant. We suppose that a small basis set, with an insufficient number of high angular momentum components, could have significantly decreased the precision of the Boutassetta _et al._ results for highly excited states. However, their approach accounts well for static correlation and thus reproduces the general shape of the PECs. Czuchaj _et al._ [27] reported a PEC that differs from ours and from the Boutassetta _et al._ results. However, their active space in the multireference computations was smaller than ours and did not allow for an accurate description of static correlation. We believe that the accurate description of the \(2^{1}\Pi_{u}\) state is a major computational challenge. Most likely, the only way to obtain its reliable and accurate description is to use the full configuration interaction method with a large basis set and a proper account of the core and core-valence correlation, which is out of the scope of this work.
Therefore, for now, we must assume that the exact shape of the curve is unknown and falls somewhere between the curves predicted by the ecpCCSDT3/CBS and CIPT2+Q/5Z approaches. We provide the potential energy curves for the \(1^{1}\Sigma_{u}^{+}\), \(2^{1}\Sigma_{u}^{+}\), \(1^{1}\Pi_{u}\), \(2^{1}\Pi_{u}\), and \(1^{1}\Delta_{u}\) states obtained with the most accurate ecpCCSDT3/CBS and CIPT2+Q/5Z methods, and for the states presented in Fig. 1 obtained with the sMRCI+Q/5Z method, in the supplementary data accompanying this paper [56].

\begin{table}
\begin{tabular}{c|c c c c c c}
\hline \hline
method & \({}^{1}D\) & \({}^{1}P\) & \({}^{1}S\) & \({}^{1}D\) & \({}^{1}P\) & \({}^{1}D\) \\
 & 5s4d & 5s5p & 5s6s & 4d5p & 5s6p & 5s5d \\
\hline
exp. [53] & 20149.685 & 21698.452 & 30591.825 & 33826.899 & 34098.404 & 34727.447 \\
x2cCCSDv & 20144 & 20817 & 29076 & 33569 & 32494 & 33623 \\
x2cCCSDbc & 20659 & 22612 & 30982 & — & 34811 & 35355 \\
x2cCCSDsc & 20855 & 22652 & 30993 & — & 34837 & 35405 \\
ecpCCSDT3/5Z & 20489 & 21845 & 30517 & 35776 & 34150 & 34820 \\
MRCI+Q/5Z & 20162 & 20849 & 29095 & 33609 & — & — \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Excitation energies (in cm\({}^{-1}\)) of singlet electronic states of the Sr atom obtained with different quantum chemistry methods.

## 3 Experiment

### Experimental setup

The Sr\({}_{2}\) molecules were produced in a three-section heat-pipe oven [57] of 1 m length filled in the central part with 15 g of strontium. The central part, with a length of 20 cm, was heated to 1020\({}^{\circ}\)C, while the external parts were maintained at 720\({}^{\circ}\)C. In the case of strontium, a proper circulation of the metal inside the heat-pipe is a challenge. To solve this problem, 1.5 g of metallic magnesium was added to the central section (strontium and magnesium form an alloy with a substantially lower melting point than either constituent [58]). A steel mesh was placed separately in each section, and the heat-pipe was filled with 15 Torr of argon buffer gas. A polarisation labelling spectroscopy (PLS) technique was employed to obtain spectra of the Sr\({}_{2}\) molecules. PLS is a pump-probe experimental technique that takes advantage of an optical anisotropy created in a chosen group of molecules in the sample to limit the number of observed spectral lines [59]. In the present experiment, a NarrowScan dye laser with a spectral linewidth of 0.07 cm\({}^{-1}\), pumped by a XeCl excimer laser (Light Machinery), was used as the pump laser, and its wavelength was scanned between 18400 cm\({}^{-1}\) and 20300 cm\({}^{-1}\), covering transitions from the ground state of Sr\({}_{2}\) to the upper part of the 2\({}^{1}\Sigma_{u}^{+}\) state. As the probe laser, a ring dye laser (Coherent 899, pumped with a Sprout laser) was employed, and its wavelength was monitored with a HighFinesse WS-7 wavemeter. The laser operated on Rhodamine 6G dye, which enabled tuning its light within the spectral range 16800 cm\({}^{-1}\) - 17600 cm\({}^{-1}\). The probe laser wavelength was fixed on selected transitions from the ground X\({}^{1}\Sigma_{g}^{+}\) state of the Sr\({}_{2}\) molecule to low levels of the 2\({}^{1}\Sigma_{u}^{+}\) state known from the supplementary materials of the publication describing the bottom part of this state [25].
Reference signals for wavenumber calibration of the molecular spectra were needed; therefore, two auxiliary signals were recorded: transmission fringes from a Fabry-Perot interferometer with \(\mathrm{FSR}=1\,\mathrm{cm}^{-1}\), and the optogalvanic spectrum from argon and neon hollow-cathode lamps. This ensured that the uncertainty of the wavenumbers determined in this way is below \(\pm 0.1\,\mathrm{cm}^{-1}\).

### Analysis of the spectra

As the bottom part of the 2\({}^{1}\Sigma_{u}^{+}\) state has been characterised in the Fourier-transform spectroscopy experiment by Stein _et al._ [25], we have concentrated on higher vibrational levels of this state. A typical example of the recorded spectrum of Sr\({}_{2}\) is presented in Fig. 3. Our experiment provided information about rovibrational levels with quantum numbers ranging from \(v^{\prime}=13\) to 52 and \(J^{\prime}\) from 43 to 149, solely in the most abundant \({}^{88}\)Sr\({}_{2}\) isotopologue. We supplemented the database with levels of \({}^{88}\)Sr\({}_{2}\) measured in [25]; however, to avoid too strong an influence of these levels on subsequent fits of molecular constants, we limited the borrowed levels to \(J^{\prime}<150\) and assigned them the same accuracy of 0.1 cm\({}^{-1}\) as our own data. This resulted in 714 levels taken from [25] and 1760 levels from our own measurements. The term values of all levels were calculated by adding the measured transition energies to the energies of the initial X\({}^{1}\Sigma_{g}^{+}\) (\(v^{\prime\prime}\), \(J^{\prime\prime}\)) levels obtained from the highly accurate molecular constants reported in Ref. [23]. Initially, we fitted the term energies to the standard Dunham expansion \[T(v,J)=T_{e}+\sum_{m,n}Y_{mn}\,(v+1/2)^{m}\left[J(J+1)\right]^{n}\,, \tag{1}\] but the rms error of the fit amounted to 0.5 cm\({}^{-1}\), i.e. five times more than our experimental accuracy. This result suggested strong perturbations in the 2\({}^{1}\Sigma_{u}^{+}\) state, particularly since the misbehaving levels, all corresponding to \(v^{\prime}\gtrsim 19\), were centred around isolated (\(v^{\prime}\), \(J^{\prime}\)) values and the deviations fell into patterns characteristic of perturbations. Figure 4 displays term values of several levels of the 2\({}^{1}\Sigma_{u}^{+}\) state plotted against \(J(J+1)\), with open squares localising regions of the observed perturbations. Figure 5 visualises a typical pattern of deviations between the observed and predicted line positions versus the rotational quantum number \(J\). In the preliminary analysis presented here, which aims primarily to test the accuracy of theoretical predictions based on various computational methods, we decided to remove the apparently perturbed levels from the database and to fit Dunham coefficients to the remaining levels. When fitting the energies of the somewhat arbitrarily chosen 1636 levels (out of the total 2474), we obtained an rms error of 0.08 cm\({}^{-1}\), this time consistent with the experimental accuracy. The Dunham coefficients have been rounded to minimise the number of digits by a procedure described by Le Roy [60]. They are listed in Tab. 3 together with the equilibrium bond length \(R_{e}\) calculated from the rotational constant and the reduced mass of the strontium nuclei. A rotationless potential energy curve for the 2\({}^{1}\Sigma_{u}^{+}\) state was constructed by a standard Rydberg-Klein-Rees (RKR) method.
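Because the term values in Eq. (1) are linear in the coefficients \(Y_{mn}\), the Dunham fit reduces to linear least squares. A minimal Python sketch is given below; the \((m,n)\) index set mirrors the coefficients in Tab. 3, while the synthetic data and the 0.1 cm\({}^{-1}\) noise level (standing in for the stated experimental accuracy) are purely illustrative:

```python
import numpy as np

def dunham_design_matrix(v, J, orders):
    """One column per (m, n) pair: (v + 1/2)^m * [J(J+1)]^n.

    The (0, 0) column is the constant term, i.e. T_e in Eq. (1)."""
    v, J = np.asarray(v, float), np.asarray(J, float)
    return np.column_stack([(v + 0.5) ** m * (J * (J + 1.0)) ** n
                            for m, n in orders])

# (m, n) orders corresponding to T_e, Y10..Y40, Y01..Y31, and Y02:
orders = [(0, 0), (1, 0), (2, 0), (3, 0), (4, 0),
          (0, 1), (1, 1), (2, 1), (3, 1), (0, 2)]

# Synthetic "measured" term values built from rounded Tab. 3 constants,
# with 0.1 cm^-1 Gaussian noise mimicking the experimental scatter.
rng = np.random.default_rng(0)
v_obs = rng.integers(0, 53, size=1636)
J_obs = rng.integers(30, 150, size=1636)
y_true = np.array([17358.7, 84.22, -0.267, -1.085e-3, -1.076e-5,
                   0.02199, -6.81e-5, -4.86e-7, -1.03e-8, -7.12e-9])
A = dunham_design_matrix(v_obs, J_obs, orders)
T_obs = A @ y_true + rng.normal(0.0, 0.1, size=v_obs.size)

# Unweighted linear least-squares fit and its rms error:
y_fit, *_ = np.linalg.lstsq(A, T_obs, rcond=None)
rms = np.sqrt(np.mean((A @ y_fit - T_obs) ** 2))
print(f"rms = {rms:.3f} cm^-1")  # ~0.1 cm^-1, cf. 0.08 cm^-1 in the final fit
```

In practice the fit is iterated with the rejection of perturbed levels, and the coefficients are then rounded following Le Roy [60], as described above.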
Figure 3: A portion of the experimental spectrum of Sr\({}_{2}\) corresponding to transitions from the rovibrational level \(v^{\prime\prime}=4\), \(J^{\prime\prime}=90\) in the ground X\({}^{1}\Sigma_{g}^{+}\) state to consecutive rovibrational levels (\(v^{\prime}=19-28\)) of the excited 2\({}^{1}\Sigma_{u}^{+}\) state.

The vibrational term energies \(G_{v}\) and turning points \(R_{-}\) and \(R_{+}\) are given in Tab. 4, and the potential curve is displayed in Fig. 2 along with the theoretical predictions. Our work extends the range of the experimentally determined potential to 3.5 Å \(<R<\) 6.1 Å and more than doubles the range of covered energies. It must be noted that in the range \(v^{\prime}=0\) to \(\approx 18\) the \(2^{1}\Sigma_{u}^{+}\) state is free of (strong) perturbations, which become visible only from \(v^{\prime}\approx 19\). The most likely perturber is the \(1^{1}\Pi_{u}\) state, as the outer limb of its potential curve gradually approaches the potential of the \(2^{1}\Sigma_{u}^{+}\) state. However, at \(v^{\prime}\approx 25\) an additional perturbing state apparently emerges, since the perturbations become more frequent (\(v^{\prime}=25\) is perturbed around at least four \(J^{\prime}\) values; see Fig. 5). From the analysis of the theoretical potential energy curves it follows that the new perturber must be the \(2^{1}\Pi_{u}\) state, the only other singlet ungerade state expected to be present in the vicinity. Therefore, our observation indicates that the bottom of its potential cannot be located higher than approximately 19100 cm\({}^{-1}\) above the minimum of the ground state potential, which can serve as another test of the validity of the theoretical predictions.

\begin{table}
\begin{tabular}{c c}
\hline \hline
constant & value [cm\({}^{-1}\)] \\
\hline
\(T_{e}\) & 17358.70(10) \\
\(Y_{10}\) & 84.2169(27) \\
\(Y_{20}\) & -0.26729(21) \\
\(Y_{30}\) & -0.10850(6) \(\times 10^{-2}\) \\
\(Y_{40}\) & -0.10760(6) \(\times 10^{-4}\) \\
\(Y_{01}\) & 0.021990(14) \\
\(Y_{11}\) & -0.6808(18) \(\times 10^{-4}\) \\
\(Y_{21}\) & -0.4860(10) \(\times 10^{-6}\) \\
\(Y_{31}\) & -0.1030(13) \(\times 10^{-7}\) \\
\(Y_{02}\) & -0.712(6) \(\times 10^{-8}\) \\
\hline
\(R_{e}\) [Å] & 4.176(1) \\
\hline \hline
\end{tabular}
\end{table}
Table 3: The Dunham coefficients that describe the \(2^{1}\Sigma_{u}^{+}\) state of \({}^{88}\)Sr\({}_{2}\) in the range \(0\leq v^{\prime}\leq 52\), \(J^{\prime}\leq 149\). The numbers in parentheses give uncertainties in the last quoted digits (one standard deviation).

Figure 4: Reduced term values \(E_{\rm red}=E-0.19\times J(J+1)\) cm\({}^{-1}\) of part of the observed rovibrational levels in the \(2^{1}\Sigma_{u}^{+}\) state (dots) plotted against \(J(J+1)\). Open squares indicate approximate positions of the centres of perturbations.

Figure 5: Observed shifts of the rotational energy levels in the \(2^{1}\Sigma_{u}^{+}\) state from their predicted positions for vibrational levels \(v^{\prime}=24\) and 25.

## 4 Summary and conclusion

In this study, we investigated the excited electronic states of the strontium dimer. We theoretically obtained the complete molecular electronic spectrum up to the excitation energy of around 25000 cm\({}^{-1}\) using the multireference configuration interaction method. Next, we studied in detail the potential energy curves for the \(1^{1}\Sigma_{u}^{+}\), \(2^{1}\Sigma_{u}^{+}\), \(1^{1}\Pi_{u}\), \(2^{1}\Pi_{u}\), and \(1^{1}\Delta_{u}\) states using several advanced electronic structure methods. We evaluated the importance of the orbital basis set size, the core- and valence-electron correlation, and the scalar relativistic effects. The theoretical calculations were motivated by our ongoing spectroscopic studies. We presented new experimental measurements of the excited \(2^{1}\Sigma_{u}^{+}\) state using polarisation labelling spectroscopy, extending the range of observed vibrational levels to higher energies. We reported the corresponding Dunham coefficients and the experimental potential energy curve. We observed perturbations in the recorded spectrum that give preliminary information on higher-excited electronic states. We compared the available experimental observations with the theoretical predictions to assess the accuracy and limitations of the employed theoretical models. Our findings provide valuable insights into the complex electronic structure of Sr\({}_{2}\), paving the way for future, more accurate theoretical and experimental spectroscopic studies. The challenging nature of the excited electronic states of the Sr\({}_{2}\) dimer makes them a perfect testbed and playground for near-future developments of electronic structure theory and computation. In a follow-up work, we plan to present calculations at the valence full configuration interaction level with large basis sets. Such converged calculations, in versions with both small-core and large-core pseudopotentials and with approximately included core and core-valence correlation, may resolve the nature of the most problematic states, such as \(2^{1}\Pi_{u}\). A more rigorous deperturbation procedure is also needed to clarify the experimental observations. It would require a coupled-channels treatment involving (at least) the \(2^{1}\Sigma_{u}^{+}\), \(1^{1}\Pi_{u}\), and \(2^{1}\Pi_{u}\) interacting states; such an analysis is among the future plans of our group. However, more precise theoretical predictions of the relevant potential energy curves are needed to serve as a starting point for the deperturbation procedure. At the present stage we show approximate results, whose accuracy is more than sufficient for comparison with the theoretical models, and all the experimental term energies are listed in the supplementary data accompanying this paper [56].

## Acknowledgements

Financial support from the National Science Centre Poland (grant no. 2021/43/B/ST4/03326) is gratefully acknowledged. The computational part of this research has been partially supported by the PL-Grid Infrastructure (grant no. PLG/2021/015237).
2310.11631
Future changes in the vertical structure of severe convective storm environments over the U.S. central Great Plains
The effect of warming on severe convective storm potential is commonly explained in terms of changes in vertically-integrated ("bulk") environmental parameters, such as CAPE and 0--6 km shear. However, such events are known to depend on details of the vertical structure of the thermodynamic and kinematic environment that can change independently of these bulk parameters. This work examines how warming may affect the complete vertical structure of these environments for fixed ranges of values of high CAPE and bulk shear, using data over the central Great Plains from two high-performing climate models. Temperature profiles warm relatively uniformly with height, with a slight decrease in free tropospheric lapse rate, and the tropopause shifts upwards at constant temperature. The boundary layer becomes slightly drier (-2--4\% relative humidity) while the free troposphere becomes slightly moister (+2--3\%). Moist static energy (MSE) increases relatively uniformly with height with slightly larger increase within the boundary layer. Moist static energy deficit increases slightly above 4 km altitude. Wind shear and storm-relative helicity increase within the lowest 1.5 km associated with stronger hodograph curvature. Changes are broadly consistent between the two models despite differing biases relative to ERA5. The increased low-level shear and SRH suggests an increased potential for severe thunderstorms and tornadoes, while the slight increase in free tropospheric MSE deficit (enhanced entrainment) and decrease in boundary layer relative humidity (higher LCL) may oppose these effects. Evaluation of the net response of severe convective storm outcomes cannot be ascertained here but could be explored in simulation experiments.
Isaac Davis, Funing Li, Daniel Chavas
2023-10-17T23:41:41Z
http://arxiv.org/abs/2310.11631v1
Future changes in the vertical structure of severe convective storm environments over the U.S. central Great Plains

###### Abstract

The effect of warming on severe convective storm potential is commonly explained in terms of changes in vertically-integrated ("bulk") environmental parameters, such as CAPE and 0-6 km shear. However, such events are known to depend on details of the vertical structure of the thermodynamic and kinematic environment that can change independently of these bulk parameters. This work examines how warming may affect the complete vertical structure of these environments for fixed ranges of values of high CAPE and bulk shear, using data over the central Great Plains from two high-performing climate models. Temperature profiles warm relatively uniformly with height, with a slight decrease in free tropospheric lapse rate, and the tropopause shifts upwards at constant temperature. The boundary layer becomes slightly drier (-2-4% relative humidity) while the free troposphere becomes slightly moister (+2-3%). Moist static energy (MSE) increases relatively uniformly with height with slightly larger increase within the boundary layer. Moist static energy deficit increases slightly above 4 km altitude. Wind shear and storm-relative helicity increase within the lowest 1.5 km associated with stronger hodograph curvature. Changes are broadly consistent between the two models despite differing biases relative to ERA5. The increased low-level shear and SRH suggests an increased potential for severe thunderstorms and tornadoes, while the slight increase in free tropospheric MSE deficit (enhanced entrainment) and decrease in boundary layer relative humidity (higher LCL) may oppose these effects. Evaluation of the net response of severe convective storm outcomes cannot be ascertained here but could be explored in simulation experiments.

Daniel R Chavas, Department of Earth, Atmospheric, and Planetary Sciences, Purdue University, West Lafayette, IN. Email: [email protected].

SIGNIFICANCE STATEMENT: Severe thunderstorms and tornadoes cause substantial damage and loss of life each year, which raises concerns about how they may change as the world warms. We typically use a small number of common atmospheric parameters to understand how these localized events may change with climate change. However, climate change may alter the weather patterns that produce these events in ways not captured by these parameters. This work examines how climate change may alter the complete vertical structure of temperature, moisture, and wind and discusses the potential implications of these changes for future severe thunderstorms and tornadoes.

## 1 Introduction

Severe convective storms (SCS) produce large hail, strong convective wind gusts, and tornadoes, all of which pose a significant threat to life and property annually (Ashley, 2007; Strader and Ashley, 2015). It is thus of great interest to understand how SCS activity and its associated risks may change in the future as the climate warms. Future changes in risk are difficult to assess because severe thunderstorms are too small in scale to be resolved in modern climate models.
Nonetheless, recent research using downscaling simulations that can resolve thunderstorm systems has found that proxies for SCS activity show broad increases in frequency and severity in the future over the United States as the climate warms (Ashley et al., 2023; Trapp et al., 2019; Hoogewind et al., 2017; Gensini and Mote, 2015), with notable shifts in their spatial distribution and seasonal cycle. Understanding these future changes in SCS activity is rooted in our understanding of how SCS outcomes are linked to the larger-scale thermodynamic environment within which these storms are generated (Ashley et al., 2023). Environments favorable for SCS activity ("SCS environments") are typically defined by the combination of a thermodynamic ingredient, given by convective available potential energy (CAPE), and a kinematic environmental ingredient, given by 0-6 km bulk wind difference (S06; "bulk shear"). Tornado-favorable environments are defined using a third environmental parameter, storm-relative helicity, which is often calculated within the lowest 1 km (Coffer et al., 2019, 2020). These parameters have been widely used to explain the historical spatiotemporal pattern of severe thunderstorms and tornadoes, particularly over the United States (Gensini and Brooks, 2018; Taszarek et al., 2021; Coffer et al., 2020; Li et al., 2021; Hoogewind et al., 2017; Chen et al., 2020) but also globally (Brooks et al., 2003; Allen et al., 2011; Taszarek et al., 2020, 2021). Note that these parameters represent necessary but not sufficient conditions for severe convective storms and tornadoes, as a triggering mechanism for convective initiation is also required, though this step is relatively poorly understood and hence not easily incorporated into environmental analyses (e.g., Ashley et al., 2023). In future climate projections, the increase in SCS activity over North America is explained principally by large projected increases in CAPE (Ashley et al., 2023; Tippett et al., 2015; Gensini, 2021; Lepore et al., 2021; Seeley and Romps, 2015; Diffenbaugh et al., 2013; Hoogewind et al., 2017), consistent with a theoretical model that predicts a rapid increase in continental diurnal-maximum CAPE with warming (Agard and Emanuel, 2017). Finally, tropospheric relative humidity is expected to remain relatively constant with warming based on observations and climate models (Douville et al., 2022), though this outcome has yet to be evaluated specifically in the context of SCS environments. However, each of these environmental parameters is "bulk", i.e., a vertically-integrated measure of buoyancy or shear. Substantial research has shown there are pathways to change SCS outcomes that are independent of these bulk measures. This includes dependencies on the vertical distribution of buoyancy and shear (McCaul Jr and Weisman, 2001), the low-level shear profile (Guarriello et al., 2018; Peters et al., 2023), the low-level thermal and moisture structure (McCaul and Cohen, 2002; Brown and Nowotarski, 2019), and hodograph curvature (Nixon and Allen, 2022). Additionally, SCS outcomes are sensitive to free-tropospheric relative humidity (Chavas and Dawson II, 2021; Lasher-Trapp et al., 2021; Jo and Lasher-Trapp, 2022), due in part to its strong effects on entrainment dilution, which reduces parcel buoyancy and CAPE (Peters et al., 2020); however, CAPE itself is nearly insensitive to this quantity. Thermodynamic and kinematic profiles are complex, which makes it difficult to define and interpret variations in their vertical structure.
Recently, Chavas and Dawson II (2021) developed a simple model for the SCS environmental thermodynamic and kinematic sounding structure, comprising a boundary layer model and a free-tropospheric model. The model is consistent with how SCS environments are generated downstream of the Rocky Mountains (Carlson and Ludlam, 1968; Doswell, 2001; Agard and Emanuel, 2017). This sounding framework offers a foundation for defining the vertical structure of an SCS environment based on a relatively small number of parameters. Hence, the model may also provide a useful framework for defining key changes in vertical structure in the future. How the complete vertical structure of SCS environments, rather than simply bulk parameters, may change in a future climate has received relatively little attention to date. To fill this knowledge gap, we seek to answer the following research questions:

1. How do the vertical thermodynamic and kinematic structures of severe convective storm environments change in high-performing CMIP6 climate model projections over the central Great Plains?

2. Are the changes consistent between two high-performing climate models?

To answer these questions, we investigate how the vertical thermodynamic profiles (temperature, relative humidity and moist static energy deficit) and kinematic profiles (wind shear and storm relative helicity) in severe storm environments may change with future warming. We focus here on the central Great Plains, which is within the primary severe thunderstorm and tornado hotspot over North America, to minimize geographic and seasonal variability. There is evidence of an eastward shift towards the southeast U.S. in recent decades (Gensini and Brooks, 2018), though the nature and dynamics of SCS environments are known to differ in this region. As a result, mixing the two regions is not advisable, but future work may seek to expand this analysis to the southeast U.S. For our analysis, we take advantage of recent work that identified a small number of climate models that best reproduce the historical SCS environment climatology over North America (Chavas and Li, 2022). We further use a simple model for the SCS environmental sounding from Chavas and Dawson II (2021) as a guiding framework to define key parameters that capture the basic vertical thermodynamic and kinematic structure, though our results are not specific to that model, in order to keep our findings general. We detail our methodology in Section 2. We present our results for future changes in the structure of SCS environments and discuss key outcomes in Section 3. Finally, we summarize conclusions and avenues for future work in Section 4.

## 2 Methodology

This work uses reanalysis and climate model data to examine future thermodynamic and kinematic changes in vertical structure independent of changes in bulk SCS environmental parameters (CAPE and S06). To do so, we focus our analysis on a limited geographic region, to minimize regional variability in the climatology (Taszarek et al., 2020), and within a fixed range of values of CAPE and S06, to minimize the effects of future changes in CAPE and S06 themselves. We use soundings from March through June in a region over eastern Kansas and Nebraska within the central Great Plains bounded by 100-95 \({}^{o}\)W and 38-43 \({}^{o}\)N (Figure 1a), which is the same subdomain used in Li et al. (2020), to focus on the primary severe convective storm season.
We retain soundings with CAPE between 3000 and 6000 J kg\({}^{-1}\) and S06 between 15 and 35 m s\({}^{-1}\) to capture the principal regime of CAPE-S06 values associated with significant SCS activity in the historical record (Brooks et al., 2003; Li et al., 2020). We further impose an upper bound on the magnitude of convective inhibition of CIN\(<\)125 J kg\({}^{-1}\) following Lepore et al. (2021) to avoid environments unlikely to allow convective initiation, though results are similar without this criterion; its inclusion eliminates relatively few soundings, as shown below. For both the thermodynamic and kinematic vertical structure, we first compare ERA5 data to climate model historical experiments to examine similarities and biases of the models. We then analyze changes between the future and historical period for each climate model. For climate model analysis, we select two specific CMIP6 climate models (Eyring et al., 2016): MPI-ESM1-2-HR (hereafter "MPI") and CNRM-ESM2-1 (hereafter "CNRM"). These two models were identified by Chavas and Li (2022) as the two highest performing models in the CMIP6 archive for reproducing the historical SCS environment climatology over North America. Chavas and Li (2022) demonstrated that climate models exhibit a very wide range of variability in the climatological representation of severe convective storm environments (spatiotemporal pattern and amplitude) relative to historical data, and thus it is important to first select models that can credibly reproduce the historical record. Though both models perform comparably well in reproducing the overall SCS environment climatology over North America, they still may differ in their representation of such environments specifically over our central Great Plains region of interest, as will be noted below. This outcome can be useful to check whether the models yield consistent responses in vertical structure despite differing mean-state biases, which suggests greater robustness; this is a common approach when working with climate models for which the mean state (e.g. global-mean temperature) can vary across models but the structure of their responses to forcing (atmospheric "fingerprint", e.g. spatial structure of warming due to increased greenhouse gas concentrations) may be quite similar (Santer et al., 2013; Zhang et al., 2023). We use the radiative forcing ssp370 experiment for each model as our future simulation, which is considered a more plausible high-end forcing scenario than ssp585 (Pielke Jr et al., 2022). Results for ssp585 are qualitatively similar (not shown). The historical period used is 1980-2014 and the future period is 2065-2099. We compare climate model output against the ERA5 reanalysis model-level data for the identical period to match the climate model historical period (Hersbach et al., 2020). Model-level data ensures use of the highest vertical resolution data available, though results are similar when using pressure-level data (not shown). ERA5 is sampled at 00, 06, 12, 18 UTC to match the same 6-hourly output available from both climate models. ERA5 is the highest resolution of existing global long-term reanalysis datasets and performs well in reproducing the climatological spatiotemporal distribution of SCS environments found in radiosonde observations (Li et al., 2020).
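For concreteness, the sounding-selection step above can be written as a simple mask over per-sounding bulk parameters. The following is a minimal Python sketch assuming plain NumPy arrays (one element per grid point and time); the function and variable names are illustrative, not from the study's actual codebase:

```python
import numpy as np

def select_scs_soundings(cape, s06, cin):
    """Boolean mask implementing the fixed CAPE/S06 regime with a CIN cap.

    cape : CAPE [J kg^-1]; s06 : 0-6 km bulk shear [m s^-1];
    cin  : magnitude of convective inhibition [J kg^-1].
    """
    return ((cape >= 3000.0) & (cape <= 6000.0)
            & (s06 >= 15.0) & (s06 <= 35.0)
            & (cin < 125.0))

# Example with synthetic values for three candidate soundings:
cape = np.array([3500.0, 2500.0, 4200.0])
s06 = np.array([20.0, 30.0, 40.0])
cin = np.array([50.0, 10.0, 200.0])
print(select_scs_soundings(cape, s06, cin))  # [ True False False]
```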
The vertical resolution of ERA5 (137 levels) and the two climate models (95 levels for MPI, 91 levels for CNRM) differ, so for direct comparison between ERA5 and the climate model data we linearly interpolate the model data in the vertical to match ERA5. The horizontal resolution is \(\Delta x\) = \(\sim\)31 km for ERA5, \(\Delta x\) = \(\sim\)100 km for MPI, and \(\Delta x\) = \(\sim\)140 km for CNRM. We downsample ERA5 to every fourth gridpoint to more closely match the spacing of the models; the grids for each within our domain of interest are displayed in Figure 1a. ERA5 and MPI both contain 25 grid points, with the resolution of MPI being slightly finer, while CNRM contains 16 grid points. In contrast to the vertical grid, we do not interpolate data horizontally to the same grid, to avoid mixing soundings at adjacent grid points that represent very different environments, such as in the vicinity of a cold front, which is not uncommon for SCS environments. For a given sounding, we define the tropopause as the lowest altitude where the lapse rate drops below 2 K km\({}^{-1}\) (WMO/OMM/BMO, 1992; Chavas and Li, 2022), and we define the top of the boundary layer as the height of maximum relative humidity following Chavas and Dawson II (2021). We calculate CAPE, given by \[CAPE=\int_{z_{LFC}}^{z_{EL}}g\frac{T_{vp}-T_{ve}}{T_{ve}}dz, \tag{1}\] where \(g\) = 9.81 m s\({}^{-2}\) is the acceleration due to gravity; \(z\) is altitude, with subscripts 'LFC' and 'EL' denoting the level of free convection and the equilibrium level, respectively; and \(T_{v}\) is the virtual temperature, with subscripts 'p' and 'e' denoting parcel and environment, respectively. CAPE is calculated using the xcape codebase (Lepore et al., 2022) for the near-surface (z = 2 m) parcel as in Chavas and Li (2022). Bulk wind shear within a layer between a bottom altitude \(z_{b}\) and a top altitude \(z_{t}\) is given by the magnitude of the vector wind difference across the layer, \[S_{z_{b},z_{t}}=|\mathbf{V}_{z_{t}}-\mathbf{V}_{z_{b}}|. \tag{2}\] For the standard 0-6 km shear layer (S06), \(z_{b}\) = 10 m and \(z_{t}\) = 6000 m. Finally, storm-relative helicity (SRH) is given by \[SRH_{0-z_{t}}=\int_{10m}^{z_{t}}(\mathbf{V}-\mathbf{C})\cdot(\nabla\times \mathbf{V})\,dz, \tag{3}\] where \(\mathbf{V}\) is the wind vector at a given level, \(\mathbf{C}\) is the storm-motion vector, and \(z_{t}\) is defined in the same manner as for shear. In our results below, we analyze wind shear variations between the surface and 6 km and SRH integrated over layers between the surface and 3 km. All parameters involving wind (hodographs, wind shear, SRH) are calculated using the hodograph, storm relative helicity, and Bunkers storm motion functions from MetPy v1.1 (May et al., 2008, 2020). The climatological joint histogram of CAPE and S06 for ERA5 is shown in Figure 1b and for the historical periods in MPI and CNRM, respectively, in Figure 1c-d. Our fixed ranges of high CAPE and S06 values are highlighted by the red box, which is the same in each subplot. Because CNRM contains fewer grid points than ERA5 and MPI, we rescale the histogram data for CNRM by the factor 25/16 to account for its smaller number of gridpoints for an apples-to-apples comparison across all datasets.

Figure 1: (a) Map of gridpoint distributions within our region of interest from the ERA5 historical dataset and from the MPI and CNRM climate model datasets. (b) Joint histogram of CAPE and bulk shear (S06) from the ERA5 dataset for March-June for the period 1980–2014, with box denoting the fixed ranges of CAPE and S06 examined in this study. (c)-(d) As in (b) but for MPI and CNRM, respectively. (e)-(f) Same as (c)-(d) but for the difference between the future ssp370 (2065–2099) and historical periods for each model. CNRM values have been multiplied by the factor 25/16 to account for its smaller number of gridpoints for an apples-to-apples comparison with ERA5 and MPI.
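For reference, the profile diagnostics defined in Eqs. (1)-(3) and the layer definitions above can be sketched directly in NumPy as below. The paper itself computes CAPE with the xcape codebase and the wind diagnostics with MetPy v1.1; this standalone version is illustrative only, assumes height in meters and winds in m s\({}^{-1}\) on arrays ordered from the surface upward, and takes the parcel profile, LFC/EL, and storm motion (cu, cv) as given:

```python
import numpy as np

def tropopause_height(z, T):
    """Lowest altitude where the lapse rate -dT/dz drops below 2 K km^-1."""
    lapse = -np.gradient(T, z)                 # K per m
    return z[np.argmax(lapse < 2.0e-3)]

def boundary_layer_top(z, rh):
    """Height of maximum relative humidity (Chavas and Dawson II 2021)."""
    return z[np.argmax(rh)]

def cape(z, Tv_parcel, Tv_env, z_lfc, z_el, g=9.81):
    """Eq. (1): integrated parcel buoyancy between the LFC and the EL."""
    mask = (z >= z_lfc) & (z <= z_el)
    buoy = g * (Tv_parcel[mask] - Tv_env[mask]) / Tv_env[mask]
    return np.trapz(buoy, z[mask])

def bulk_shear(z, u, v, z_bot=10.0, z_top=6000.0):
    """Eq. (2): magnitude of the vector wind difference across a layer."""
    ub, vb = np.interp(z_bot, z, u), np.interp(z_bot, z, v)
    ut, vt = np.interp(z_top, z, u), np.interp(z_top, z, v)
    return np.hypot(ut - ub, vt - vb)

def srh(z, u, v, cu, cv, z_top=3000.0):
    """Eq. (3) for horizontally homogeneous flow, where the integrand
    (V - C) . (curl V) reduces to (v - cv) du/dz - (u - cu) dv/dz."""
    dudz, dvdz = np.gradient(u, z), np.gradient(v, z)
    integrand = (v - cv) * dudz - (u - cu) * dvdz
    mask = (z >= 10.0) & (z <= z_top)
    return np.trapz(integrand[mask], z[mask])
```

In MetPy, the same wind quantities are available as ready-made functions (including the Bunkers storm-motion estimate used for \(\mathbf{C}\)), which is what the analysis described above relies on.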
Both models do very well in reproducing the structure of the climatological joint distribution of CAPE and S06. Relative frequencies peak at moderate S06 (10-15 \(m\,s^{-1}\)) and small values of CAPE, and the largest CAPE values are associated with these moderate values of S06, consistent with past work (Li et al., 2020; Taszarek et al., 2020). Within our phase space subset of interest (the box), CNRM captures both the structure and magnitude very well, while MPI captures the structure but its magnitude has a clear high bias. Future changes in this joint distribution relative to historical within each model are shown in Figure 1e-f. In the future, in both models the phase space distribution shifts principally rightwards, associated with an increase in the frequency of relatively high CAPE (\(>\)3000 J kg\({}^{-1}\)) but minimal change in S06 (Figure 1e-f), again consistent with past research (Lepore et al., 2021). Within our phase space subset of interest, this shift represents an increase in the absolute frequency of such environments (predominantly red colors). Overall, then, both models perform reasonably well in reproducing the climatology of SCS environments, though CNRM better captures the magnitude, consistent with the findings of Chavas and Li (2022). The monthly and diurnal frequency distributions of our final sounding datasets are shown in Figure 2. In ERA5, the seasonal cycle of SCS environments (Figure 2a) closely follows the seasonal cycle of observed severe thunderstorm hazards and tornadoes for this region (Figures 5-6 of Taszarek et al., 2020), with a rapid increase from March through June, a peak in June/July, then a rapid decrease from July through October. CNRM does very well in reproducing the ERA5 seasonal cycle structure and amplitude, particularly for April through July, with a moderate high bias in August and September. Meanwhile, MPI also captures the structure but exhibits a strong high bias in magnitude that is consistent throughout the season. In the future, both models exhibit a shift in the seasonal cycle towards earlier months (April/May) as has been found in recent work (Ashley et al., 2023). The diurnal cycle (Figure 2b) is skewed strongly towards afternoon/evening as expected, with the vast majority of soundings in ERA5 at 00 UTC (\(\sim\) 55%) and 18 UTC (\(\sim\) 45%) and very few events at 06 UTC and 12 UTC. While 18 UTC may seem early relative to the typical timing of convective storm occurrence in the late afternoon (Figure 8 of Taszarek et al., 2020), this difference likely reflects a lag between the gradual daytime generation of these soundings and the initiation of convection itself. CNRM captures the ERA5 diurnal cycle structure and amplitude well, though its distribution is shifted towards 18 UTC, i.e., a bias towards too early in the day. MPI again captures the gross diurnal cycle structure with an overall strong high bias in magnitude, but notably it exhibits a similar timing bias towards 18 UTC as CNRM.
This early timing bias may be associated with the known early timing bias in inland precipitation that has been persistent in climate models (Christopoulos and Schneider, 2021). In the future, the distribution of soundings across the diurnal cycle remains relatively constant.

Finally, we examine the frequency distributions of convective inhibition (CIN; Figure 2c) and of lifted condensation level (LCL; Figure 2d). CIN frequency peaks at relatively small values in ERA5 as well as in both climate models. There is a long tail of relatively low frequencies for higher CIN values up to our upper-bound threshold, and as a result this criterion is already met for nearly all soundings that met our CAPE and S06 thresholds. MPI yields similar relative frequencies to ERA5, while CNRM is skewed more strongly towards smaller values. Both models reproduce the CIN distributions within SCS environments found in ERA5 relatively well. In the future, the relative frequency shifts towards slightly higher CIN, as the frequency of the lowest CIN bin increases by a significantly smaller percentage (+10%) than the higher bins (e.g. +25% for 60-80 J kg\({}^{-1}\)). This increase in CIN with warming has been found in numerous past downscaling studies (e.g. Ashley et al., 2023). As for the LCL, in ERA5 the frequency distribution peaks at 1000-1500 m, though with relatively high frequency within 500-1000 m as well. Both models are biased towards slightly lower LCLs, with slightly higher frequencies at 500-1000 m in CNRM and a more pronounced low-LCL bias in MPI. In the future, both climate models show a shift in their relative frequencies towards higher LCLs, suggesting a shift towards a drier boundary layer, as will be found below.

Overall, the above examination of the spatial, seasonal, and diurnal variability lends credence to the conclusion that our sounding database is broadly representative of environments favorable for severe convective storms, and that our two climate models perform reasonably well in reproducing these environments. For our region, CNRM appears to be the slightly better model given its much smaller amplitude bias. As noted above, the contrasting mean-state amplitude biases provide useful context for our analysis, as responses of the vertical structure to forcing that are consistent between the two models suggest they are less likely to be sensitive to biases in the climate model mean state.

## 3 Results

### Changes in thermodynamic vertical structure

#### Temperature

We begin by examining changes in thermodynamic vertical structure simulated by both models. For all of the analyses below, values of some key quantities, including those relevant to the framework of Chavas and Dawson II (2021), are provided in Table 1 for additional reference. Mean temperature profiles from both models for the historical climate are compared with ERA5 in Figure 3a. MPI performs well in reproducing the vertical structure of temperature found in ERA5, with temperatures decreasing rapidly with height above the surface within a well-mixed boundary layer, then decreasing more slowly above the boundary layer before decreasing more rapidly again through the free troposphere. The model has a slight cool bias of 1-2 K that is relatively constant with height through the depth of the troposphere. CNRM also reproduces the vertical structure of the temperature profile, though with a stronger overall cool bias of 6 K that is relatively constant with height within the free troposphere; the bias is smaller (3.5 K) within the boundary layer.
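For reference, the layer-mean lapse rates and tropopause heights quoted in this section can be obtained from a temperature profile along the lines of the minimal sketch below; the function and array names are hypothetical, and the tropopause criterion implements the simple lowest-altitude 2 K km\({}^{-1}\) threshold stated in our methods.

```python
import numpy as np

def layer_lapse_rate(z_m, t_k, z_bot=2000.0, z_top=6000.0):
    """Layer-mean lapse rate -dT/dz (K km^-1) via a least-squares fit."""
    mask = (z_m >= z_bot) & (z_m <= z_top)
    dt_dz = np.polyfit(z_m[mask], t_k[mask], 1)[0]  # K per m
    return -dt_dz * 1000.0

def tropopause_height(z_m, t_k, threshold=2.0):
    """Lowest altitude where the local lapse rate drops below the
    threshold (K km^-1), per the definition used in this study; in
    practice one may wish to begin the search above the boundary layer."""
    lapse = -np.diff(t_k) / np.diff(z_m) * 1000.0  # K km^-1, per layer
    z_mid = 0.5 * (z_m[:-1] + z_m[1:])
    idx = np.flatnonzero(lapse < threshold)
    return z_mid[idx[0]] if idx.size else np.nan
```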
The result that MPI better captures mean state temperature is consistent with Chavas and Li (2022). While these contrasts in gross temperature bias between models may seem surprising, it is common for climate models to have differing mean state biases yet exhibit similar structural responses to forcing, as noted earlier. Despite different magnitudes of mean bias, both models capture the mean lapse rate within the free troposphere found in ERA5. MPI and CNRM both give a lapse rate of 7.4 K km\({}^{-1}\) for 2-6 km and 7.6 K km\({}^{-1}\) for 2-10 km (Table 1), which closely match the values found in ERA5. Moreover, both models reasonably capture the tropopause temperature, with values of 212.2 K and 209.2 K for MPI and CNRM, respectively, compared with the ERA5 value of 212.0 K. Both models also capture the magnitude of variability in temperature, with a relatively small variability of less than 3 K magnitude in the interquartile range throughout the troposphere (Figure 3a). The primary qualitative difference in vertical thermal structure between the models is that MPI better captures the transition layer of reduced lapse rate separating the boundary layer and free troposphere found in ERA5.

Future changes in the simulated temperature profiles for each model are shown in Figure 3b. Both models project future thermal structures that are qualitatively similar to their historic simulations but simply shifted nearly uniformly warmer through the depth of the troposphere by approximately 2 K. CNRM warms slightly more than MPI between 1 km and 5 km, while both models warm nearly identically in the upper free troposphere. Free tropospheric lapse rates decrease very slightly (Table 1), by 0.2 K km\({}^{-1}\) in MPI and by 0.1 K km\({}^{-1}\) in CNRM. In both models, the tropopause temperature remains nearly constant, while the tropopause height shifts upwards. This upward expansion at fixed tropopause temperature is consistent with a similar finding for both the midlatitudes and the tropics in general (Singh and O'Gorman, 2012; Seeley et al., 2019; Thompson et al., 2019). Moreover, the thermal structure sharpens around the tropopause, including a sharper temperature increase above the tropopause, which may have impacts on the depth and strength of overshooting tops (O'Neill et al., 2021). Overall, despite some significant differences in historical vertical thermal structure biases between the two models, their future climate responses are strikingly similar. This outcome gives greater confidence in the ability to use these models to quantify future changes in the thermal structure of SCS environments.

Figure 2: (a) Monthly frequency of final subset for ERA5, MPI historical and ssp370 future, and CNRM historical and ssp370 future. (b) Diurnal frequency of final subset (00/06/12/18 UTC). (c) Frequency distribution of convective inhibition (CIN) within final subset in 20 J kg\({}^{-1}\) bins starting from zero. (d) Frequency distribution of lifted condensation level (LCL) within final subset in 500 m bins starting from zero. CNRM values have been multiplied by the factor 25/16 to account for its smaller number of gridpoints for an apples-to-apples comparison with ERA5 and MPI.

Figure 3: (a) Mean vertical profiles of temperature from ERA5 and the historic runs of MPI and CNRM. The shaded area represents the interquartile range (25th–75th percentile), and the difference plot shows the differences between ERA5 and MPI, and between ERA5 and CNRM. (b) As in (a) but for the historic and future runs of MPI and CNRM. The difference plot shows the differences between the future and historic simulations of each model.

#### Relative humidity

We next examine the vertical structure of relative humidity (RH) in Figure 4. Mean RH profiles from both models for the historical climate are compared with ERA5 in Figure 4a.
MPI performs well in reproducing the gross vertical structure of RH found in ERA5, with RH increasing with height above the surface toward a local maximum near the top of the well-mixed boundary layer, then decreasing sharply up to approximately 2.5 km altitude before becoming relatively constant with height through the rest of the free troposphere. The model is moister than ERA5, with a moist RH bias of 12% within the lowest 1 km, near-zero bias through the 1-2 km transition layer, and a moist RH bias of 5-15% within the free troposphere (7% for 2-6 km mean; 10% for 2-10 km mean). MPI does capture the mean 1-2 km RH lapse rate of 25% found in ERA5. CNRM also reproduces the vertical structure of the RH profile, though with a smaller boundary layer bias and a larger moist bias through both the transition layer and lower free troposphere below 6 km, with a peak moist bias of 27% at 3 km altitude. This latter moist bias indicates a much slower decrease in RH between the moist boundary layer and drier free troposphere (1-2 km RH lapse rate of 8.0% as compared to 25% for ERA5). This behavior is consistent with the weaker thermal transition layer (Figure 3a), which suggests stronger shallow convective mixing through the top of the boundary layer and into the lower free troposphere in CNRM (e.g., Hu et al. 2022).

Note that there is a much larger range of variability in RH across soundings relative to temperature (compare width of interquartile shading with Figure 3a). These ranges of variability are also well-captured by the models. This distinction is a clear indication of how free tropospheric moisture can vary widely on short spatial and temporal scales, as its local structure depends on the transport by antecedent convection and the mesoscale and synoptic scale flow, all of which may vary strongly in space and time. In contrast, free tropospheric temperatures are much more strongly constrained by the larger-scale dynamics of the mid-latitude atmosphere. Despite quite different magnitudes of mean bias, both models do capture the gross vertical structure of RH, with the primary contrast in the transition layer between the boundary layer and the free troposphere. Moreover, both models reproduce the magnitude of variability in RH.

Future changes in the simulated RH profiles for each model are shown in Figure 4b. Both models project future RH structures that remain qualitatively similar to their historic simulations but shifted to slightly higher RH through the middle and upper free troposphere and slightly lower RH in the boundary layer. In the free troposphere, MPI and CNRM RH increases by 3.0% and 1.5% for the 2-6 km mean, respectively, and 2.8% and 1.7% for the 2-10 km mean, respectively (Table 1). A larger increase in RH occurs near the top of the free troposphere associated with the upward shift in the tropopause, which shifts more humid air upwards at heights previously occupied by very dry lower-stratospheric air. At low levels, MPI and CNRM RH decreases by 5% and 1.5% for the 0-1 km mean. In the transition layer, the 1-2 km RH lapse rate decreases slightly in both models (-3.9% in MPI and -2.2% in CNRM). Overall, the future climate responses of both models are again quite similar despite significant differences in their historical biases.

Figure 4: (a) Mean vertical profiles of relative humidity from ERA5 and the historic runs of MPI and CNRM. The shaded area represents the interquartile range (25th–75th percentile), and the difference plot shows the differences between ERA5 and MPI, and between ERA5 and CNRM. (b) As in (a) but for the historic and future ssp370 runs of MPI and CNRM. The difference plot shows the differences between the future and historic simulations of each model.
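The two RH-based diagnostics used above admit a direct implementation; the sketch below assumes hypothetical profile arrays and interprets the "1-2 km RH lapse rate" as the RH decrease across that 1-km-deep layer.

```python
import numpy as np

def boundary_layer_top(z_m, rh_pct):
    """Boundary layer top diagnosed as the height of maximum relative
    humidity, following Chavas and Dawson II (2021) as used above."""
    return z_m[np.argmax(rh_pct)]

def rh_lapse_1_2km(z_m, rh_pct):
    """RH decrease (%) across the 1-2 km transition layer."""
    rh1km = np.interp(1000.0, z_m, rh_pct)
    rh2km = np.interp(2000.0, z_m, rh_pct)
    return rh1km - rh2km
```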
#### Moist static energy deficit

Finally, we examine the vertical structure of the moist static energy (MSE) deficit profile due to its close link to entrainment. The MSE deficit is defined as the difference between the MSE of the parcel at the lowest model level (LML) and the MSE of the environment at each level: \[MSE_{def}(z)=MSE(z_{LML})-MSE(z) \tag{4}\] MSE is the sum of potential, sensible, and latent energies, i.e., \[MSE=gz_{p}+C_{p}T+L_{v}q_{v} \tag{5}\] where \(C_{p}\) is the specific heat capacity of air, \(L_{v}\) is the specific latent heat of vaporization of water, \(z_{p}=z+z_{p,sfc}\) is the geopotential height with \(z_{p,sfc}\) the geopotential height of the surface, and \(q_{v}\) is the water vapor mass fraction (specific humidity); we neglect the latent energy content of water ice as it is generally small. Its dry counterpart, dry static energy (DSE), neglects the latent energy term. DSE and MSE are closely analogous to dry and equivalent potential temperature, respectively (Betts 1974; Chavas and Dawson II 2021; Chavas and Peters 2023).

The MSE deficit represents the energy difference between a parcel rising through a deep convective cloud and the surrounding near-cloud environment. To keep the calculation simple, the parcel MSE is assumed to be adiabatically conserved, thereby neglecting the MSE sink due to buoyancy (Peters et al. 2022). Entrainment is a process that mixes a parcel with the environmental air, and the energy a parcel loses per unit height is commonly parameterized in classical plume updraft models as proportional to the MSE deficit (e.g., Betts 1975; Zhang 2009). This process dilutes the buoyancy of rising air parcels and hence reduces the true CAPE realized by the parcel (Peters et al. 2023a). Thus, an increase in MSE deficit would imply an increase in entrainment dilution in deep convective clouds, all else equal.

Since the MSE deficit profile is calculated from the MSE profile itself, we begin with vertical profiles of DSE and MSE for both models and ERA5 in Figure 5a. Model biases in DSE profiles are unsurprisingly very similar to biases in temperature discussed previously (Figure 3). Biases in MSE combine the energetic effects of biases from temperature and moisture. This yields an MSE bias structure that is very similar to the DSE in the free troposphere, where latent energy content is much smaller than sensible energy, but more similar to the relative humidity bias within the boundary layer, where latent energy content is much larger. MPI reproduces the full vertical profiles of MSE and DSE remarkably well, while CNRM also reproduces their structures but with a systematic low bias that is relatively constant with height.
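Eqs. (4)-(5) translate directly into code. The following minimal sketch assumes hypothetical profile arrays, uses altitude as a proxy for geopotential height, and neglects the ice term, consistent with the definition above.

```python
import numpy as np

G = 9.81      # m s-2, gravitational acceleration
CP = 1005.0   # J kg-1 K-1, specific heat of air at constant pressure
LV = 2.5e6    # J kg-1, specific latent heat of vaporization

def mse(z_m, t_k, qv_kgkg):
    """Moist static energy (Eq. 5); z_m approximates geopotential height."""
    return G * z_m + CP * t_k + LV * qv_kgkg

def mse_deficit(z_m, t_k, qv_kgkg):
    """MSE deficit (Eq. 4): MSE of the lowest-model-level parcel, assumed
    adiabatically conserved aloft, minus environmental MSE at each level."""
    profile = mse(z_m, t_k, qv_kgkg)
    return profile[0] - profile  # index 0 = lowest model level
```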
Mean MSE deficit profiles from both models for the historical climate are compared with ERA5 in Figure 5b. Both models perform well in reproducing the MSE deficit at all levels as compared to ERA5, with errors of less than 10% of the ERA5 MSE deficit value throughout the troposphere. The largest errors are located near the boundary layer top and are likely associated with discrepancies in boundary layer height. This is particularly true for MPI due to its bias in boundary layer height (Table 1), whereas this bias is small in CNRM. Notably, the variability in the MSE deficit profiles is much smaller than for relative humidity (Figure 4), suggesting there may be strong co-variability between boundary layer and free tropospheric MSE; this question is intriguing but lies beyond the scope of this work.

Future changes in the DSE and MSE profiles are shown in Figure 5c. DSE increases relatively uniformly with height in line with changes in temperature found above. MSE also increases relatively uniformly with height, though with slightly larger increases in the lowest 2 km. This outcome is not obvious from our previous analyses, but given that MSE is approximately conserved in deep convection, this suggests that to first order convection (as parameterized in the models) acts to mix MSE vertically and hence homogenize its changes in a convectively active region.

Future changes in the MSE deficit profile are shown in Figure 5d. Both models project relatively small changes in MSE deficit with very similar structures at all levels between models. The lone exception is the middle and upper troposphere, where CNRM projects increases of up to 10% above 6 km while MPI changes are closer to zero. Below 4 km, both models project nearly identical changes, with a local maximum decrease in MSE deficit at approximately 1.5 km.
The largest magnitude changes occur near the tropopause, associated with the upward shift in the depth of the free troposphere. The modest increase in MSE deficit in the free troposphere is consistent with the finding that the MSE profile increases relatively uniformly with height as noted above, with only a slightly larger increase in MSE within the boundary layer. Overall, across all thermodynamic variables the projected changes are quite robust between our two model simulations despite differences in their representation of the historical climate. This outcome provides greater confidence in the robustness of the projected changes.

Figure 5: (a) Mean vertical profiles of dry static energy (DSE) and moist static energy (MSE) from ERA5 and the historic runs of MPI and CNRM. The shaded area represents the interquartile range (25th–75th percentile), and the difference plot shows the differences between ERA5 and MPI, and between ERA5 and CNRM. (b) As in (a) but for moist static energy deficit, calculated for a parcel lifted from the lowest model level assuming adiabatic conservation of MSE. (c)-(d) As in (a)-(b) but for the future ssp370 and historic simulations of both models. The numbers in the key indicate sample size.

### Changes in kinematic structure

We now examine the vertical structure of the lower tropospheric winds associated with SCS environments in Figure 6. For all of the analyses below, values of some key quantities, including those relevant to the framework of Chavas and Dawson II (2021), are provided in Table 2 for additional reference. Mean hodographs from both models for the historical climate are compared with ERA5 in Figure 6a. MPI and CNRM are both quite similar to one another, and both reproduce the qualitative L-shaped vertical structure of the hodograph common in SCS environments (Guarriello et al., 2018; Coffer et al., 2020; Chavas and Dawson II, 2021) and also found in ERA5. This structure is characterized by a boundary layer of predominantly southerly shear in the lowest 0.5 km, with the southerly component of the flow increasing in magnitude with height. Overlying this boundary layer flow is a layer of predominantly westerly shear as the flow transitions from principally southerly to southwesterly moving upwards to 6 km altitude. In ERA5, the shear in the lower free troposphere has a northerly component that results in a sharper wind direction shift moving across the boundary layer top before becoming unidirectional, similar to the models. Both simulations have a weaker southerly wind speed in the lowest 2 km, resulting in a hodograph that is shifted slightly to the south relative to ERA5. However, the Bunkers storm motion vectors of both models are similar to one another and are also shifted southeastward of the ERA5 vector by approximately 2 m s\({}^{-1}\). Hence, in a storm-relative sense the biases in hodograph and storm motion partially offset one another in the lowest 3 km. The surface flow vector is also nearly identical in both models. The lone notable difference between the two model hodographs is the shear between 0.5 km and 2 km, where the wind vector changes more rapidly within the 0.5-1 km layer in MPI and within the 1-2 km layer in CNRM. This difference alters the shear distribution within the 0.5-2 km layer, which we discuss next.

Figure 6b shows bulk shear from the surface up to layer-top altitudes from 500 m to 6 km altitude, calculated from the hodographs in Figure 6a.
Figure 6c displays the vertical profile of bulk shear (units \(m\,s^{-1}\)) calculated layerwise within 500-m depth layers. The former directly visualizes the integrated bulk shear over any desired layer depth, while the latter visualizes the vertical distribution of shear biases (note though that integrated bulk shear biases need not equal the sum of the layerwise biases since shear is a vector difference). In each case, we show differences relative to ERA5 as percentages to compare the magnitude of the change across different layer depths.

In ERA5, integrated bulk shear is large near the surface (8 \(m\,s^{-1}\) at 500 m) and then increases at a relatively constant rate with height, from 10 \(m\,s^{-1}\) at 1 km up to nearly 20 \(m\,s^{-1}\) at 6 km. This behavior is also evident in the layerwise bulk shear profile, which decreases rapidly from 8 \(m\,s^{-1}\) in the lowest 500 m to 4 \(m\,s^{-1}\) between 1 km and 2 km and then further decreases gradually towards 2 \(m\,s^{-1}\) above 4 km. Both models are similar to one another in reproducing this overall structure, consistent with their very similar hodographs, but both substantially underestimate low-level shear (50% underestimation for 0-500 m shear). CNRM underestimations are slightly larger in the lowest 1.5 km relative to MPI owing to the different distribution of shear within 0.5-1.5 km described above. The models also consistently underestimate layerwise shear by 20% above 2 km, though these biases translate to relatively small biases (\(<\) 5%) when integrated over deeper layers of 5-6 km (Figure 6b). Hence, both models do very well in reproducing S06 yet exhibit large low biases in bulk shear for layers closer to the surface.

Similarly, Figure 6d shows SRH up to layer-top altitudes from 0.5-3 km, calculated from the hodographs in Figure 6a. In ERA5, SRH gradually increases with increasing layer-top altitude as expected given the two-layer structure of the hodograph. The results are again qualitatively similar for the two models. Both models have a moderate low bias in SRH relative to ERA5 at all levels. The bias is smallest for the 0-3 km layer (-20% and -25% for MPI and CNRM, respectively) and remains relatively constant across depths for MPI but increases in magnitude moving towards shallower layers for CNRM, with a 30-35% underestimation for layers up to 1.5 km. Since SRH is equal to twice the vector area of the hodograph layer relative to the storm motion vector, the low bias in SRH in the two models arises due to the weaker southerly component of the flow in the lowest 1 km of their hodographs relative to ERA5 in Figure 6a. This difference can also be interpreted as due to the low bias in shear in the lowest 1 km seen in Figure 6c, as this low shear bias is effectively integrated upwards with height to yield the low biases in SRH, an effect amplified for the shallowest layers.

Future changes in the simulated hodographs in each model are shown in Figure 6e. In both models, the hodograph shifts west/northwestward within the lowest 1 km, resulting in slightly stronger curvature. Meanwhile, the shear structure above 1 km remains nearly constant relative to the 1 km flow. In CNRM, the entire hodograph is to first-order simply translated, with a northwestward shift at lower levels and a more northward shift at higher levels. In MPI, the hodograph barely shifts at all except for a westward shift near the surface. The net result is a slightly enhanced curvature in the lowest 1 km and shear in the lowest 2 km.
The storm motion vector shifts northwestward in CNRM and very slightly southward in MPI, both consistent with the shifts in their respective hodographs. Hence, for storm-relative flow, the shift in the hodograph is partially offset by the shift in the storm-motion vector.

The change in integrated bulk shear is shown in Figure 6f and changes in the vertical distribution of bulk shear are shown in Figure 6g, along with the percentage differences in future relative to historical for each. Recall that for our analysis we restricted our range of S06 values to focus on future changes in vertical structure for a given value of bulk environmental parameters. Unsurprisingly, then, we find that the 0-6 km wind shear changes only minimally in both models, with small increases of approximately 5%. The percent change is relatively small for all layer-top altitudes greater than 2 km and consistent in both models, with near zero change for layer-top altitudes of 2-3 km in CNRM. In contrast, within the lowest 2 km, integrated bulk shear increases more strongly in both models, though the models differ in magnitude and altitude of peak change: both models show +10% at 1.5 km, but for shallower layers this value decreases towards zero in MPI whereas it increases to a peak of +40% for the shallowest layer (500 m) in CNRM. This strong increase in near-surface shear in CNRM is associated with the stronger northwestward shift in the hodograph at 0.5 km relative to the surface noted above. These outcomes are also evident in the changes in layerwise bulk shear profiles, with the lone notable difference being that CNRM exhibits more substantial increases in shear within the 2-5 km layer associated with a subtle reduction in the curvature of the hodograph (Figure 6a), though these layerwise directional shear changes have a relatively small impact when integrated over a deeper layer such as for S06. Overall, the wind shear changes over all layer depths are modest, though with some indication of a stronger increase in shear (corresponding to a larger percentage change as well) within the lowest 1 km. However, the magnitude and particularly the vertical structure of those changes differ markedly between our two models, indicating significant uncertainty in this outcome. Moreover, it is important to note that the model biases relative to ERA5 were also largest in the lowest 1 km, which may further reduce confidence in the model outcomes despite the consistency between models.

Figure 6: (a) Mean hodographs for ERA5 and the historic runs of MPI and CNRM. Dots correspond in order to the altitudes (0.5, 1, 2, 3, 4, 5, and 6 km), and the Bunkers storm motion is shown plotted as an 'M'. (b) Mean bulk shear integrated from the surface up to layer-top altitudes up to 6 km, for ERA5 and the historic runs of MPI and CNRM. (c) As in (b) but for the vertical profile of bulk shear calculated layerwise every 500 m (plotted point is at layer center). (d) As in (b) but for SRH up to 3 km. (e)-(h) As in (a)-(d) but for the future ssp370 and the historic runs of MPI and CNRM. The shaded area represents the interquartile range (25th–75th percentile), and the difference plot shows the differences between the ssp370 future and historic simulations of both models. For the differences in (f)-(h), a filled circle indicates the means are statistically significantly different at the 95 percent confidence level.
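For reference, the layerwise shear profile analyzed above (Figure 6c,g) can be computed as in the following sketch, which assumes hypothetical profile arrays, interpolates the wind components to 500-m layer edges, and applies the vector difference of Eq. (2) within each layer.

```python
import numpy as np

def layerwise_shear(z_m, u_ms, v_ms, dz=500.0, z_top=6000.0):
    """Bulk shear magnitude (m/s) within successive dz-deep layers,
    one value per layer from the surface up to z_top."""
    edges = np.arange(0.0, z_top + dz, dz)
    u_edge = np.interp(edges, z_m, u_ms)
    v_edge = np.interp(edges, z_m, v_ms)
    return np.hypot(np.diff(u_edge), np.diff(v_edge))
```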
The change in SRH is shown in Figure 6h, along with the percentage differences in future relative to historical. The qualitative structure of the changes in SRH mirrors the changes in integrated bulk shear discussed above (Figure 6f). Both models show increases in SRH over all layer depths in the range of +10-20%. Changes are smaller for the 0-3 km layer (+10-15%) in both models. For shallower layers the models differ, with MPI showing the largest increase for the 0-1.5 km layer (+20%) while CNRM shows SRH increasing more strongly for the shallowest layers (+20% for the 0-500 m layer). These increases in SRH are associated with the increases in shear found primarily within the lowest 1.5 km, whose detailed structure differs between the two models. Such changes in the low-level shear structure are effectively integrated upwards in the calculation of SRH. Ultimately, both models show consistent moderate increases in both 0-1 km and 0-3 km SRH, though they give highly divergent behavior for 0-500 m SRH, again reflecting significant uncertainty in the representation of the detailed structure of wind shear within the lowest kilometer.

Overall, for the kinematic fields the two models behave quite similarly to one another in both their representation of the historical climate (and hence their biases too) and in their projected changes in these fields with future climate change. Both models project significant changes in the hodograph principally in the lowest 2 km. These changes translate to projected increases in shear below 2 km, though with minimal changes in 0-6 km shear. The changes in shear translate to projected increases in SRH at all levels, with similar moderate increases for the 0-3 km and 0-1 km levels, but only one model shows a comparable increase for 0-500 m.

### Implications

Taken together, two notable changes emerge: potential increases in low-level shear and in MSE deficit. Otherwise, the free troposphere becomes deeper while maintaining a relatively constant tropopause temperature and lapse rate, indicating a thermodynamic environment that remains highly conducive to buoyant deep convective updrafts. Given recent work finding that tornadogenesis depends most strongly on shear in the lowest 1 km and possibly even the lowest 0.5 km (Coffer et al., 2019, 2020), the increase in low-level shear suggests that the SRH ingredient for tornadogenesis could be enhanced when supercells form from a fixed bulk environment. However, other less well-understood factors that influence tornadoes (e.g., updraft width) may also change with warming in ways that could potentially offset the changes in shear structure analyzed here. Moreover, the poor representation of low-level shear in the historical climate as compared to ERA5, coupled with the lack of clear agreement in the structure of its projected change between our two models, indicates that this conclusion must be made with substantial caveats.

Meanwhile, an increase in MSE deficit suggests the possibility of enhanced entrainment dilution. An increase in entrainment dilution may result in a reduction in the frequency of severe convective storms by reducing the true parcel CAPE from its undiluted value (Peters et al., 2020, 2023). The above interpretation is consistent with recent high-resolution regional modeling studies that find an increase in the frequency of SCS environments yet a decrease in the frequency of SCS activity over the central Great Plains region studied here (Ashley et al., 2023). This finding is typically ascribed to increases in convective inhibition, though changes in entrainment may play a similar role and are worthy of deeper study.
Additionally, the slight decrease in boundary layer relative humidity may increase LCLs, which may exacerbate these entrainment effects. Overall, evaluation of the net response of severe thunderstorms and tornadoes themselves is too complex to be ascertained here. Nonetheless, we hope this work provides a basis from climate model projections to build upon in future work.

## 4 Conclusions

Recent evidence suggests that severe thunderstorms and possibly tornadoes may become more frequent and/or intense in the future. This behavior is typically explained via changes in common vertically integrated (i.e., "bulk") variables, such as CAPE and 0-6 km shear. However, the vertical structure of thermodynamic and kinematic profiles in severe convective storm environments possesses many more degrees of freedom that can change independently of these standard bulk parameters. This work examined how climate change may affect the complete vertical structure of these environments for a fixed range of values of CAPE and S06, using soundings over the central Great Plains from two high-performing climate models for the high-end forcing ssp370 scenario. Hence, our results may be thought of as probing future climate changes in severe convective storm environments _not_ associated with changes in CAPE and S06. We summarize our primary results as follows:

1. Changes in thermal structure: temperature profiles warm relatively uniformly with height, free tropospheric lapse rates decrease slightly, and the tropopause shifts upwards at approximately constant temperature.

2. Changes in moisture structure: for relative humidity, the boundary layer becomes slightly drier (by 2-5%) while the free troposphere becomes slightly moister (by 3-5%). The boundary layer height remains constant.

3. Changes in moist static energy deficit: free-tropospheric moist static energy deficit increases (1-10%) above 4 km altitude, suggesting that the effects of entrainment could become stronger with warming, though the models do not agree on the magnitude. This outcome is consistent with the finding that MSE itself increases relatively uniformly with height, with a slightly larger increase within the boundary layer.

4. Changes in kinematic structure: hodographs become more strongly curved, due to stronger southerly/southeasterly flow between 500 m and 1.5 km relative to the flow near the surface and above 3 km, which remains largely unchanged. This behavior results in stronger wind shear within the lowest 1.5 km and greater storm-relative helicity within the lowest 1.5 km (which enhances SRH through all layers up to 3 km).

5. Changes in both thermodynamic and kinematic profiles are relatively consistent between our two models, despite different bias structures relative to ERA5, suggesting that the qualitative outcomes are robust.

6. Overall, the most notable change is the increase in low-level shear and SRH, which may suggest increased potential for severe thunderstorms and tornadoes forming within high CAPE and high shear environments in our domain. However, modest changes in other factors, including both the slight increase in free tropospheric MSE deficit, which may enhance entrainment effects, and the slight decrease in boundary layer relative humidity, which may increase LCLs, may offset these effects. Evaluation of the net response of severe thunderstorms and tornadoes themselves is too complex to be ascertained here.
The findings of a slight reduction in free tropospheric lapse rate and boundary layer relative humidity with warming within our fixed ranges of CAPE and S06 are consistent with similar findings in Wang and Moyer (2023) for the entire distribution over summertime North America. We reiterate that such changes would be on top of changes associated with the expected large increases in CAPE with warming that would likely increase the intensity, and possibly frequency, of severe convective storms as noted in past studies. Our interpretation is by no means final, but rather should be tested in numerical simulations and placed in the context of other climate-driven changes in severe convective storm environments.

Our effort is a starting point for considering the full vertical structure of the thermodynamic and kinematic environment from climate models to better understand how severe thunderstorms and tornadoes may change with climate change. Ideally, this information would be integrated into a climate-dependent theory for these phenomena. Future work should test outcomes in limited-area numerical model experiments of individual storms. Moreover, our analysis can be extended to other geographic regions, particularly the southeast United States, where the nature of severe thunderstorm environments and events is known to differ in complex ways from the Great Plains (Sherburn et al., 2016). This carries added importance given the recent shift in tornado activity towards the southeast U.S. (Gensini and Brooks, 2018). The outcomes can be compared against regional model experiments that can properly represent convective initiation and the array of convective forcing agents found in the real world. Additionally, future work may want to consider the components of storm-relative helicity, particularly environmental horizontal vorticity and storm-relative flow, given emerging research that it is specifically these two components of SRH that are associated with intense low-level mesocyclogenesis (Goldacker and Parker, 2023). Finally, we note that here we do not offer explanations for _why_ key aspects of the vertical structure, such as the curvature of the low-level hodograph or the free-tropospheric relative humidity, do or do not change in the future. This type of understanding requires a broader analysis of how the large-scale circulation pattern will change and how this interacts with the land surface over the continental interior. Such endeavors are highly complex but also worthwhile, especially for understanding climate change impacts on tornadoes given the critical importance of the near-surface wind structure for tornadogenesis.

Acknowledgments. The authors thank three anonymous reviewers for thorough and highly constructive feedback that greatly improved this manuscript. The authors were supported by National Science Foundation (NSF) AGS grants 1648681 and 2209052 and NASA grant 19-EARTH20-0216. We also acknowledge the open-source Python community, and particularly the authors and contributors to the Matplotlib (Hunter, 2007), NumPy (Oliphant, 2006), and MetPy (May et al., 2008-2020) packages that were used to generate many of the analyses and figures. Other post-processed data are available from the authors upon request.
Data availability statement. 6-hourly ERA5 reanalysis data were accessed on model levels from [https://doi.org/10.5065/XV5R-5344](https://doi.org/10.5065/XV5R-5344) and for the near-surface and on pressure levels from [https://doi.org/10.5065/BHGN-5N20](https://doi.org/10.5065/BHGN-5N20) (European Centre for Medium-Range Weather Forecasts, 2019). 6-hourly CMIP6 model historical and future experiment (ssp370, ssp585) data were accessed from [https://esgf-node.llnl.gov/search/cmip6](https://esgf-node.llnl.gov/search/cmip6). Analyses were performed on the NCAR Cheyenne and Casper supercomputers (Computational and Information Systems Laboratory, 2019) as well as on computational resources provided by Purdue Rosen Center for Advanced Computing (RCAC; McCartney et al., 2014).
2303.12200
Schoen's conjecture for limits of isoperimetric surfaces
Let $(M,g)$ be an asymptotically flat Riemannian manifold of dimension $3\leq n\leq 7$ with non-negative scalar curvature. R. Schoen has conjectured that $(M,g)$ is isometric to Euclidean space if it admits a non-compact area-minimizing hypersurface $\Sigma \subset M$. This has been proved by O. Chodosh and the first-named author in the case where $n = 3$. In this paper, we confirm this conjecture in the case where $3<n\leq 7$ and $\Sigma$ arises as the limit of isoperimetric surfaces. As a corollary, we obtain that large isoperimetric surfaces diverge unless $(M,g)$ is flat. By contrast, we show that, in dimension $3<n\leq 7$, a large part of spatial Schwarzschild is foliated by non-compact area-minimizing hypersurfaces.
Michael Eichmair, Thomas Koerber
2023-03-21T21:15:58Z
http://arxiv.org/abs/2303.12200v1
# Schoen's conjecture for limits of isoperimetric surfaces ###### Abstract. Let \((M,g)\) be an asymptotically flat Riemannian manifold of dimension \(3\leq n\leq 7\) with non-negative scalar curvature. R. Schoen has conjectured that \((M,g)\) is isometric to Euclidean space if it admits a non-compact area-minimizing hypersurface \(\Sigma\subset M\). This has been proved by O. Chodosh and the first-named author in the case where \(n=3\). In this paper, we confirm this conjecture in the case where \(3<n\leq 7\) and \(\Sigma\) arises as the limit of isoperimetric surfaces. As a corollary, we obtain that large isoperimetric surfaces diverge unless \((M,g)\) is flat. By contrast, we show that, in dimension \(3<n\leq 7\), a large part of spatial Schwarzschild is foliated by non-compact area-minimizing hypersurfaces. ## 1. Introduction Throughout, we assume that \((M,g)\) is a connected, complete Riemannian manifold. The following conjecture of R. Schoen is related to his proof of the positive mass theorem with S.-T. Yau in [25, 26]. **Conjecture 1** (Cp. [27, p. 48]).: _Let \((M,g)\) be a Riemannian manifold of dimension \(3\leq n\leq 7\) and asymptotically flat of rate \(\tau=n-2\) with non-negative scalar curvature. Suppose that there exists a non-compact area-minimizing boundary \(\Sigma=\partial\Omega\). Then \((M,g)\) is isometric to flat \(\mathbb{R}^{n}\)._ The background on asymptotically flat manifolds, area-minimizing boundaries, and isoperimetric regions used in this paper is recalled in Appendix A and Appendix B. Conjecture 1 has been proved in the special case where \(n=3\) by O. Chodosh and the first-named author [8, Theorem 1.6]. A natural way in which non-compact area-minimizing boundaries arise is as the limit of isoperimetric surfaces. The goal of this paper is to settle Conjecture 1 in this case. **Theorem 2**.: _Let \((M,g)\) be a Riemannian manifold of dimension \(3<n\leq 7\) and asymptotically flat of rate \(\tau>n-3\) with non-negative scalar curvature. Suppose that there exist a non-compact area-minimizing boundary \(\Sigma=\partial\Omega\) and isoperimetric regions \(\Omega_{1},\,\Omega_{2},\ldots\subset M\) with \(\Omega_{k}\to\Omega\) locally smoothly. Then \((M,g)\) is isometric to flat \(\mathbb{R}^{n}\)._ **Remark 3**.: _Note that the decay rate \(\tau>n-3\) guarantees that coordinate hyperplanes in the end of \((M,g)\) are asymptotically flat with mass zero._ O. Chodosh, Y. Shi, H. Yu, and the first-named author have showed that in asymptotically flat Riemannian three-manifolds with non-negative scalar curvature and positive mass, there is a unique isoperimetric region for every given sufficiently large amount of volume and that these large isoperimetric regions are close to centered coordinate balls in the chart at infinity; see [10, Theorem 1.1]. An alternative proof of this result with a different condition on the scalar curvature was subsequently given by H. Yu; see [17, Theorem 1.6]. As a step towards the characterization of large isoperimetric regions in asymptotically flat Riemannian manifolds of dimension \(3<n\leq 7\), Corollary 4 shows that the (unique) large components of the boundaries of such regions necessarily diverge. **Corollary 4**.: _Let \((M,g)\) be a Riemannian manifold of dimension \(3<n\leq 7\) and asymptotically flat of rate \(\tau>n-3\) with non-negative scalar curvature and positive mass. 
Let \(K\subset M\) be a compact set that is disjoint from the boundary of \(M\) and suppose that there are isoperimetric regions \(\Omega_{1},\,\Omega_{2},\ldots\) in \((M,g)\) with \(|\Omega_{k}|\to\infty\). Then, for all \(k\) sufficiently large, either \(K\subset\Omega_{k}\) or \(K\cap\Omega_{k}=\emptyset\)._ ### Outline of our arguments Let \((M,g)\) be an asymptotically flat Riemannian manifold of dimension \(3\leq n\leq 7\) with non-negative scalar curvature. Suppose that \(\Sigma=\partial\Omega\) is a non-compact area-minimizing boundary. In particular, for every open set \(U\Subset M\) and every smooth variation \(\{\Sigma(s)\}_{|s|<\epsilon}\) of \(\Sigma=\Sigma(0)\) with compact support in \(U\), \[\left.\frac{d}{ds}\right|_{s=0}\lvert\Sigma(s)\cap U\rvert=0\qquad\text{ and }\qquad\left.\frac{d^{2}}{ds^{2}}\right|_{s=0}\lvert\Sigma(s)\cap U\rvert\geq 0.\] Equivalently, the mean curvature of \(\Sigma\) vanishes and the stability inequality \[\int_{\Sigma}(|h|^{2}+Ric(\nu,\nu))\,f^{2}\,\mathrm{d}\mu\leq\int_{\Sigma}\lvert\nabla f\rvert^{2}\,\mathrm{d}\mu\] holds for all \(f\in C_{c}^{\infty}(\Sigma)\). Here, \(\mathrm{d}\mu\) is the area element, \(\nabla\) the covariant derivative, \(\nu\) the outward normal, and \(h\) the second fundamental form, all with respect to \(\Sigma\). \(Ric\) denotes the Ricci tensor of \((M,g)\). We say that \(\Sigma\) is stable with respect to asymptotically constant variations if, in addition, \[\int_{\Sigma}(|h|^{2}+Ric(\nu,\nu))\,(1+f)^{2}\,\mathrm{d}\mu\leq\int_{\Sigma}\lvert\nabla f\rvert^{2}\,\mathrm{d}\mu \tag{1}\] for all \(f\in C_{c}^{\infty}(\Sigma)\). The proof of the positive mass theorem [26, Theorem 4.2] shows that an asymptotically flat area-minimizing boundary that has mass zero and which is stable with respect to asymptotically constant variations is isometric to flat \(\mathbb{R}^{n-1}\) and totally geodesic. Moreover, the scalar curvature of \((M,g)\) vanishes along such a boundary; see Proposition 30. An important ingredient in the proof of Conjecture 1 in [8] in the case where \(n=3\) and specific to three dimensions is that every non-compact area-minimizing boundary is stable with respect to asymptotically constant variations; see, e.g., [25, p. 54]. Using this, O. Chodosh and the first-named author have showed that \((M,g)\) is foliated by non-compact area-minimizing boundaries. The construction of these boundaries is based on solving Plateau problems with respect to a carefully chosen local perturbation of the metric \(g\) and inspired by the proof of a conjecture of J. Milnor due to G. Liu [20]. An adaptation of an argument by M. Anderson and L. Rodriguez [1] shows that the curvature tensor of \((M,g)\) vanishes along each leaf of this foliation and hence on all of \(M\). Our next results show that the situation is markedly different in the case where \(3<n\leq 7\). **Theorem 5**.: _Let \(3<n\leq 7\) and \((M,g)\) be spatial Schwarzschild of dimension \(n\) with mass \(m=2\). There exist infinitely many mutually disjoint non-compact area-minimizing hypersurfaces in \((M,g)\)._ The construction of the Riemannian manifold \((M,g)\) in Theorem 6 below is based on the gluing technique developed by A. Carlotto and R. Schoen [9]. **Theorem 6**.: _Let \(3<n\leq 7\) and \((n-2)/2<\tau<(n-2)\).
There exists a Riemannian manifold \((M,g)\) of dimension \(n\) that is asymptotically flat of rate \(\tau\) with non-negative scalar curvature and positive mass which contains infinitely many mutually disjoint non-compact area-minimizing hypersurfaces all of which are stable with respect to asymptotically constant variations._ **Remark 7**.: _Theorem 5 and Theorem 6 show that Conjecture 1 is not true without any further assumptions. We note that the area-minimizing hypersurfaces whose existence is asserted in Theorem 5 are not stable with respect to asymptotically constant variations; see Proposition 40. By contrast, the Riemannian manifold constructed in the proof of Theorem 6 is not asymptotic to Schwarzschild._ **Remark 8**.: _O. Chodosh and D. Ketover have showed in [11] that in every complete asymptotically flat Riemannian three-manifold \((M,g)\) which does not contain closed embedded minimal surfaces, through every point, there exists a properly embedded minimal plane; see also the subsequent improvement due to L. Mazet and H. Rosenberg in [21]. Note that if the scalar curvature of \((M,g)\) is non-negative, none of these planes are area-minimizing unless \((M,g)\) is flat \(\mathbb{R}^{3}\)._ In the proof of [26, Theorem 4.2], R. Schoen and S.-T. Yau have showed that if \((M,g)\) is asymptotic to Schwarzschild \[g_{S}=\left(1+\frac{m}{2}\left|x\right|_{\bar{g}}^{2-n}\right)^{\frac{4}{n-2}}\,\bar{g} \tag{2}\] with negative mass \(m<0\), then \((M,g)\) contains a non-compact area-minimizing boundary that is stable with respect to asymptotically constant variations. Here, \(\bar{g}\) is the Euclidean metric. This boundary is obtained as the limit of solutions to the Plateau problem with prudently chosen boundaries. Our starting point is the following complementary consideration. If \(\Sigma=\partial\Omega\) arises as the limit of isoperimetric surfaces, then we expect \(\Sigma\) to be stable with respect to asymptotically constant variations. We now describe the proof of Theorem 2. Suppose that \(3<n\leq 7\). A first difficulty not present in the case where \(n=3\) is to show that \(\Sigma\) is asymptotically flat with mass zero. This is complicated by the fact that \(\Sigma\) is not known to be stable with respect to asymptotically constant variations at this point. By contrast, this stability is an additional assumption in the work of A. Carlotto [7]. To remedy this, we prove the explicit estimate \[1-O(r^{-\frac{\tau}{n-1}})\leq\frac{\left|B_{r}\cap\Sigma\right|}{\omega_{n-1}\,r^{n-1}}\leq 1+O(r^{-\tau}); \tag{3}\] see Lemma 16 and Lemma 17. Here, \(B_{r}\), \(r>1\), is the bounded region in \((M,g)\) whose boundary corresponds to \(\{x\in\mathbb{R}^{n}:|x|_{\bar{g}}=r\}\) in the chart at infinity and \(\omega_{n-1}\) the Euclidean area of an \((n-1)\)-dimensional unit ball. The proof of (3) is based on the monotonicity formula applied to carefully chosen, off-centered balls. We then use (3) to prove a precise asymptotic expansion for \(\Sigma\); see Proposition 9. Using that \(\tau>n-3\), it follows that \(\Sigma\) is asymptotically flat with mass zero; see Lemma 22. We also note that these arguments work for any stable properly embedded non-compact minimal hypersurface with \(r^{1-n}\left|B_{r}\cap\Sigma\right|=O(1)\) for \(r>1\).
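To illustrate how the threshold \(\tau>n-3\) enters at this point, the following is a heuristic decay count written out under the graphical asymptotics of Proposition 9; it is an informal reading of Remark 3 and Lemma 22, not a substitute for the actual argument.

```latex
% Heuristic decay count for the mass of an end of \Sigma.
% By Proposition 9, an end of \Sigma is the graph of a function u with
% |\bar{\nabla} u| = O(|y|^{-\tau+\varepsilon}), so the induced metric
% satisfies g^{\Sigma} = \bar{g} + O(|y|^{-\tau+\varepsilon}) and
% |\bar{D} g^{\Sigma}| = O(|y|^{-\tau+\varepsilon-1}).
\begin{align*}
  m(\Sigma)
  &= \lim_{r \to \infty} c_{n} \int_{S^{n-2}_{r}}
     \big( \partial_{i} g^{\Sigma}_{ij} - \partial_{j} g^{\Sigma}_{ii} \big)
     \, \nu^{j} \, \mathrm{d}\bar{\mu} \\
  &= \lim_{r \to \infty}
     O\big(r^{\,n-2}\big) \cdot O\big(r^{-\tau+\varepsilon-1}\big)
   = \lim_{r \to \infty} O\big(r^{\,n-3-\tau+\varepsilon}\big) = 0,
\end{align*}
% since the sphere S^{n-2}_{r} has area O(r^{n-2}) and, because
% \tau > n-3, the exponent n-3-\tau+\varepsilon is negative for all
% sufficiently small \varepsilon > 0.
```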
Next, we assume that \(\Sigma=\partial\Omega\) where \(\Omega\) is the limit of large isoperimetric regions \(\Omega_{1}\), \(\Omega_{2},\ldots\) with \(\left|\Omega_{k}\right|\to\infty\) and prove that \(\Sigma\) is stable with respect to asymptotically constant variations. To this end, we consider the second variation of area of \(\Omega_{k}\) with respect to a suitable Euclidean translation that is corrected to be volume-preserving. The stability with respect to asymptotically constant variations then follows by passing to the limit \(k\to\infty\), using the asymptotic expansion for \(\Sigma\) obtained in Proposition 9, the assumption that \(\tau>n-3\), and the integration by parts formula in Lemma 63; see Proposition 28. The arguments from [26] then show that \(\Sigma\) is isometric to flat \(\mathbb{R}^{n-1}\) and totally geodesic; see Proposition 30. Finally, given any point \(p\in M\), we construct a new non-compact area-minimizing boundary \(\Sigma_{p}\subset M\) that passes through \(p\); see Proposition 32. In view of Theorem 5 and different from the situation in [8], we need to ensure that \(\Sigma_{p}\) is again stable with respect to asymptotically constant variations. To this end, we construct suitable local perturbations of the metric \(g\) in Lemma 31 and obtain \(\Sigma_{p}\) as the limit of large isoperimetric regions with respect to these perturbations. A crucial ingredient in the construction of \(\Sigma_{p}\) is a result from [8], stated here as Lemma 61; namely: Asymptotically flat Riemannian manifolds of positive mass admit isoperimetric regions of every sufficiently large volume. Although the area-minimizing boundaries obtained in our construction do not necessarily form a foliation, we show how to adapt the techniques developed in [1, 8, 20] to conclude that the curvature tensor of \((M,g)\) vanishes along each of these boundaries. This completes the proof of Theorem 2. ### Outline of related results J. Metzger and the first-named author [12] have observed that the existence of area-minimizing boundaries in asymptotically flat manifolds is related to the positioning of large isoperimetric regions. In particular, the authors show that, in asymptotically flat Riemannian three-manifolds, the existence of large isoperimetric regions that do not diverge is not compatible with positive scalar curvature; see [12, Theorem 1.5]. In subsequent work [14, Theorem 1.1], they have showed that, if \((M,g)\) is asymptotic to Schwarzschild of dimension \(n\geq 3\), large isoperimetric regions are unique and geometrically close to centered coordinate balls. This implies, in particular, that Theorem 2 holds in all dimensions if \((M,g)\) is assumed to be asymptotic to Schwarzschild. We also note that Conjecture 1 has been proved by A. Carlotto [7, Theorem 1 and Theorem 2] in the case where \(3\leq n\leq 7\) under the additional assumptions that \((M,g)\) is asymptotic to Schwarzschild and that \(\Sigma\) is stable with respect to asymptotically constant variations. In this case, Proposition 30 below yields an immediate contradiction once \(\Sigma\) is showed to be asymptotically flat with mass zero. Finally, we note that the method of O. Chodosh and the first-named author in [8, p. 991] has been used by C. Li to study the polyhedron rigidity conjecture using isoperimetric regions [18, 19]. ### Acknowledgments The first-named author acknowledges the support of the START-Project Y963 of the Austrian Science Fund. 
The second-named author acknowledges the support of the Lise-Meitner-Project M3184 of the Austrian Science Fund. ## 2. Asymptotic behavior of area-minimizing surfaces In this section, we assume that \(g\) is a Riemannian metric on \(\mathbb{R}^{n}\) such that \[|g-\bar{g}|_{\bar{g}}+|x|_{\bar{g}}\,|\bar{D}g|_{\bar{g}}+|x|_{\bar{g}}^{2}\,|\bar{D}^{2}g|_{\bar{g}}=O(|x|_{\bar{g}}^{-\tau}),\] where \(3\leq n\leq 7\) and \(0<\tau\leq n-2\). Here, \(\bar{g}\) denotes the Euclidean metric. Geometric quantities are computed with respect to \(g\) unless indicated otherwise. Let \(\Sigma\subset\mathbb{R}^{n}\) be a non-compact two-sided properly embedded hypersurface with \(\partial\Sigma=\emptyset\). We assume that \(\{x\in\mathbb{R}^{n}:|x|_{\bar{g}}>1\}\cap\Sigma\) is a stable minimal surface and that \[\limsup_{r\to\infty}\frac{|B_{r}^{n}(0)\cap\Sigma|_{\bar{g}}}{\omega_{n-1}\,r^{n-1}}<\infty. \tag{4}\] The goal of this section is to prove the following result. **Proposition 9**.: _There exist \(r_{0}>2,\) an integer \(N\geq 1\), numbers \(a_{1},\dots,a_{N}\in\mathbb{R}\), functions \(u_{1},\dots,u_{N}\in C^{\infty}(\mathbb{R}^{n-1}),\) and a rotation \(S\in SO(n)\) such that_ \[S(\Sigma\setminus B_{r_{0}}^{n}(0))\subset\bigcup_{i=1}^{N}\{(y,u_{i}(y)):y\in\mathbb{R}^{n-1}\}.\] _Moreover, for all \(0<\varepsilon<\tau/2\) and \(i=1,\dots,N\),_ \[|y|_{\bar{g}}^{-1}\,|u_{i}(y)-a_{i}|+|\bar{\nabla}u_{i}|_{\bar{g}}+|y|_{\bar{g}}\,|\bar{\nabla}^{2}u_{i}|_{\bar{g}}=O(|y|_{\bar{g}}^{-\tau+\varepsilon}). \tag{5}\] **Remark 10**.: _In the case where \(g\) is asymptotic to the Schwarzschild metric (2), Proposition 9 has been proved by A. Carlotto in [7, Lemma 18] under the additional assumption that \(\Sigma\) is stable with respect to asymptotically constant variations in the sense of (1). There, a version of Corollary 20 is obtained using techniques developed by L. Simon [29, Theorem 5.7]; see [7, p. 10]. A version of estimate (14) is obtained as a consequence of the stability with respect to asymptotically constant variations; see [7, pp. 17-18]. Here, we provide a new proof of these results in [7] based on the monotonicity formula which does not require the assumption of stability with respect to asymptotically constant variations._ We first reduce the proof of Proposition 9 to the case where \(\Sigma\) has only one end. **Lemma 11**.: _There exist \(r_{0}>1\) and an integer \(N\geq 1\) such that \(\Sigma\setminus B_{r_{0}}^{n}(0)\) has \(N\) connected components \(\Sigma_{1},\dots,\Sigma_{N}\subset\mathbb{R}^{n}\) each satisfying_ \[\lim_{r\to\infty}\frac{|B_{r}^{n}(0)\cap\Sigma_{i}|_{\bar{g}}}{\omega_{n-1}\,r^{n-1}}=1. \tag{6}\] Proof.: By the work of R. Schoen and L. Simon [24, Theorem 3], \(h=O(|x|_{\bar{g}}^{-1})\) where \(h\) is the second fundamental form of \(\Sigma\) with respect to a choice of unit normal. In conjunction with (4), the assumption that \(\Sigma\) is non-compact, and the classification of stable minimal cones in \(\mathbb{R}^{n}\) by J. Simons [31, §6], it follows that for each sequence \(\{r_{k}\}_{k=1}^{\infty}\) of numbers \(r_{k}>0\) with \(r_{k}\to\infty\), \(r_{k}^{-1}\,\Sigma\) converges to a hyperplane locally smoothly in \(\mathbb{R}^{n}\setminus\{0\}\) possibly with multiplicity. In particular, \(\Sigma\) intersects \(S_{r}^{n-1}(0)\) transversally for all \(r>1\) sufficiently large. It follows that there is \(r_{0}>1\) such that the number of components of \(\Sigma\setminus B_{r}^{n}(0)\) is finite and constant for all \(r>r_{0}\).
Let \(\Sigma_{1}\) be a component of \(\Sigma\setminus B_{r_{0}}^{n}(0)\). Since \(\Sigma_{1}\setminus B_{r}^{n}(0)\) is connected for all \(r>r_{0}\), \(r_{k}^{-1}\,\Sigma_{1}\) converges to a hyperplane locally smoothly in \(\mathbb{R}^{n}\setminus\{0\}\) with multiplicity one for each sequence \(\{r_{k}\}_{k=1}^{\infty}\) of numbers \(r_{k}>0\) with \(r_{k}\to\infty\). In conjunction with (4), we obtain (6).

In view of Lemma 11, we may and will assume that \[\lim_{r\to\infty}\frac{|B_{r}^{n}(0)\cap\Sigma|_{\bar{g}}}{\omega_{n-1}\,r^{n-1}}=1 \tag{7}\] in the proof of Proposition 9. We also record the following byproduct of the proof of Lemma 11.

**Lemma 12**.: _Let \(\{r_{k}\}_{k=1}^{\infty}\) be a sequence of numbers \(r_{k}>1\) with \(r_{k}\to\infty\). Then, passing to a subsequence, \(r_{k}^{-1}\,\Sigma\) converges locally smoothly in \(\mathbb{R}^{n}\setminus\{0\}\) with multiplicity one to a hyperplane through the origin._

In Lemma 13 and Lemma 15, we collect basic properties of \(\Sigma\).

**Lemma 13**.: _There holds, as \(|x|_{\bar{g}}\to\infty\),_ \[|x|_{\bar{g}}\,|\bar{H}|+|x|_{\bar{g}}^{2}\,|\bar{\nabla}\bar{H}|_{\bar{g}}=O(|x|_{\bar{g}}^{-\tau})\] _and_ \[|x|_{\bar{g}}\,|\bar{h}|_{\bar{g}}+|x|_{\bar{g}}^{2}\,|\bar{\nabla}\bar{h}|_{\bar{g}}=o(1).\]

Proof.: This follows from Lemma 64 using Lemma 12 and that \(H=0\) on \(\{x\in\mathbb{R}^{n}:|x|_{\bar{g}}>1\}\cap\Sigma\).

To proceed, we recall the monotonicity formula from [28].

**Lemma 14** ([28, 17.4]).: _Let \(x_{0}\in\mathbb{R}^{n}\) and \(0<s<t\). There holds_ \[\begin{split} t^{1-n}\,|B_{t}^{n}(x_{0})\cap\Sigma|_{\bar{g}}&=s^{1-n}\,|B_{s}^{n}(x_{0})\cap\Sigma|_{\bar{g}}+\int_{(B_{t}^{n}(x_{0})\setminus B_{s}^{n}(x_{0}))\cap\Sigma}|x-x_{0}|_{\bar{g}}^{-1-n}\,\bar{g}(x-x_{0},\bar{\nu})^{2}\,\mathrm{d}\bar{\mu}\\ &\quad-\frac{1}{n-1}\,\int_{(B_{t}^{n}(x_{0})\setminus B_{s}^{n}(x_{0}))\cap\Sigma}(t^{1-n}-|x-x_{0}|_{\bar{g}}^{1-n})\,\bar{g}(x-x_{0},\bar{\nu})\,\bar{H}\,\mathrm{d}\bar{\mu}\\ &\quad-\frac{1}{n-1}\,\int_{B_{s}^{n}(x_{0})\cap\Sigma}(t^{1-n}-s^{1-n})\,\bar{g}(x-x_{0},\bar{\nu})\,\bar{H}\,\mathrm{d}\bar{\mu}.\end{split} \tag{8}\]

**Lemma 15**.: _There holds_ \[\sup_{x\in\mathbb{R}^{n}}\sup_{r>0}\frac{|B_{r}^{n}(x)\cap\Sigma|_{\bar{g}}}{\omega_{n-1}\,r^{n-1}}<\infty.\]

Proof.: Suppose, for a contradiction, that there are sequences \(\{r_{k}\}_{k=1}^{\infty}\) of numbers \(r_{k}>0\) and \(\{x_{k}\}_{k=1}^{\infty}\) of points \(x_{k}\in\mathbb{R}^{n}\) with \[\lim_{k\to\infty}\frac{|B_{r_{k}}^{n}(x_{k})\cap\Sigma|_{\bar{g}}}{\omega_{n-1}\,r_{k}^{n-1}}=\infty. \tag{9}\] Passing to a subsequence and using that \(\Sigma\) is properly embedded, we may assume that either \[\lim_{k\to\infty}|x_{k}|_{\bar{g}}=\infty\qquad\text{or}\qquad\lim_{k\to\infty}r_{k}=\infty.\] Note that \[\liminf_{k\to\infty}\frac{|x_{k}|_{\bar{g}}}{r_{k}}\geq 3. \tag{10}\] Indeed, if not, then \(B_{r_{k}}^{n}(x_{k})\subset B_{4\,r_{k}}^{n}(0)\) for a subsequence. This is not compatible with (9) and (7). Let \(t_{k}=|x_{k}|_{\bar{g}}/2\). By Lemma 13, we have \[|x_{k}|_{\bar{g}}\,\bar{H}=O(|x_{k}|_{\bar{g}}^{-\tau}) \tag{11}\] on \(B_{t_{k}}^{n}(x_{k})\cap\Sigma\). We choose \(s_{k}\) with \(r_{k}\leq s_{k}\leq t_{k}\) such that \[\frac{|B_{s_{k}}^{n}(x_{k})\cap\Sigma|_{\bar{g}}}{\omega_{n-1}\,s_{k}^{n-1}}=\sup_{r_{k}\leq r\leq t_{k}}\frac{|B_{r}^{n}(x_{k})\cap\Sigma|_{\bar{g}}}{\omega_{n-1}\,r^{n-1}}.
\tag{12}\] Using the monotonicity formula (8) and (11), we have \[\frac{|B_{t_{k}}^{n}(x_{k})\cap\Sigma|_{\bar{g}}}{\omega_{n-1}\,t_{k}^{n-1}}\geq\frac{|B_{s_{k}}^{n}(x_{k})\cap\Sigma|_{\bar{g}}}{\omega_{n-1}\,s_{k}^{n-1}}-O(|x_{k}|_{\bar{g}}^{-1-\tau})\,\int_{(B_{t_{k}}^{n}(x_{k})\setminus B_{s_{k}}^{n}(x_{k}))\cap\Sigma}|x-x_{k}|_{\bar{g}}^{2-n}\,\mathrm{d}\bar{\mu}-O(|x_{k}|_{\bar{g}}^{-\tau})\,\frac{|B_{s_{k}}^{n}(x_{k})\cap\Sigma|_{\bar{g}}}{\omega_{n-1}\,s_{k}^{n-1}}.\] Using Lemma 65 and (12), we have \[\int_{(B_{t_{k}}^{n}(x_{k})\setminus B_{s_{k}}^{n}(x_{k}))\cap\Sigma}|x-x_{k}|_{\bar{g}}^{2-n}\,\mathrm{d}\bar{\mu}=O(|x_{k}|_{\bar{g}})\,\frac{|B_{s_{k}}^{n}(x_{k})\cap\Sigma|_{\bar{g}}}{\omega_{n-1}\,s_{k}^{n-1}}.\] In conjunction with (9) and (12), we conclude that \[\lim_{k\to\infty}\frac{|B_{t_{k}}^{n}(x_{k})\cap\Sigma|_{\bar{g}}}{\omega_{n-1}\,t_{k}^{n-1}}=\infty.\] This is not compatible with (7).

Next, we prove refined estimates on the area growth of \(\Sigma\).

**Lemma 16**.: _As \(s\to\infty\),_ \[\frac{|B_{s}^{n}(0)\cap\Sigma|_{\bar{g}}}{\omega_{n-1}\,s^{n-1}}\leq 1+O(s^{-\tau}).\]

Proof.: Using the monotonicity formula (8) and Lemma 13, we have, for every \(0<s<t\), \[\frac{|B_{s}^{n}(0)\cap\Sigma|_{\bar{g}}}{\omega_{n-1}\,s^{n-1}}\leq\frac{|B_{t}^{n}(0)\cap\Sigma|_{\bar{g}}}{\omega_{n-1}\,t^{n-1}}+O(1)\,\int_{(B_{t}^{n}(0)\setminus B_{s}^{n}(0))\cap\Sigma}|x|_{\bar{g}}^{1-n-\tau}\,\mathrm{d}\bar{\mu}+O(1)\,s^{1-n}\,\int_{B_{s}^{n}(0)\cap\Sigma}|x|_{\bar{g}}^{-\tau}\,\mathrm{d}\bar{\mu}.\] Using Lemma 65, it follows that \[\frac{|B_{s}^{n}(0)\cap\Sigma|_{\bar{g}}}{\omega_{n-1}\,s^{n-1}}\leq\frac{|B_{t}^{n}(0)\cap\Sigma|_{\bar{g}}}{\omega_{n-1}\,t^{n-1}}+O(t^{-\tau})+O(s^{-\tau}).\] Letting \(t\to\infty\) and using (7), the assertion follows.

**Lemma 17**.: _As \(t\to\infty\),_ \[\frac{|B_{t}^{n}(0)\cap\Sigma|_{\bar{g}}}{\omega_{n-1}\,t^{n-1}}\geq 1-O(t^{-\tau/(n-1)}).\]

Proof.: For \(t>1\) large, we choose \(x_{t}\in\Sigma\) with \(|x_{t}|_{\bar{g}}=t^{(n-1-\tau)/(n-1)}\). We apply the monotonicity formula (8) with \(x_{0}=x_{t}\). Letting \(s\to 0\), using that \(\Sigma\) is properly embedded and Lemma 13, we obtain \[\frac{|B_{t}^{n}(x_{t})\cap\Sigma|_{\bar{g}}}{\omega_{n-1}\,t^{n-1}}\geq 1+O(1)\,\int_{B_{t}^{n}(x_{t})\cap\Sigma}|x-x_{t}|_{\bar{g}}^{2-n}\,|x|_{\bar{g}}^{-1-\tau}\,\mathrm{d}\bar{\mu}.\] Clearly, \(|x|_{\bar{g}}>|x_{t}|_{\bar{g}}/2\) for all \(x\in B_{|x_{t}|_{\bar{g}}/2}^{n}(x_{t})\). Using Lemma 65, we obtain \[\int_{B_{|x_{t}|_{\bar{g}}/2}^{n}(x_{t})\cap\Sigma}|x-x_{t}|_{\bar{g}}^{2-n}\,|x|_{\bar{g}}^{-1-\tau}\,\mathrm{d}\bar{\mu}=O(|x_{t}|_{\bar{g}}^{-\tau}).\] Likewise, \(|x-x_{t}|_{\bar{g}}\geq|x_{t}|_{\bar{g}}/2\) for all \(x\in\mathbb{R}^{n}\setminus B^{n}_{|x_{t}|_{\bar{g}}/2}(x_{t})\).
It follows that \[\int_{(B^{n}_{t}(x_{t})\setminus B^{n}_{|x_{t}|_{\bar{g}}/2}(x_{t}))\cap\Sigma}|x-x_{t}|_{\bar{g}}^{2-n}\,|x|_{\bar{g}}^{-1-\tau}\,\mathrm{d}\bar{\mu}=O(1)\,|x_{t}|_{\bar{g}}^{2-n}\,\int_{B^{n}_{t}(0)\cap\Sigma}|x|_{\bar{g}}^{-1-\tau}\,\mathrm{d}\bar{\mu}.\] Using Lemma 65 again, we find \[\int_{B^{n}_{t}(0)\cap\Sigma}|x|_{\bar{g}}^{-1-\tau}\,\mathrm{d}\bar{\mu}=O(t^{n-2-\tau}).\] Since \(0<\tau\leq n-2\), \[-\tau\,\frac{n-1-\tau}{n-1}\leq-\frac{\tau}{n-1}=(2-n)\,\frac{n-1-\tau}{n-1}+(n-2-\tau).\] We conclude that \[\frac{|B^{n}_{t}(x_{t})\cap\Sigma|_{\bar{g}}}{\omega_{n-1}\,t^{n-1}}\geq 1-O(t^{-\tau/(n-1)}).\] Note that \[|B^{n}_{t}(x_{t})\cap\Sigma|_{\bar{g}}\leq|B^{n}_{t\,(1+t^{-\tau/(n-1)})}(0)\cap\Sigma|_{\bar{g}}.\] Using also Lemma 15, \[\frac{|B^{n}_{t\,(1+t^{-\tau/(n-1)})}(0)\cap\Sigma|_{\bar{g}}}{\omega_{n-1}\,t^{n-1}}\leq\frac{|B^{n}_{t\,(1+t^{-\tau/(n-1)})}(0)\cap\Sigma|_{\bar{g}}}{\omega_{n-1}\,t^{n-1}\,(1+t^{-\tau/(n-1)})^{n-1}}+O(t^{-\tau/(n-1)}).\] The assertion follows from these estimates.

**Lemma 18**.: _As \(s\to\infty\),_ \[\int_{\Sigma\setminus B^{n}_{s}(0)}|x|_{\bar{g}}^{-1-n}\,\bar{g}(x,\bar{\nu})^{2}\,\mathrm{d}\bar{\mu}=O(s^{-\tau/(n-1)}).\]

Proof.: We apply the monotonicity formula (8) with \(x_{0}=0\). In conjunction with Lemma 16 and Lemma 17, letting \(t\to\infty\), we obtain \[\int_{\Sigma\setminus B^{n}_{s}(0)}|x|_{\bar{g}}^{-1-n}\,\bar{g}(x,\bar{\nu})^{2}\,\mathrm{d}\bar{\mu}=O(1)\,\int_{\Sigma\setminus B^{n}_{s}(0)}|x|_{\bar{g}}^{1-n}\,\bar{g}(x,\bar{\nu})\,\bar{H}\,\mathrm{d}\bar{\mu}+O(s^{1-n})\,\int_{B^{n}_{s}(0)\cap\Sigma}\bar{g}(x,\bar{\nu})\,\bar{H}\,\mathrm{d}\bar{\mu}+O(s^{-\tau/(n-1)}).\] By Lemma 13 and Lemma 65, \[\int_{\Sigma\setminus B^{n}_{s}(0)}|x|_{\bar{g}}^{1-n}\,\bar{g}(x,\bar{\nu})\,\bar{H}\,\mathrm{d}\bar{\mu}=O(s^{-\tau})\qquad\text{and}\qquad\int_{B^{n}_{s}(0)\cap\Sigma}\bar{g}(x,\bar{\nu})\,\bar{H}\,\mathrm{d}\bar{\mu}=O(s^{n-1-\tau}).\] The assertion follows from these estimates.

Next, we show that there is only one tangent plane at infinity that can arise in the setting of Lemma 12. To this end, we apply an argument developed by B. White [32, pp. 146-147] to study the uniqueness of tangent planes at isolated singularities of area-minimizing surfaces. This argument has been adapted to study the uniqueness of tangent planes at infinity of certain minimal surfaces in \(\mathbb{R}^{3}\) by P. Gallagher [15, p. 374].
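The exponent bookkeeping in the proof of Lemma 17 can be verified mechanically. The following short Python computation (an illustration for the reader; it is not part of the argument) checks the identity and the inequality displayed above on sample values of \(n\) and \(\tau\); note that equality holds at the endpoint \(\tau=n-2\).

```python
import sympy as sp

n, tau = sp.symbols('n tau', positive=True)

# Identity from the proof of Lemma 17:
#   -tau/(n-1) = (2-n)*(n-1-tau)/(n-1) + (n-2-tau).
lhs = -tau / (n - 1)
rhs = (2 - n) * (n - 1 - tau) / (n - 1) + (n - 2 - tau)
assert sp.simplify(lhs - rhs) == 0

# Inequality -tau*(n-1-tau)/(n-1) <= -tau/(n-1) for 0 < tau <= n-2,
# checked on samples; equality holds when tau = n-2.
for nv in range(3, 8):
    for tv in (sp.Rational(1, 2), sp.Rational(2 * nv - 5, 2), sp.Integer(nv - 2)):
        assert -tv * (nv - 1 - tv) / (nv - 1) <= -tv / sp.Integer(nv - 1)

print("exponent identity and inequality verified")
```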
**Lemma 19**.: _Let \(F:\Sigma\setminus B_{1}^{n}(0)\to S_{1}^{n-1}(0)\) be given by_ \[F(x)=\frac{x}{|x|_{\bar{g}}}.\] _As \(s\to\infty\),_ \[|F(\Sigma\setminus B_{s}^{n}(0))|_{\bar{g}}=O(s^{-\tau/(2\,n-2)}).\]

Proof.: By the area formula, \[|F(\Sigma\setminus B_{s}^{n}(0))|_{\bar{g}}=\int_{\Sigma\setminus B_{s}^{n}(0)}|x|_{\bar{g}}^{-n}\,|\bar{g}(x,\bar{\nu})|\,\mathrm{d}\bar{\mu}.\] By the Cauchy-Schwarz inequality, \[\left(\int_{\Sigma\setminus B_{s}^{n}(0)}|x|_{\bar{g}}^{-n}\,|\bar{g}(x,\bar{\nu})|\,\mathrm{d}\bar{\mu}\right)^{2}\leq\sum_{k=0}^{\infty}\int_{(B_{2^{k+1}\,s}^{n}(0)\setminus B_{2^{k}\,s}^{n}(0))\cap\Sigma}|x|_{\bar{g}}^{-1-n}\,\bar{g}(x,\bar{\nu})^{2}\,\mathrm{d}\bar{\mu}\,\int_{(B_{2^{k+1}\,s}^{n}(0)\setminus B_{2^{k}\,s}^{n}(0))\cap\Sigma}|x|_{\bar{g}}^{1-n}\,\mathrm{d}\bar{\mu}.\] Invoking Lemma 16 and Lemma 18, we obtain \[\left(\int_{\Sigma\setminus B_{s}^{n}(0)}|x|_{\bar{g}}^{-n}\,|\bar{g}(x,\bar{\nu})|\,\mathrm{d}\bar{\mu}\right)^{2}\leq O(1)\,\sum_{k=0}^{\infty}(2^{k}\,s)^{-\tau/(n-1)}=O(s^{-\tau/(n-1)}).\]

**Corollary 20**.: _The tangent planes in Lemma 12 all agree._

Proof.: Suppose, for a contradiction, that \(\pi_{1},\pi_{2}\subset\mathbb{R}^{n}\) are two different tangent planes at infinity that arise as in Lemma 12. Let \(\lambda_{2}>\lambda_{1}>s>1\) be such that \(S_{1}^{n-1}(0)\cap\lambda_{1}^{-1}\,\Sigma\) and \(S_{1}^{n-1}(0)\cap\lambda_{2}^{-1}\,\Sigma\) are close to \(S_{1}^{n-1}(0)\cap\pi_{1}\) and \(S_{1}^{n-1}(0)\cap\pi_{2}\), respectively. Note that by the intermediate value theorem, \(F((B_{\lambda_{2}}^{n}(0)\setminus B_{\lambda_{1}}^{n}(0))\cap\Sigma)\) contains at least two of the four components of \(S_{1}^{n-1}(0)\setminus(\lambda_{1}^{-1}\Sigma\cup\lambda_{2}^{-1}\,\Sigma)\). Using this, it follows that \[\liminf_{s\to\infty}|F(\Sigma\setminus B_{s}^{n}(0))|_{\bar{g}}>0.\] This is not compatible with Lemma 19.

Proof of Proposition 9.: Using Lemma 11, we may assume that (7) holds. Using Lemma 12 and Corollary 20, we see that, after a rotation, there are \(r_{0}>1\) and \(u\in C^{\infty}(\mathbb{R}^{n-1})\) with \[\Sigma\setminus B_{r_{0}}^{n}(0)\subset\{(y,u(y)):y\in\mathbb{R}^{n-1}\}.\] Using Lemma 12, we obtain that \[|y|_{\bar{g}}^{-1}\,|u|+|\bar{\nabla}u|_{\bar{g}}+|y|_{\bar{g}}\,|\bar{\nabla}^{2}u|_{\bar{g}}=o(1). \tag{13}\] Let \(v\in C^{\infty}(\Sigma)\) be given by \(v=\bar{g}(x,\bar{\nu})\). Using the Codazzi equation and Lemma 13, we have \[\bar{\Delta}v+|\bar{h}|_{\bar{g}}^{2}\,v=O(|x|_{\bar{g}}^{-1-\tau}).\] Let \(x\in\Sigma\) and \(r=|x|_{\bar{g}}/4\).
Using the interior \(L^{2}\)-estimate [16, Theorem 9.11] and the Sobolev embedding theorem, recalling that \(3\leq n\leq 7\), we have \[r^{-1}\,\left(r^{1-n}\,\int_{B^{n}_{2\,r}(x)\cap\Sigma}v^{6}\,\mathrm{d}\bar{\mu}\right)^{\frac{1}{6}}\leq O(r^{-1})\,\left(r^{1-n}\,\int_{B^{n}_{3\,r}(x)\cap\Sigma}v^{2}\,\mathrm{d}\bar{\mu}\right)^{\frac{1}{2}}+O(r^{-\tau}).\] By Lemma 18, \[r^{3-n}\,\int_{B^{n}_{3\,r}(x)\cap\Sigma}v^{2}\,\mathrm{d}\bar{\mu}=O(r^{-\tau/(n-1)}).\] It follows that \[r^{-1}\,\left(r^{1-n}\,\int_{B^{n}_{2\,r}(x)\cap\Sigma}v^{6}\,\mathrm{d}\bar{\mu}\right)^{\frac{1}{6}}=O(r^{-\tau/(2\,n-2)}).\] Using the interior \(L^{6}\)-estimate [16, Theorem 9.11] and the Sobolev embedding theorem, we have \[r^{-1}\,\left(r^{1-n}\,\int_{B^{n}_{r}(x)\cap\Sigma}v^{7}\,\mathrm{d}\bar{\mu}\right)^{\frac{1}{7}}=O(r^{-\tau/(2\,n-2)}).\] Finally, using the interior \(L^{7}\)-estimate [16, Theorem 9.11] and the Sobolev embedding theorem, we conclude that \[(\bar{\nabla}v)(x)=O(|x|_{\bar{g}}^{-\tau/(2\,n-2)}).\] Note that \(\bar{\nabla}v=x^{\bar{\top}}\lrcorner\bar{h}.\) By (13), \(|x|_{\bar{g}}=O(|x^{\bar{\top}}|_{\bar{g}})\). In conjunction with Lemma 13, we conclude that \[|x|_{\bar{g}}\,\bar{h}=O(|x|_{\bar{g}}^{-\tau/(2\,n-2)}).\] Using (13), we obtain the improved estimate \[|y|_{\bar{g}}\,|\bar{\nabla}^{2}u|_{\bar{g}}=O(|y|_{\bar{g}}^{-\tau/(2\,n-2)}). \tag{14}\] Let \(\alpha>0\) and suppose that \(|y|_{\bar{g}}\,|\bar{\nabla}^{2}u|_{\bar{g}}=O(|y|_{\bar{g}}^{-\alpha})\). As in [7, p. 16], accounting for the weaker decay assumptions on the Riemannian metric \(g\) here, we rewrite the minimal surface equation as \[|y|_{\bar{g}}\,\bar{\Delta}u=O(|y|_{\bar{g}}\,|\bar{\nabla}u|_{\bar{g}}^{2}\,|\bar{\nabla}^{2}u|_{\bar{g}})+O(|y|_{\bar{g}}^{-\tau}).\] In particular, \[|y|_{\bar{g}}\,|\bar{\Delta}u|=O(|y|_{\bar{g}}^{\max\{-3\,\alpha,-\tau\}}).\] Proceeding as in [7, p. 16], we see that, given \(\varepsilon>0\), there are \(u_{1},\,u_{2}\in C^{\infty}(\mathbb{R}^{n-1})\) such that \(u=u_{1}+u_{2}\) where \(u_{2}\) satisfies \[|y|_{\bar{g}}^{-1}\,|u_{2}|+|\bar{\nabla}u_{2}|_{\bar{g}}+|y|_{\bar{g}}\,|\bar{\nabla}^{2}u_{2}|_{\bar{g}}=O(|y|_{\bar{g}}^{\max\{-3\,\alpha,-\tau\}+\varepsilon})\] and \(u_{1}\) is harmonic with \(u_{1}-a=O(|y|_{\bar{g}}^{3-n})\) for some \(a\in\mathbb{R}\) if \(3<n\leq 7\) and \(u_{1}=O(\log|y|_{\bar{g}})\) if \(n=3\), respectively. Iterating this argument, we obtain (5) from (14).

**Corollary 21**.: _Suppose that \(n-3<\tau\leq n-2\). There holds_ \[\int_{\Sigma}|h|^{2}\,\mathrm{d}\mu<\infty\qquad\text{and}\qquad\int_{\Sigma}|Ric(\nu,\nu)|\,\mathrm{d}\mu<\infty.\]

Proof.: This follows from Proposition 9 and Lemma 64.

For the next lemma, recall the definition (43) of the mass of an asymptotically flat manifold.

**Lemma 22**.: _Suppose that \(n-3<\tau\leq n-2\). Each end of the Riemannian \((n-1)\)-manifold \((\Sigma,g|_{\Sigma})\) is asymptotically flat with mass zero._

Proof.: Fix \(i\in\{1,\ldots,N\}\) and let \(\varphi:\mathbb{R}^{n-1}\to\mathbb{R}^{n}\) be the chart given by \(\varphi(y)=(y,u_{i}(y))\) where \(u_{i}\in C^{\infty}(\mathbb{R}^{n-1})\) is as in Proposition 9. Note that \[(\varphi^{*}g)(e_{a},e_{b})=\delta_{ab}+O(|y|_{\bar{g}}^{-\tau})\] for all \(a,\,b\in\{1,\ldots,n-1\}\). Since \(\tau>n-3=(n-1)-2\), the assertion follows.

## 3. Stability with respect to asymptotically constant variations

In this section, we assume that \((M,g)\) is a Riemannian manifold of dimension \(3\leq n\leq 7\) which is asymptotically flat of rate \(\tau\) where \[n-3<\tau\leq n-2.
\tag{15}\]

Let \(\Omega_{1},\,\Omega_{2},\ldots\subset M\) be isoperimetric regions with \(|\Omega_{k}|\to\infty\) such that \(\Omega_{k}\) converges locally smoothly to a region \(\Omega\subset M\) whose boundary \(\Sigma=\partial\Omega\) is non-compact and area-minimizing. Recall from Proposition 58 that \(\Sigma\) has one non-compact component and that the other components of \(\Sigma\) are contained in the boundary of \(M\). Moreover, after a rotation and passing to a subsequence, there holds \(\lambda(\partial\Omega_{k})^{-1}\,(\Omega_{k}\setminus B_{1})\to B_{1}^{n}(e_{n})\) locally smoothly in \(\mathbb{R}^{n}\setminus\{0\}\). By Proposition 9, \(\Sigma\) is asymptotic to a coordinate hyperplane. Note that the normal of this plane pointing towards \(\Omega\) is necessarily \(e_{n}\). The goal of this section is to show that \(\Sigma\) is stable with respect to asymptotically constant variations, i.e., that \[\int_{\Sigma}(|h|^{2}+Ric(\nu,\nu))\,(1+f)^{2}\,\mathrm{d}\mu\leq\int_{\Sigma}|\nabla f|^{2}\,\mathrm{d}\mu\] for all \(f\in C_{c}^{\infty}(\Sigma)\). To this end, we will study the second variation of area of \(\Sigma_{k}=\partial\Omega_{k}\) with respect to a Euclidean translation; see Figure 1.

Let \(\chi\in C^{\infty}(M,TM)\) be a vector field with \(\chi=e_{n}\) in \(M\setminus B_{2}\) and \(\chi=0\) in \(B_{1}\). Let \(u_{k},\,v_{k}\in C^{\infty}(\Sigma_{k})\) be the functions \[u_{k}=g(\chi,\nu)\qquad\text{and}\qquad v_{k}=-h(\chi^{\top},\chi^{\top}).\]

Figure 1. The hypersurfaces \(\Sigma_{k}=\partial\Omega_{k}\) converge locally uniformly to \(\Sigma=\partial\Omega\). Both the inward-pointing normal of \(\Sigma\) and the re-scaled barycenters of \(\Omega_{k}\) asymptote to the vector \(e_{n}\).
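The specific form of \(v_{k}\) is explained by a standard expansion, which we record here for orientation (computed with respect to the Euclidean background, where \(\chi=e_{n}\) is constant): if the points of a hypersurface are displaced for time \(s\) along a constant vector field \(\chi\), the translated hypersurface is a normal graph whose height function has the Taylor expansion sketched below.

```latex
% Sketch: a Euclidean translation written as a normal graph.
% Differentiating U(s) = g(x + s*chi - y(s), nu(y(s))) twice at s = 0, where
% y(s) is the nearest-point projection onto the hypersurface, and using
% dy/ds|_{s=0} = chi^T together with d(nu)/ds = -h(chi^T, .)^sharp:
\[
  U(x,s)
  = s\,g(\chi,\nu) - \tfrac{1}{2}\,s^{2}\,h(\chi^{\top},\chi^{\top}) + O(s^{3})
  = s\,u_{k} + \tfrac{1}{2}\,s^{2}\,v_{k} + O(s^{3}).
\]
```

This matches the ansatz \(U_{k}(x,s)=s\,(u_{k}+f+\tilde{u}_{k})+\tfrac{1}{2}\,s^{2}\,(v_{k}+\tilde{v}_{k})\) used below, where \(f\), \(\tilde{u}_{k}\), and \(\tilde{v}_{k}\) are lower-order corrections enforcing the volume constraint.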
**Lemma 24**.: _There holds, as \(k\to\infty\),_ \[\lambda(\Sigma_{k})^{2-n}\,\int_{\Sigma_{k}}v_{k}+H\,u_{k}^{2}\,\mathrm{d}\mu =O(\lambda(\Sigma_{k})^{-\tau}).\] Proof.: We continue with the notation introduced in the proof of Lemma 23. Using the area estimate from Lemma 46 and the curvature estimates (47) and (48), we see that \[\int_{B_{2}\cap\Sigma_{k}}v_{k}+H\,u_{k}^{2}\,\mathrm{d}\mu=O(1).\] Similarly, \[\int_{B_{2}\cap\hat{\Sigma}_{k}}-h(\chi^{\top},\chi^{\top})+H\,g(\chi,\nu)^{2}\, \mathrm{d}\mu=O(1).\] Using also Lemma 64, we obtain \[v_{k}+H\,u_{k}^{2}=-\bar{h}(e_{n}^{\top},e_{n}^{\top})+\bar{H}\,\bar{g}(e_{n}, \bar{\nu})^{2}+O(|x|_{\bar{g}}^{-1-\tau}).\] Using Lemma 46, we get \[\int_{\hat{\Sigma}_{k}}|x|_{\bar{g}}^{-1-\tau}\,\mathrm{d}\bar{\mu}=O(\lambda( \Sigma_{k})^{n-2-\tau}).\] Combining these equations, \[\int_{\Sigma_{k}}v_{k}+H\,u_{k}^{2}\,\mathrm{d}\mu=\int_{\hat{\Sigma}_{k}}- \bar{h}(e_{n}^{\top},e_{n}^{\top})+\bar{H}\,\bar{g}(e_{n},\bar{\nu})^{2}\, \mathrm{d}\bar{\mu}+O(\lambda(\Sigma_{k})^{n-2-\tau}).\] Note that, by the first variation formula, \[\int_{\hat{\Sigma}_{k}}-\bar{h}(e_{n}^{\top},e_{n}^{\top})+\bar{H}\,\bar{g}(e _{n},\bar{\nu})^{2}\,\mathrm{d}\bar{\mu}=0.\] The assertion follows from these estimates. Fix \(\eta\in C^{\infty}(\mathbb{R})\) with * \(\eta(t)=0\) if \(t\leq 1\), * \(\eta(t)>0\) if \(1<t<2\), and * \(\eta(t)=0\) if \(t\geq 2\). We define \[\kappa_{k}=\lambda(\Sigma_{k})^{1-n}\,\int_{\Sigma_{k}}\eta\left(\lambda( \Sigma_{k})^{-1}\,|x|_{\bar{g}}\right)\,\mathrm{d}\mu\qquad\text{and}\qquad \kappa=\int_{S_{1}^{n-1}(e_{n})}\eta(|x|_{\bar{g}})\,\mathrm{d}\bar{\mu}.\] Note that \(\kappa>0\). **Lemma 25**.: _Passing to a subsequence, there holds \(\kappa_{k}\to\kappa\)._ Proof.: This follows from Proposition 58. Let \(f\in C^{\infty}_{c}(M)\) be a function whose support is disjoint from the boundary of \(M\). We define \[\tilde{u}_{k}(x)=\begin{cases}\alpha_{k}\,\lambda(\Sigma_{k})^{-\tau}\,\eta \left(\lambda(\Sigma_{k})^{-1}\,|x|_{\bar{g}}\right)&\text{if }x\notin B_{1},\\ 0&\text{else}.\end{cases}\] Here, \(\alpha_{k}\in\mathbb{R}\) is chosen such that \[\int_{\Sigma_{k}}u_{k}+f+\tilde{u}_{k}\,\mathrm{d}\mu=0. \tag{16}\] Using Lemma 23 and Lemma 25, we see that \(\alpha_{k}=O(1)\). Next, we define \[\tilde{v}_{k}(x)=\begin{cases}\beta_{k}\,\lambda(\Sigma_{k})^{-1-\tau}\,\eta( \lambda(\Sigma_{k})^{-1}\,|x|_{\bar{g}})&\text{if }x\notin B_{1},\\ 0&\text{else},\end{cases}\] where \(\beta_{k}\in\mathbb{R}\) is chosen such that \[\int_{\Sigma_{k}}v_{k}+\tilde{v}_{k}+H\,(u_{k}+f+\tilde{u}_{k})^{2}\,\mathrm{d} \mu=0. \tag{17}\] Using Lemma 24, Lemma 25, and (47), we see that \(\beta_{k}=O(1)\). Note that there is \(\varepsilon>0\) such that, for all \(s\in(-\varepsilon,\varepsilon)\), \[\Sigma_{k}(s)=\{\exp_{x}\left(U_{k}(x,s)\,\nu(\Sigma_{k})(x)\right):x\in \Sigma_{k}\}\] where \[U_{k}(x,s)=s\,(u_{k}+f+\tilde{u}_{k})(x)+\tfrac{1}{2}\,s^{2}\,(v_{k}+\tilde{v }_{k})(x)\] is an embedded hypersurface in \(M\) that bounds a region \(\Omega_{k}(s)\). **Lemma 26**.: _There holds_ \[\left.\frac{d}{ds}\right|_{s=0}\lvert\Omega_{k}(s)\rvert=\left.\frac{d^{2}}{ ds^{2}}\right|_{s=0}\lvert\Omega_{k}(s)\rvert=0.\] Proof.: This follows from Lemma 62 using (16) and (17). By Proposition 9, \(\partial B_{r}\) intersects \(\Sigma\) transversally for all \(r>1\) sufficiently large. Increasing \(r>1\) if necessary, we may arrange that \[\operatorname{spt}(f)\subset B_{r}. 
\tag{18}\]

For Proposition 27 below, let \(u\in C^{\infty}(\Sigma)\) be the function given by \(u=g(\chi,\nu)\).

**Proposition 27**.: _There holds_ \[\int_{B_{r}\cap\Sigma}(\lvert h\rvert^{2}+Ric(\nu,\nu))\,(u+f)^{2}\,\mathrm{d}\mu\leq\int_{B_{r}\cap\Sigma}\lvert\nabla(u+f)\rvert^{2}\,\mathrm{d}\mu+O(r^{n-3-\tau}).\]

Proof.: By Lemma 26, the variation \(\{\Sigma_{k}(s)\}_{\lvert s\rvert<\varepsilon}\) is volume-preserving up to second order. Since \(\Omega_{k}\) is isoperimetric, it follows that \[\left.\frac{d^{2}}{ds^{2}}\right|_{s=0}\lvert\Sigma_{k}(s)\rvert_{g}\geq 0.\] By Lemma 62, \[\left.\frac{d^{2}}{ds^{2}}\right|_{s=0}\lvert\Sigma_{k}(s)\rvert_{g}=\int_{\Sigma_{k}}H\,\ddot{U}_{k}+H^{2}\,\dot{U}_{k}^{2}+\lvert\nabla\dot{U}_{k}\rvert^{2}-\left(\lvert h\rvert^{2}+Ric(\nu,\nu)\right)\dot{U}_{k}^{2}\,\mathrm{d}\mu.\] Note that \(\tilde{v}_{k}(x)=\tilde{u}_{k}(x)=0\) if \(\lvert x\rvert_{\bar{g}}\leq\lambda(\Sigma_{k})\). Using also the curvature estimates (47) and (48), that is, \(\lvert x\rvert_{\bar{g}}\,h=O(1)\), we check that \[\begin{split}&\int_{\Sigma_{k}}H\,\ddot{U}_{k}+H^{2}\,\dot{U}_{k}^{2}+\lvert\nabla\dot{U}_{k}\rvert^{2}-\left(\lvert h\rvert^{2}+Ric(\nu,\nu)\right)\dot{U}_{k}^{2}\,\mathrm{d}\mu\\&\qquad=\int_{B_{r}\cap\Sigma_{k}}H\,v_{k}+H^{2}\,(u_{k}+f)^{2}+\lvert\nabla(u_{k}+f)\rvert^{2}-\left(\lvert h\rvert^{2}+Ric(\nu,\nu)\right)(u_{k}+f)^{2}\,\mathrm{d}\mu\\&\qquad\quad+\int_{\Sigma_{k}\setminus B_{r}}H\,v_{k}+H^{2}\,u_{k}^{2}+\lvert\nabla u_{k}\rvert^{2}-\left(\lvert h\rvert^{2}+Ric(\nu,\nu)\right)u_{k}^{2}\,\mathrm{d}\mu+O(\lambda(\Sigma_{k})^{n-3-\tau}).\end{split}\] Note that \(f=0\) on \(\Sigma_{k}\setminus B_{r}\).
Using Lemma 46, we have \[\int_{\Sigma_{k}\setminus B_{r}}Ric(\nu,\nu)\,u_{k}^{2}\,\mathrm{d}\mu=O(1)\,\int_{\Sigma_{k}\setminus B_{r}}|x|_{\bar{g}}^{-2-\tau}\,\mathrm{d}\bar{\mu}=O(\lambda(\Sigma_{k})^{n-3-\tau})+O(r^{n-3-\tau}).\] Similarly, using also Lemma 64, the curvature estimates (47) and (48), and (18), we check that \[\int_{\Sigma_{k}\setminus B_{r}}H\,v_{k}+H^{2}\,u_{k}^{2}+|\nabla u_{k}|^{2}-|h|^{2}\,u_{k}^{2}\,\mathrm{d}\mu=\int_{\Sigma_{k}\setminus B_{r}}\bar{H}\,\bar{v}_{k}+\bar{H}^{2}\,\bar{u}_{k}^{2}+|\bar{\nabla}\bar{u}_{k}|_{\bar{g}}^{2}-|\bar{h}|_{\bar{g}}^{2}\,\bar{u}_{k}^{2}\,\mathrm{d}\bar{\mu}+O(\lambda(\Sigma_{k})^{n-3-\tau})+O(r^{n-3-\tau})\] where \(\bar{u}_{k}\), \(\bar{v}_{k}\in C^{\infty}(\Sigma_{k})\) are given by \(\bar{u}_{k}=\bar{g}(e_{n},\bar{\nu})\) and \(\bar{v}_{k}=-\bar{h}(e_{n}^{\bar{\top}},e_{n}^{\bar{\top}}).\) For instance, \[H\,v_{k}=\bar{H}\,\bar{v}_{k}+O(|x|_{\bar{g}}^{-2-\tau})\] and \[\int_{\Sigma_{k}\setminus B_{r}}|x|_{\bar{g}}^{-2-\tau}\,\mathrm{d}\bar{\mu}=O(\lambda(\Sigma_{k})^{n-3-\tau})+O(r^{n-3-\tau}).\] Since \(\Sigma_{k}\to\Sigma\) locally smoothly, we obtain, using also the integration by parts formula from Lemma 63, \[\int_{\Sigma_{k}\setminus B_{r}}\bar{H}\,\bar{v}_{k}+\bar{H}^{2}\,\bar{u}_{k}^{2}+|\bar{\nabla}\bar{u}_{k}|_{\bar{g}}^{2}-|\bar{h}|_{\bar{g}}^{2}\,\bar{u}_{k}^{2}\,\mathrm{d}\bar{\mu}=O(1)\,\int_{\partial B_{r}\cap\Sigma}|\bar{h}(\Sigma)|_{\bar{g}}\,|e_{n}^{\bar{\top}}|_{\bar{g}}\,\mathrm{d}\bar{\ell}.\] By Proposition 9, \(|\bar{h}(\Sigma)|_{\bar{g}}=O(|y|_{\bar{g}}^{-1-\tau/2})\), \(|e_{n}^{\bar{\top}}|_{\bar{g}}=O(|y|_{\bar{g}}^{-\tau/2})\), and \(|\partial B_{r}\cap\Sigma|_{\bar{g}}=O(r^{n-2})\). Thus, \[\int_{\partial B_{r}\cap\Sigma}|\bar{h}(\Sigma)|_{\bar{g}}\,|e_{n}^{\bar{\top}}|_{\bar{g}}\,\mathrm{d}\bar{\ell}=O(r^{n-3-\tau}).\] The assertion follows from the above estimates using (15), (47), and that \(\Sigma_{k}\to\Sigma\) locally smoothly.

**Proposition 28**.: _Let \(f\in C_{c}^{\infty}(\Sigma)\). There holds_ \[\int_{\Sigma}(|h|^{2}+Ric(\nu,\nu))\,(1+f)^{2}\,\mathrm{d}\mu\leq\int_{\Sigma}|\nabla f|^{2}\,\mathrm{d}\mu.\]

Proof.: We may assume that the support of \(f\) is disjoint from the boundary of \(M\), which is stable. Letting \(r\to\infty\) in Proposition 27 and using Corollary 21 and the dominated convergence theorem, we conclude that \[\int_{\Sigma}(|h|^{2}+Ric(\nu,\nu))\,(u+f)^{2}\,\mathrm{d}\mu\leq\int_{\Sigma}|\nabla(u+f)|^{2}\,\mathrm{d}\mu \tag{19}\] for all \(f\in C_{c}^{\infty}(\Sigma)\). Fix a function \(\psi\in C_{c}^{\infty}(\mathbb{R})\) with \(\psi(t)=1\) if \(t\leq 1\). Let \(\{f_{k}\}_{k=1}^{\infty}\) be the sequence of functions \(f_{k}\in C_{c}^{\infty}(\Sigma)\) given by \[f_{k}(x)=\begin{cases}1-u(x)+f(x)&\text{if }x\in B_{1},\\ \psi(k^{-1}\,|x|_{\bar{g}})\,(1-u(x))+f(x)&\text{else}.\end{cases}\] Note that, on \(\Sigma\setminus B_{1}\), \[|\nabla f_{k}|^{2}\leq O(1)\,\big{(}|x|_{\bar{g}}^{-2}\,|u-1|^{2}+|\nabla u|^{2}+|\nabla f|^{2}\big{)}. \tag{20}\] By Proposition 9, \[|u-1|+|x|_{\bar{g}}\,|\nabla u|=O(|x|_{\bar{g}}^{-\tau/2}).\] It follows that the right-hand side of (20) is integrable. The assertion now follows from applying (19) with \(f_{k}\) in place of \(f\), letting \(k\to\infty\), and using the dominated convergence theorem.
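The role of the assumption \(\tau>n-3\) from (15) in the two preceding proofs is captured by a one-dimensional radial model: on an end of \(\Sigma\) that is asymptotic to a hyperplane, \(\mathrm{d}\bar{\mu}\sim r^{n-2}\,\mathrm{d}r\) while the integrands in Corollary 21 and on the right-hand side of (20) decay like \(r^{-2-\tau}\), so integrability reduces to the convergence of \(\int_{1}^{\infty}r^{n-4-\tau}\,\mathrm{d}r\). The following Python computation (an illustration only, not part of the proof) confirms that this model integral is finite precisely when \(\tau>n-3\):

```python
import sympy as sp

r = sp.symbols('r', positive=True)

# The radial model integral int_1^oo r^(n-4-tau) dr is finite iff tau > n-3.
for n in (4, 5, 6, 7):
    for tau in (sp.Rational(10 * (n - 3) - 1, 10),   # slightly below n-3
                sp.Rational(10 * (n - 3) + 1, 10)):  # slightly above n-3
        val = sp.integrate(r ** (n - 4 - tau), (r, 1, sp.oo))
        print(f"n = {n}, tau = {tau}: integral = {val}")
```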
**Remark 29**.: _Proposition 28 holds for all \(f\in H^{1}_{loc}(\Sigma)\) for which there are \(f_{1},\,f_{2},\ldots\in C^{\infty}_{c}(\Sigma)\) with \(\nabla f_{k}\to\nabla f\) in \(L^{2}(\Sigma)\), as can be seen using the Sobolev inequality._

The following proposition is a consequence of Proposition 28 using arguments of R. Schoen and S.-T. Yau [25] and of A. Carlotto [7].

**Proposition 30**.: _Suppose that \(R\geq 0\) along the non-compact component of \(\Sigma\). Then the non-compact component of \(\Sigma\) is totally geodesic and isometric to flat \(\mathbb{R}^{n-1}\). Moreover, there holds \(R=\text{Ric}(\nu,\nu)=0\) along the non-compact component of \(\Sigma\)._

Proof.: Let \(\Sigma^{0}\) be the non-compact component of \(\Sigma\). Arguing as in the proof of [7, Theorem 2] using Lemma 22 and the stability with respect to asymptotically constant variations asserted in Proposition 28, we see that the scalar curvature of \((\Sigma^{0},g|_{\Sigma^{0}})\) vanishes. By the rigidity statement of the positive mass theorem [23, Lemma 3 and Proposition 2], \((\Sigma^{0},g|_{\Sigma^{0}})\) is isometric to flat \(\mathbb{R}^{n-1}\). As explained in [26, p. 35], Proposition 28 implies that \[\int_{\Sigma^{0}}|h|^{2}+Ric(\nu,\nu)\,\mathrm{d}\mu\leq 0.\] Using the Gauss equation and that \(\Sigma^{0}\) is scalar flat, we have \[|h|^{2}+\text{Ric}(\nu,\nu)=\frac{1}{2}\,(|h|^{2}+R).\] In conjunction with \(R\geq 0\), we obtain that \(h=0\) and that \(R=Ric(\nu,\nu)=0\) along \(\Sigma^{0}\).

## 4. Proof of Theorem 2

In this section, \((M,g)\) is a Riemannian manifold of dimension \(3<n\leq 7\) that is asymptotically flat of rate \(\tau>n-3\) and which has non-negative scalar curvature and positive mass. We defer the proof of the following lemma to the end of this section.

**Lemma 31**.: _Let \(p_{1},\,p_{2}\) be points in the interior of \(M\). There exists \(U\) open and compactly contained in the interior of \(M\) with \(p_{1},\,p_{2}\in U\) such that the following holds. For all \(r>0\) sufficiently small, there exist \(W\subset U\) open with \(p_{1},\,p_{2}\in W\) and a family \(\{g_{t}\}_{t\in(0,1)}\) of Riemannian metrics \(g_{t}\) on \(M\) such that_

\[\circ\ g_{t}\to g\text{ smoothly as }t\to 0, \tag{21}\]
\[\circ\ g_{t}=g\text{ in }M\setminus W, \tag{22}\]
\[\circ\ g_{t}<g\text{ in }W,\text{ and} \tag{23}\]
\[\circ\ R(g_{t})>0\text{ in }\{x\in W:\operatorname{dist}_{g}(x,p_{2})>r\}. \tag{24}\]

Let \(\Omega_{1},\,\Omega_{2},\ldots\subset M\) be isoperimetric regions with \(|\Omega_{k}|\to\infty\) and such that \(\Omega_{k}\) converges locally smoothly to a region \(\Omega\subset M\) whose boundary \(\Sigma=\partial\Omega\) is non-compact and area-minimizing.

**Proposition 32**.: _Let \(p\) be a point in the interior of \(M\).
There exist \(U\) open and compactly contained in the interior of \(M\), Riemannian metrics \(\tilde{g}_{1},\,\tilde{g}_{2},\ldots\) on \(M\), and regions \(\tilde{\Omega}_{1},\,\tilde{\Omega}_{2},\ldots\) and \(\Omega_{p}\subset M\) with the following properties._

* \(\operatorname{spt}(\tilde{g}_{k}-g)\subset U\) _for all_ \(k\) _and_ \(\tilde{g}_{k}\to g\) _smoothly._
* \(\tilde{\Omega}_{k}\) _is an isoperimetric region in_ \((M,\tilde{g}_{k})\) _and_ \(|\tilde{\Omega}_{k}|_{\tilde{g}_{k}}\to\infty\)_._
* \(\tilde{\Omega}_{k}\to\Omega_{p}\) _locally smoothly._
* \(\partial\Omega_{p}\) _is a non-compact area-minimizing boundary in_ \((M,g)\) _that is stable with respect to asymptotically constant variations and_ \(p\in\partial\Omega_{p}\)_._

Proof.: We first assume that the boundary of \(M\) is empty. We choose \(p_{1}\in\Sigma\) and let \(p_{2}=p\). Let \(U\subset M\) be the open set from Lemma 31 and \(r>0\) be small enough so that the conclusion of Lemma 31 holds. According to Lemma 31, there is \(W\subset U\) open and a family \(\{g_{t}\}_{t\in(0,1)}\) of Riemannian metrics \(g_{t}\) on \(M\) satisfying (21), (22), (23), and (24). Let \(t\in(0,1)\). Recall the isoperimetric profile \(A_{g}\) defined in (49). Let \(c(t)=|W|_{g}-|W|_{g_{t}}\). Note that * \(c(t)>0\) by (23) and * \(c(t)=o(1)\) as \(t\to 0\) by (21). Since \(\partial\Omega_{k}\to\Sigma\) locally smoothly and \(\Sigma\cap W\neq\emptyset\), it follows from (23) that there is \(\varepsilon(t)>0\) such that, for all \(k\) sufficiently large, \[|\partial\Omega_{k}|_{g_{t}}+\varepsilon(t)<|\partial\Omega_{k}|_{g}. \tag{25}\] Using Lemma 60, we see that \[|\partial\Omega_{k}|_{g}=A_{g}(|\Omega_{k}|_{g})\leq A_{g}(V)+o(1) \tag{26}\] for any amount of volume \(V\) such that \(|\Omega_{k}|_{g}-c(t)\leq V\leq|\Omega_{k}|_{g}+c(t)\). By Lemma 61, for every \(k\) sufficiently large, there exists an isoperimetric region \(\Omega_{k}(t)\subset M\) with respect to \(g_{t}\) such that \(|\Omega_{k}(t)|_{g_{t}}=|\Omega_{k}|_{g_{t}}\). We claim that there is \(\tilde{W}\Subset W\) open such that, for all \(k\) sufficiently large, \[\tilde{W}\cap\partial\Omega_{k}(t)\neq\emptyset. \tag{27}\] To see this, assume the contrary. Passing to a subsequence and using (22) and Lemma 46, we see that \[|\partial\Omega_{k}(t)|_{g_{t}}=|\partial\Omega_{k}(t)|_{g}+o(1). \tag{28}\] Using (23), we have \[|\Omega_{k}(t)|_{g}\leq|\Omega_{k}(t)|_{g_{t}}+c(t)=|\Omega_{k}|_{g_{t}}+c(t)\leq|\Omega_{k}|_{g}+c(t).\] Similarly, \(|\Omega_{k}(t)|_{g}\geq|\Omega_{k}|_{g}-c(t).\) Using (26) with \(V=|\Omega_{k}(t)|_{g}\), we conclude that \[|\partial\Omega_{k}(t)|_{g}\geq|\partial\Omega_{k}|_{g}-o(1).\] This is not compatible with (25) and (28). By Proposition 58, \(\Omega_{k}(t)\) converges locally smoothly to a region \(\Omega(t)\) with non-compact boundary \(\Sigma(t)\) that is area-minimizing with respect to \(g_{t}\). Using (27), we see that \(\Sigma(t)\) intersects the closure of \(\tilde{W}\). Using Proposition 30 and (24), it follows that, in fact, \(\{x\in M:\operatorname{dist}(x,p)<r\}\cap\Sigma(t)\neq\emptyset\). Passing to a subsequence, we see that, as \(t\to 0\), \(\Omega(t)\) converges locally smoothly to a region \(\Omega_{p}(r)\subset M\) with non-compact boundary \(\Sigma_{p}(r)\) that is area-minimizing with respect to \(g\) such that \(\{x\in M:\operatorname{dist}(x,p)\leq r\}\cap\Sigma_{p}(r)\neq\emptyset\).
Moreover, passing to a subsequence, we see that, as \(r\to 0\), \(\Omega_{p}(r)\) converges locally smoothly to a region \(\Omega_{p}\) with non-compact boundary \(\Sigma_{p}\) that is area-minimizing with respect to \(g\) and such that \(p\in\Sigma_{p}\). By a diagonal argument, we see that there are Riemannian metrics \(\tilde{g}_{1},\,\tilde{g}_{2},\ldots\) on \(M\) and regions \(\tilde{\Omega}_{1},\,\tilde{\Omega}_{2},\ldots\subset M\) with the asserted properties. Repeating the argument that led to Proposition 28, we see that \(\Sigma_{p}\) is stable with respect to asymptotically constant variations in the sense of Proposition 28. This completes the proof in the case where the boundary of \(M\) is empty. In the case where the boundary of \(M\) is an outermost minimal surface, we note that the boundary of \(M\) is also an outermost minimal surface with respect to \(g_{t}\) for all sufficiently small \(t>0\). The rest of the proof only requires formal modifications.

Proof of Theorem 2.: Let \((M,g)\) be asymptotically flat of rate \(\tau>n-3\) with non-negative scalar curvature. Suppose that there are isoperimetric regions \(\Omega_{1},\,\Omega_{2},\ldots\) in \((M,g)\) with \(|\Omega_{k}|\to\infty\) such that \(\Omega_{k}\) converges locally smoothly to a region \(\Omega\subset M\) whose boundary \(\Sigma=\partial\Omega\) is non-compact and area-minimizing. Suppose, for a contradiction, that \((M,g)\) has positive mass \(m\). We first assume that the boundary of \(M\) is empty. Our goal is to show that the curvature tensor \(Rm\) vanishes everywhere. Let \(p\in M\). We first assume that \(p\in\Sigma\). Using Proposition 32, Proposition 30, and standard convergence results from geometric measure theory, we see that there are mutually distinct connected non-compact area-minimizing boundaries \(\Sigma_{1},\,\Sigma_{2},\ldots\subset M\) such that

* \(\Sigma_{k}\) is isometric to flat \(\mathbb{R}^{n-1}\) for all \(k\),
* \(\Sigma_{k}\) is totally geodesic for all \(k\), and
* \(\Sigma_{k}\to\Sigma\) locally smoothly.

Let \(W,\,X,\,Y,\,Z\) be tangent fields along \(\Sigma\). Note that \[Rm(W,\,X,\,Y,\,Z)=0. \tag{29}\] Indeed, this follows from the Gauss equation, using that \(\Sigma\) is isometric to flat \(\mathbb{R}^{n-1}\) and totally geodesic. Similarly, using the Codazzi equation, we obtain \[Rm(W,\,X,\,Y,\nu)=0. \tag{30}\] By the Hopf maximum principle, \(\Sigma\) and \(\Sigma_{k}\) are either disjoint or intersect transversally. In the latter case, \(Rm=0\) along the intersection. We may therefore assume that there is a neighborhood \(U\) of \(p\) such that \(U\cap\Sigma\cap\Sigma_{k}=\emptyset\) for all \(k\geq 1\). Since \(\Sigma\) and \(\Sigma_{k}\) are totally geodesic, the components of \(\Sigma\cap\Sigma_{k}\) are totally geodesic and therefore hyperplanes of \(\Sigma\). Since \(\Sigma\) and \(\Sigma_{k}\) are embedded, the components of \(\Sigma\cap\Sigma_{k}\) must be parallel. It follows that, passing to a subsequence, there are three cases.

* For every \(k\), \(\Sigma\cap\Sigma_{k}=\emptyset\).
* For every \(k\), \(\Sigma\cap\Sigma_{k}\) consists of a single hyperplane in \(\Sigma\).
* For every \(k\), \(\Sigma\cap\Sigma_{k}\) consists of at least two parallel hyperplanes in \(\Sigma\).

In the second and the third case, let \(\Sigma_{k}^{0}\) be the closure of the component of \(\Sigma\setminus\Sigma_{k}\) that contains \(p\). Note that \(\Sigma_{k}^{0}\) is isometric to a half-space in the second case and isometric to a slab in the third case.
Since \(U\cap\Sigma\cap\Sigma_{k}=\emptyset\), it follows that \(\liminf_{k\to\infty}\operatorname{dist}(p,\partial\Sigma_{k}^{0})>0\). In the first case, we may argue exactly as in [8, p. 993]. Specifically, we may represent \(\Sigma_{k}\) as the graph of a positive function \(u_{k}\) over larger and larger domains, exhausting \(\Sigma\) as \(k\to\infty\). By the Harnack inequality [16, SS8.8], \(u_{k}\) is bounded by a multiple of \(u_{k}(p)\) locally in \(\Sigma\) as \(k\to\infty\). Using the first variation of the second fundamental form and proceeding as in [30, p. 333], we obtain a positive function \(f\in C^{\infty}(\Sigma)\) such that \[\nabla_{X,Y}^{2}f+Rm(X,\nu,\nu,Y)\,f=0 \tag{31}\] for all tangent fields \(X\), \(Y\) of \(\Sigma\). Tracing (31) and using that \(Ric(\nu,\nu)=0\) along \(\Sigma\), we see that \(f\) is harmonic. By the Liouville theorem, \(f\) is equal to a constant. It follows that \(Rm(X,\nu,\nu,Y)=0\). In conjunction with (29) and (30), we conclude that \(Rm=0\) along \(\Sigma\) and in particular at \(p\). In the second case, if \(\limsup_{k\to\infty}\operatorname{dist}(p,\partial\Sigma_{k}^{0})=\infty\), we may argue as in the first case. If \(\operatorname{dist}(p,\partial\Sigma_{k}^{0})=O(1)\), then, passing to a subsequence, we may assume that \(\Sigma_{k}^{0}\) converges locally smoothly to a half-space \(\Sigma^{0}\subset\Sigma\). As before, we may represent \(\Sigma_{k}^{0}\) as the graph of a smooth function \(u_{k}\) over larger and larger domains, exhausting \(\Sigma^{0}\) as \(k\to\infty\). Arguing as in the first case, using also the Boundary Harnack inequality (see, e.g., [6, Theorem 11.5]), we obtain a harmonic function \(f\in C^{\infty}(\Sigma^{0})\) that satisfies (31) in \(\Sigma^{0}\), \(f=0\) on \(\partial\Sigma^{0}\), and \(f>0\) away from \(\partial\Sigma^{0}\). By [5, Theorem I], \(f\) is a linear function. As before, it follows that \(Rm=0\) along \(\Sigma\) and in particular at \(p\). Finally, suppose for a contradiction that the third case arises. We may assume that, passing to a subsequence, \(\Sigma_{k}^{0}\) converges locally to a slab \(\Sigma^{0}\subset\Sigma\). As in the previous case, we obtain a harmonic function \(f\in C^{\infty}(\Sigma^{0})\) that satisfies (31) in \(\Sigma^{0}\), \(f=0\) on \(\partial\Sigma^{0}\), and \(f>0\) away from \(\partial\Sigma^{0}\). Using (31), we see that \(|\nabla f|\) is bounded from above by a positive constant on \(\partial\Sigma^{0}\). This is not compatible with Lemma 66. Now, let \(p\in M\setminus\Sigma\). By Proposition 32, there exists a non-compact area-minimizing boundary \(\Sigma_{p}\) with \(p\in\Sigma_{p}\) that is stable with respect to asymptotically constant variations. Repeating the argument above with \(\Sigma_{p}\) in place of \(\Sigma\), we see that \(Rm=0\) along \(\Sigma_{p}\). Since \((M,g)\) is asymptotically flat, it follows that \((M,g)\) is isometric to flat \(\mathbb{R}^{n}\). This is not compatible with \(m>0\). This completes the proof in the case where the boundary of \(M\) is empty. The case where the boundary of \(M\) is an outermost minimal surface only requires formal modifications. Proof of Corollary 4.: Let \((M,g)\) be asymptotically flat of rate \(\tau>n-3\) with non-negative scalar curvature and positive mass \(m\). 
Suppose, for a contradiction, that there are isoperimetric regions \(\Omega_{1}\), \(\Omega_{2},\ldots\) in \((M,g)\) with \(|\Omega_{k}|\to\infty\) and a compact set \(K\subset M\) disjoint from the boundary of \(M\) such that \(K\cap\partial\Omega_{k}\neq\emptyset\) for all \(k\). By Proposition 58, \(\Omega_{k}\) converges locally smoothly to a region \(\Omega\subset M\) whose boundary \(\Sigma=\partial\Omega\) is non-compact and area-minimizing. By Theorem 2, \((M,g)\) is isometric to flat \(\mathbb{R}^{n}\). This is not compatible with \(m>0\).

Proof of Lemma 31.: Arguing as in [18, p. 21], we see that there is \(r_{0}>0\) depending only on \((M,g)\) with the following property. Given \(0<r<r_{0}\) and \(q\in M\), there exists a function \(v_{r,q}\in C^{\infty}(M)\) satisfying

\[\circ\ v_{r,q}=0\text{ in }\{x\in M:\operatorname{dist}_{g}(x,q)\geq 6\,r\}, \tag{32}\]
\[\circ\ v_{r,q}<0\text{ in }\{x\in M:\operatorname{dist}_{g}(x,q)<6\,r\},\text{ and} \tag{33}\]
\[\circ\ \Delta v_{r,q}<0\text{ in }\{x\in M:r<\operatorname{dist}_{g}(x,q)<6\,r\}. \tag{34}\]

Decreasing \(r>0\) if necessary, we may choose points \(q_{1},\dots,q_{N}\in M\) with \(q_{1}=p_{1}\) and \(q_{N}=p_{2}\) such that \[\{x\in M:\operatorname{dist}(x,q_{i})\leq r\}\subset\{x\in M:3\,r\leq\operatorname{dist}(x,q_{i+1})\leq 5\,r\}\text{ for all }i\leq N-1\text{ and }\] \[\{x\in M:\operatorname{dist}(x,q_{i})\leq 6\,r\}\text{ is disjoint from the boundary of }M\text{ for all }1\leq i\leq N; \tag{35}\] see Figure 2. Define \(a_{1},\dots,a_{N}\) recursively by \(a_{1}=1\) and, for \(i=2,\dots,N\), \[a_{i}=1+\frac{\sup\{\Delta v_{r,q_{i-1}}(x):\operatorname{dist}(x,q_{i-1})<r\}}{\big{|}\inf\{\Delta v_{r,q_{i}}(x):3\,r\leq\operatorname{dist}(x,q_{i})\leq 5\,r\}\big{|}}\,a_{i-1}.\] Let \[W=\bigcup_{i=1}^{N}\{x\in M:\operatorname{dist}(x,q_{i})<6\,r\}\qquad\text{and}\qquad v=\sum_{i=1}^{N}a_{i}\,v_{r,q_{i}}.\]

Figure 2. An illustration of the construction in Lemma 31. The open set \(W\) is bounded by the solid black line. The function \(v\) is super-harmonic outside of the hatched region.

Note that * \(v=0\) in \(M\setminus W\) by (32), * \(v<0\) in \(W\) by (33), and * \(\Delta v<0\) in \(\{x\in W:\operatorname{dist}(x,p_{2})\geq r\}\) by (34) and (35). For all \(\delta>0\) sufficiently small, the Riemannian metrics \[g_{t}=(1+t\,\delta\,v)^{\frac{4}{n-2}}\,g\] where \(t\in(0,1)\) satisfy the properties asserted in the lemma.

## 5. Proof of Theorem 5

Let \(3<n\leq 7\) and \((M,g)\) be spatial Schwarzschild with mass \(m=2\), i.e., \[M=\{x\in\mathbb{R}^{n}:|x|_{\bar{g}}\geq 1\}\qquad\text{and}\qquad g=\phi^{\frac{4}{n-2}}\,\bar{g}\] where \(\phi\in C^{\infty}(M)\) is given by \[\phi(x)=1+|x|_{\bar{g}}^{2-n}.\] The goal of this section is to show that there are infinitely many mutually disjoint non-compact area-minimizing hypersurfaces in \((M,g)\). Let \(t\), \(s>0\). For the next lemma, we choose the following orientations.

* \(M\cap(\mathbb{R}^{n-1}\times\{t\})\) is oriented by the normal in direction of \(-e_{n}\).
* \(M\cap(S_{s}^{n-2}(0)\times\mathbb{R})\) is oriented by the normal pointing away from the vertical axis.

**Lemma 33**.: _Let \(t\), \(s>0\).
The following hold._

* \(M\cap(\mathbb{R}^{n-1}\times\{t\})\) _is strictly mean convex._
* \(M\cap(S_{s}^{n-2}(0)\times\mathbb{R})\) _is strictly mean convex._
* \(S_{1}^{n-1}(0)\) _is minimal._

Let \(r>2\), \(z>0\), and \(\Sigma_{r,z}\) be the least area hypersurface in \((M,g)\) with \[\partial\Sigma_{r,z}=S_{r}^{n-2}(0)\times\{z\}.\] By the convex hull property and Lemma 33, \(\Sigma_{r,z}\) is a vertical graph with finite slope near \(\partial\Sigma_{r,z}\).

**Lemma 34**.: \(\Sigma_{r,z}\) _is axially symmetric with respect to \(e_{n}\)._

Proof.: Let \(\pi\subset\mathbb{R}^{n}\) be a hyperplane through the origin with \(e_{n}\in\pi\). Let \(\Pi:\mathbb{R}^{n}\to\mathbb{R}^{n}\) be the reflection through \(\pi\), \(\mathbb{R}^{n}_{\pm}\) be the components of \(\mathbb{R}^{n}\setminus\pi\), and \(\Sigma^{\pm}_{r,z}=\mathbb{R}^{n}_{\pm}\cap\Sigma_{r,z}\). Note that \(M\cap\pi\) is minimal in \((M,g)\). By the Hopf maximum principle, it follows that \(\Sigma_{r,z}\) and \(\pi\) intersect transversally. In particular, \[|\Sigma^{+}_{r,z}|+|\Sigma^{-}_{r,z}|=|\Sigma_{r,z}|. \tag{36}\] Moreover, \(\tilde{\Sigma}_{r,z}=\Sigma^{+}_{r,z}\cup\Pi(\Sigma^{+}_{r,z})\cup(\Sigma_{r,z}\cap\pi)\) is a Lipschitz surface with \(\partial\tilde{\Sigma}_{r,z}=\partial\Sigma_{r,z}\). Using that \(g\) is rotationally symmetric, the area-minimizing property of \(\Sigma_{r,z}\), and (36), we conclude that \(2\,|\Sigma^{+}_{r,z}|=2\,|\Sigma^{-}_{r,z}|=|\Sigma_{r,z}|\). It follows that \(|\tilde{\Sigma}_{r,z}|=|\Sigma_{r,z}|\). Using again that \(\Sigma_{r,z}\) is area-minimizing, we see that \(\tilde{\Sigma}_{r,z}\) intersects \(\pi\) orthogonally. In particular, \(\tilde{\Sigma}_{r,z}\) is smooth. By unique continuation [3, p. 235], \(\tilde{\Sigma}_{r,z}=\Sigma_{r,z}\). The assertion follows.

**Lemma 35**.: \(\Sigma_{r,z}\setminus\partial\Sigma_{r,z}\) _is contained in the cylinder \(B_{r}^{n-1}(0)\times(z,\infty)\)._

Proof.: This follows from Lemma 33 and the maximum principle.

**Lemma 36**.: \(\Sigma_{r,z}\) _is a vertical graph over \(\{y\in\mathbb{R}^{n-1}:|y|_{\bar{g}}\leq r\}\)._

Proof.: Suppose, for a contradiction, that \(\Sigma_{r,z}\setminus\partial\Sigma_{r,z}\) is not a graph over \(B_{r}^{n-1}(0)\). Using Lemma 34 and Lemma 35, it follows that \(\Sigma_{r,z}\) touches \(S_{s}^{n-2}(0)\times\mathbb{R}\) from the inside for some \(s\in(0,r)\). This is not compatible with Lemma 33 and the maximum principle.

By Lemma 34 and Lemma 36, there is \(f_{r,z}\in C^{\infty}(\mathbb{R})\) with \(f_{r,z}(r)=z\) and \(f_{r,z}^{\prime}(0)=0\) such that \[\Sigma_{r,z}=\{(y,f_{r,z}(|y|_{\bar{g}})):y\in\mathbb{R}^{n-1}\text{ with }|y|_{\bar{g}}\leq r\}.\]

**Lemma 37**.: _There holds, for all \(t\in(0,r)\),_ \[\left(t^{n-2}\frac{f_{r,z}^{\prime}(t)}{\sqrt{1+f_{r,z}^{\prime}(t)^{2}}}\right)^{\prime}=-\frac{2\left(n-1\right)}{1+(t^{2}+f_{r,z}(t)^{2})^{-\frac{n-2}{2}}}\,\frac{t^{n-2}}{(t^{2}+f_{r,z}(t)^{2})^{\frac{n}{2}}}\,\frac{f_{r,z}(t)-f_{r,z}^{\prime}(t)\,t}{\sqrt{1+f_{r,z}^{\prime}(t)^{2}}}.\]

Proof.: This follows from the conformal transformation formula of the mean curvature, \[\phi^{\frac{2}{n-2}}\,H=\bar{H}+\frac{2\left(n-1\right)}{n-2}\phi^{-1}\,\bar{D}_{\bar{\nu}}\phi, \tag{37}\] using that \(\Sigma_{r,z}\) is minimal. Here, \(\bar{\nu}\) is the Euclidean unit normal of \(\Sigma_{r,z}\) pointing in direction of \(e_{n}\).
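For the reader who wishes to experiment with the ODE from Lemma 37, the following Python sketch integrates it numerically in the case \(n=4\), using the flux variable \(w=t^{n-2}\,f'/\sqrt{1+(f')^{2}}\). The initial height \(f_{0}\) and the integration range are arbitrary choices made for illustration; the computation is not part of the proofs.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Radial minimal-graph ODE from Lemma 37 in spatial Schwarzschild, n = 4.
# State: y = (f, w) with w = t^(n-2) f' / sqrt(1 + f'^2).
n = 4
f0 = 2.0  # hypothetical initial height on the axis

def rhs(t, y):
    f, w = y
    q = np.clip(w / t ** (n - 2), -0.999999, 0.999999)
    fp = q / np.sqrt(1.0 - q * q)  # recover f' from the flux variable
    rho2 = t * t + f * f
    dw = (-2.0 * (n - 1) / (1.0 + rho2 ** (-(n - 2) / 2.0))
          * t ** (n - 2) / rho2 ** (n / 2.0)
          * (f - fp * t) / np.sqrt(1.0 + fp * fp))
    return [fp, dw]

sol = solve_ivp(rhs, (1e-6, 50.0), [f0, 0.0], rtol=1e-9, atol=1e-12,
                dense_output=True)
for t in (1.0, 5.0, 10.0, 25.0, 50.0):
    print(f"t = {t:5.1f}:  f(t) = {sol.sol(t)[0]:.6f}")
# The profile decreases and flattens, in line with the height bound
# established in Lemma 38 below.
```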
**Lemma 38**.: _There holds_ \[0\leq\sup_{z>0}\limsup_{r\to\infty}\left[f_{r,z}(4\left(n-1\right))-z\right]<\infty.\] _Moreover, as \(z\to 0\),_ \[\limsup_{r\to\infty}f_{r,z}(z^{-2})=z+o(z).\] Proof.: Using Lemma 33 and the maximum principle, we see that \(f_{r,z}^{\prime}(t)\leq 0\) for all \(t\in(0,r)\). Using Lemma 37 and the Cauchy-Schwarz inequality, we have, for all \(t\in(0,r)\), \[\left(t^{n-2}\,\frac{f_{r,z}^{\prime}(t)}{\sqrt{1+f_{r,z}^{\prime}(t)^{2}}} \right)^{\prime}\geq-2\left(n-1\right)\frac{t^{n-2}}{(t^{2}+f_{r,z}(t)^{2})^{ \frac{n-1}{2}}}.\] Integrating, we obtain \[\frac{f_{r,z}^{\prime}(t)}{\sqrt{1+f_{r,z}^{\prime}(t)^{2}}}\geq-2\left(n-1 \right)t^{2-n}\,\int_{0}^{t}\frac{s^{n-2}}{(s^{2}+f_{r,z}(s)^{2})^{\frac{n-1} {2}}}\,\mathrm{d}s. \tag{38}\] Note that \(s^{2}+f_{r,z}(s)^{2}\geq 1\). Consequently, for all \(t\geq 1\), \[\int_{0}^{t}\frac{s^{n-2}}{(s^{2}+f_{r,z}(s)^{2})^{\frac{n-1}{2}}}\,\mathrm{d }s\leq\int_{0}^{1}\,\mathrm{d}s+\int_{1}^{t}\,s^{-1}\,\mathrm{d}s=1+\log(t). \tag{39}\] It follows that \[2\left(n-1\right)t^{2-n}\,\int_{0}^{t}\frac{s^{n-2}}{(s^{2}+f_{r,z}(s)^{2})^{ \frac{n-1}{2}}}\,\mathrm{d}s\leq\frac{1}{2}\] provided that \(t\geq 4\,(n-1)\). Note that \[\frac{1}{2}\,f_{r,z}^{\prime}(t)\geq\frac{f_{r,z}^{\prime}(t)}{\sqrt{1+f_{r,z}^{ \prime}(t)^{2}}}\qquad\text{provided that}\qquad\frac{f_{r,z}^{\prime}(t)}{\sqrt{1+f_{r,z}^{\prime}(t)^{2}}}\geq-\frac{1}{2}.\] In conjunction with (38) and (39), we obtain \[f_{r,z}^{\prime}(t)\geq-4\,(n-1)\,t^{2-n}\,(\log(t)+1)\] provided that \(t\geq 4\,(n-1)\). Since \(f_{r,z}(r)=z\), we conclude that \[z\leq f_{r,z}(4\,(n-1))\leq z+4\,(n-1)\,\int_{4\,(n-1)}^{r}t^{2-n}\,(\log(t)+1) \,\mathrm{d}t\] for all \(r\geq 4\,(n-1)\). Note that, since \(n\geq 4\), \[\int_{4\,(n-1)}^{r}t^{2-n}\,(\log(t)+1)\,\mathrm{d}t\leq\int_{4\,(n-1)}^{ \infty}t^{2-n}\,(\log(t)+1)\,\mathrm{d}t<\infty.\] Likewise, we obtain, as \(z\to 0\), \[\limsup_{r\to\infty}f_{r,z}(z^{-2})\leq z+4\,(n-1)\,\int_{z^{-2}}^{\infty}t^{2 -n}\,(\log(t)+1)\,\mathrm{d}t=z+o(z).\] The assertion follows. **Lemma 39**.: _Let \(z>0\). There exists a sequence \(\{r_{k}\}_{k=1}^{\infty}\) with \(r_{k}\to\infty\) such that \(\Sigma_{z,r_{k}}\) converges locally smoothly to a non-compact area-minimizing hypersurface \(\Sigma_{z}\subset M\) that is axially symmetric with respect to \(e_{n}\) and satisfies_ \[\inf\{x^{n}:x\in\Sigma_{z}\}=z. \tag{40}\] Proof.: This follows from Lemma 35, Lemma 38, and standard compactness results from geometric measure theory. **Proposition 40**.: _The following hold._ * _As_ \(z\to 0\)_,_ \(\Sigma_{z}\) _converges locally smoothly to a non-compact area-minimizing hypersurface_ \(\Sigma_{0}\subset M\) _with_ \(\inf\{x^{n}:x\in\Sigma_{0}\}=0\)_._ * _As_ \(z\to\infty\)_,_ \(-z+\Sigma_{z}\) _converges locally smoothly to a Euclidean plane._ * _The family_ \(\{\Sigma_{z}\}_{z>0}\) _forms a smooth foliation of the component of_ \(M\setminus\Sigma_{0}\) _that lies above_ \(\Sigma_{0}\)_._ * \(\Sigma_{z}\) _is not stable with respect to asymptotically constant perturbations for any_ \(z\geq 0\)_._ Proof.: Let \(z_{1}\), \(z_{2}>0\) with \(z_{1}\neq z_{2}\). Since \(\Sigma_{z_{1}}\) and \(\Sigma_{z_{2}}\) are area-minimizing and axially symmetric with respect to \(e_{n}\), \(\Sigma_{z_{1}}\) and \(\Sigma_{z_{2}}\) are disjoint. Using also (40), we see that, as \(z\to 0\), \(\Sigma_{z}\) converges locally smoothly to a non-compact area-minimizing hypersurface \(\Sigma_{0}\subset M\) with \(\inf\{x^{n}:x\in\Sigma_{0}\}=0\). 
Moreover, using Lemma 38, we see that, as \(z\to\infty\), \(-z+\Sigma_{z}\) converges locally smoothly to a non-compact hypersurface \(\Sigma\) that is area-minimizing with respect to \(\bar{g}\). By Lemma 48, \(\Sigma\) is a Euclidean plane. Let \(z>0\). By (37), \(t+\Sigma_{z}\) is strictly mean-convex for every \(t>0\) when oriented by the unit normal in direction of \(e_{n}\). In conjunction with (40) and the maximum principle, it follows that \(\Sigma_{z^{\prime}}\to\Sigma_{z}\) locally smoothly as \(z^{\prime}\searrow z\). Let \(\{z_{i}\}_{i=1}^{\infty}\) be a sequence of numbers \(0<z_{i}<z\) with \(z_{i}\to z\). Passing to a subsequence and using (40), we see that \(\Sigma_{z_{i}}\) converges locally smoothly to a non-compact area-minimizing hypersurface \(\Sigma_{z}^{\prime}\) that lies below \(\Sigma_{z}\), is axially symmetric with respect to \(e_{n}\), and satisfies \(\inf\{x^{n}:x\in\Sigma_{z}^{\prime}\}=z\). Since both \(\Sigma_{r,z}\), \(r\geq 2\), and \(\Sigma_{z}^{\prime}\) are area-minimizing and axially symmetric with respect to \(e_{n}\), \(\Sigma_{r,z}\) lies below \(\Sigma_{z}^{\prime}\) for every \(r\geq 2\). It follows that \(\Sigma_{z}^{\prime}=\Sigma_{z}\). Consequently, \(\{\Sigma_{z}\}_{z>0}\) forms a smooth foliation; see Figure 3. Finally, suppose, for a contradiction, that \(\Sigma_{z}\) is stable with respect to asymptotically constant perturbations for some \(z\geq 0\). By Proposition 30, \(\Sigma_{z}\) is totally geodesic. It follows that \(\Sigma_{z}=S_{1}^{n-1}(0)\). This is not compatible with \(\Sigma_{z}\) being non-compact. The assertion follows.

Figure 3. An illustration of the foliation \(\{\Sigma_{z}\}_{z\geq 0}\) whose leaves are depicted by the solid black lines. The horizon is depicted by the gray line. \(\Sigma_{0}\) asymptotes to the plane \(\{x\in M:x^{n}=0\}\) depicted by the dashed black line.

## 6. Proof of Theorem 6

Let \(3<n\leq 7\) and \((n-2)/2<\tau<n-2\). In this section, we use the gluing technique developed by A. Carlotto and R. Schoen [9] to construct a Riemannian manifold of dimension \(n\) which is asymptotically flat of rate \(\tau\) with non-negative scalar curvature and positive mass and which admits a non-compact area-minimizing boundary that is stable with respect to asymptotically constant variations (1).

**Lemma 41**.: _There exists a Riemannian metric \(g\) on \(\mathbb{R}^{n}\) which is asymptotically flat of rate \(\tau\) with non-negative scalar curvature and positive mass such that_ \[g=\bar{g}\text{ on }\{x\in\mathbb{R}^{n}:x^{n}\leq 2+|x-x^{n}\,e_{n}|_{\bar{g}}\}.\]

Proof.: The Riemannian metric \[\hat{g}=\left[1+(1+|x|_{\bar{g}}^{2\,n-4})^{-\frac{1}{2}}\right]^{\frac{4}{n-2}}\,\bar{g}\] is asymptotically flat of rate \(n-2\) with non-negative scalar curvature and mass \(2\). The assertion follows from localizing \((\mathbb{R}^{n},\hat{g})\) to the cone \(\{x\in\mathbb{R}^{n}:x^{n}>2+|x-x^{n}\,e_{n}|_{\bar{g}}\}\) using [9, Theorem 2.3].

**Lemma 42**.: _Let \((n-2)/2<\tilde{\tau}<\tau\) and \(g\) be as in Lemma 41. There exists a conformally flat Riemannian metric \(\tilde{g}\) on \(\mathbb{R}^{n}\) that is asymptotically flat of rate \(\tilde{\tau}\) with the following properties:_

* \(\tilde{g}\leq g\)_._
* \(\tilde{g}<g\) _on_ \(\{x\in\mathbb{R}^{n}:x^{n}\geq 2+|x-x^{n}\,e_{n}|_{\bar{g}}\}\)_._
* \(\tilde{g}=\bar{g}\) _on_ \(\{x\in\mathbb{R}^{n}:x^{n}\leq 0\}\)_._
* \(\tilde{g}\) _is axially symmetric with respect to_ \(e_{n}\)_._
* _There holds, for all_ \(x\in\mathbb{R}^{n}\)_,_ \[\sum_{i=1}^{n-1}x^{i}\,\bar{g}(\bar{D}_{e_{i}}\tilde{g},\bar{g})\geq 0.\]

Proof.: Using that \(g\) is asymptotically flat of rate \(\tau\), we see that, provided \(\delta>0\) is sufficiently small, \[g(x)>\left[1-(1-\delta)\,(1+\delta\,|x|_{\bar{g}}^{2})^{-\tilde{\tau}/2}\right]\,\bar{g}\] for every \(x\in\mathbb{R}^{n}\). Let \(\eta\in C^{\infty}(\mathbb{R})\) with * \(0\leq\eta\leq 1\), * \(\eta(t)=0\) if \(t\leq 1/2\), * \(\eta(t)=1\) if \(t\geq 1\), and * \(\eta^{\prime}(t)\geq 0\) for all \(t\in\mathbb{R}\). The metric \[\tilde{g}=\left[1-(1-\delta)\,\eta\big{(}(1+|x-x^{n}\,e_{n}|_{\bar{g}}^{2})^{-1/2}\,x^{n}\big{)}\,(1+\delta\,|x|_{\bar{g}}^{2})^{-\tilde{\tau}/2}\right]\,\bar{g}\] has the asserted properties.

Let \(r>2\), \(z<0\), and \(\tilde{\Sigma}_{r,z}\) be the least area hypersurface in \((\mathbb{R}^{n},\tilde{g})\) with \[\partial\tilde{\Sigma}_{r,z}=S_{r}^{n-2}(0)\times\{z\}.\] Repeating the proofs of Lemma 34, Lemma 35, and Lemma 36, we see that there is \(\tilde{f}_{r,z}\in C^{\infty}(\mathbb{R})\) with \(\tilde{f}_{r,z}(r)=z\) and \(\tilde{f}^{\prime}_{r,z}(0)=0\) such that \[\tilde{\Sigma}_{r,z}=\{(y,\tilde{f}_{r,z}(|y|_{\bar{g}})):y\in\mathbb{R}^{n-1}\text{ with }|y|_{\bar{g}}\leq r\}.\]

**Lemma 43**.: _There are \(t_{0}\) and \(c_{0}>1\) depending only on \(n\) such that, provided that \(r>2\) is sufficiently large,_ \[z\leq\tilde{f}_{r,z}(t)\leq c_{0}+z\] _for all \(t\in(t_{0},r)\) and \(z<0\)._

Proof.: Repeating the proof of Lemma 37, we see that there is \(c>1\) depending only on \(n\) such that \[\left(t^{n-2}\frac{\tilde{f}^{\prime}_{r,z}(t)}{\sqrt{1+\tilde{f}^{\prime}_{r,z}(t)^{2}}}\right)^{\prime}\geq-c\,\frac{t^{n-2}}{1+(t^{2}+\tilde{f}_{r,z}(t)^{2})^{(\tilde{\tau}+1)/2}}.\] The assertion now follows as in the proof of Lemma 38, using that \(\tilde{\tau}>(n-2)/2\geq 1\).

**Lemma 44**.: _There holds_ \[\tilde{\Sigma}_{r,z}=\{y\in\mathbb{R}^{n-1}:|y|_{\bar{g}}\leq r\}\times\{z\}\] _provided that \(r>2\) is sufficiently large and that_ \[z<-c_{0}-t_{0}^{1/(n-2)}\,\int_{1}^{\infty}\frac{1}{\sqrt{s^{2n-4}-1}}\,\mathrm{d}s. \tag{41}\]

Proof.: By Lemma 43 and (41), there is a least \(t_{r,z}\in[0,t_{0})\) such that \(\tilde{f}_{r,z}(t)<0\) for all \(t\in(t_{r,z},r)\) provided that \(r>2\) is sufficiently large. Consequently, \[\left(t^{n-2}\frac{\tilde{f}_{r,z}^{\prime}(t)}{\sqrt{1+\tilde{f}_{r,z}^{\prime}(t)^{2}}}\right)^{\prime}=0\] for all \(t\in(t_{r,z},r)\). Let \[a_{r,z}=-t_{r,z}^{n-2}\frac{\tilde{f}_{r,z}^{\prime}(t_{r,z})}{\sqrt{1+\tilde{f}_{r,z}^{\prime}(t_{r,z})^{2}}}\] and note that \(a_{r,z}<t_{r,z}^{n-2}<t_{0}^{n-2}\). Integrating, we obtain \[\tilde{f}_{r,z}(t)=\tilde{f}_{r,z}(t_{r,z})-\int_{t_{r,z}}^{t}\frac{a_{r,z}}{\sqrt{s^{2\,n-4}-a_{r,z}^{2}}}\,\mathrm{d}s.\] Consequently, using (41), \[\tilde{f}_{r,z}(t_{r,z})=z+\int_{t_{r,z}}^{r}\frac{a_{r,z}}{\sqrt{s^{2\,n-4}-a_{r,z}^{2}}}\,\mathrm{d}s\leq z+t_{0}^{1/(n-2)}\,\int_{1}^{\infty}\frac{1}{\sqrt{s^{2n-4}-1}}\,\mathrm{d}s<0.\] It follows that \(t_{r,z}=0\) so that \(\tilde{\Sigma}_{r,z}\subset\{x\in\mathbb{R}^{n}:x^{n}\leq 0\}\). Using that \(\tilde{g}=\bar{g}\) in \(\{x\in\mathbb{R}^{n}:x^{n}\leq 0\}\), the assertion follows.

Proof of Theorem 6.: Let \[z<-c_{0}-t_{0}^{1/(n-2)}\,\int_{1}^{\infty}\frac{1}{\sqrt{s^{2n-4}-1}}\,\mathrm{d}s\] and \(r>2\) be sufficiently large such that the conclusion of Lemma 44 applies.
Let \(\Sigma_{r,z}\) be the least area hypersurface in \((\mathbb{R}^{n},g)\) with \[\partial\Sigma_{r,z}=S_{r}^{n-2}(0)\times\{z\}.\] Using that \(\tilde{g}\leq g\) with strict inequality where \(g\neq\bar{g}\) and Lemma 44, it follows that \(\Sigma_{r,z}=\tilde{\Sigma}_{r,z}\). Passing to a limit as \(r\to\infty\), we see that \(\mathbb{R}^{n-1}\times\{z\}\) is area-minimizing in \((\mathbb{R}^{n},g)\). Since \(g=\bar{g}\) near \(\mathbb{R}^{n-1}\times\{z\}\), it follows that \(\mathbb{R}^{n-1}\times\{z\}\) is stable with respect to asymptotically constant variations. ## Appendix A Asymptotically flat manifolds In this section, we recall background on asymptotically flat manifolds. Let \((M,g)\) be a connected, complete Riemannian manifold of dimension \(n\geq 3\). We say that \((M,g)\) is asymptotically flat of rate \(\tau>(n-2)/2\) if there is a non-empty compact set \(K\subset M\) and a diffeomorphism \[\{x\in\mathbb{R}^{n}:|x|_{\bar{g}}>1/2\}\to M\setminus K \tag{42}\] such that, in the corresponding coordinate system, \[|g-\bar{g}|_{\bar{g}}+|x|_{\bar{g}}\,|\bar{D}g|_{\bar{g}}+|x|_{\bar{g}}^{2}\, |\bar{D}^{2}g|_{\bar{g}}=O(|x|_{\bar{g}}^{-\tau}).\] Here, \(\bar{g}\) is the Euclidean metric on \(\mathbb{R}^{n}\) and a bar indicates that a geometric quantity is computed with respect to \(\bar{g}\). In addition, we require the scalar curvature of \((M,g)\) to be integrable. If the boundary of \(M\) is non-empty, we require that the boundary is a minimal surface and that every closed minimal hypersurface in \((M,g)\) is contained in the boundary. The particular choice of diffeomorphism (42) is usually fixed and referred to as the chart at infinity of \((M,g)\). Given \(r\geq 1\), we use \(B_{r}\) to denote the open bounded set whose boundary corresponds to \(S_{r}^{n-1}(0)=\{x\in\mathbb{R}^{n}:|x|_{\bar{g}}=r\}\) in this chart. If \((M,g)\) is asymptotically flat, the quantity \[m=\frac{1}{2\left(n-1\right)n\,\omega_{n}}\,\lim_{\lambda\to\infty}\lambda^{-1 }\,\int_{S_{\lambda}^{n-1}(0)}\sum_{i,\,j=1}^{n}x^{i}\,\left[(\bar{D}_{e_{j}}g)(e_ {i},e_{j})-(\bar{D}_{e_{i}}g)(e_{j},e_{j})\right]\,\mathrm{d}\bar{\mu} \tag{43}\] is called the mass of \((M,g)\); see [2]. Here, \(e_{1},\ldots,e_{n}\) is the standard basis of \(\mathbb{R}^{n}\). The existence of the limit in (43) follows from the integrability of the scalar curvature and the decay assumptions on \(g\). It is independent of the particular choice of chart at infinity; see [4, Theorem 4.2]. Note that the mass vanishes if \(\tau>n-2\). ## Appendix B Isoperimetric regions and their limits In this section, we collect results on large isoperimetric regions. Let \((M,g)\) be a connected, complete Riemannian manifold of dimension \(3\leq n\leq 7\) that is asymptotically flat. Recall that the boundary of \(M\) is either empty or an outermost minimal surface. Let \(\hat{M}\) be an open manifold in which \(M\) is embedded. A subset \(\Omega\subset M\) is called a region if \[\hat{\Omega}=\Omega\cup(\hat{M}\setminus M)\] is a properly embedded \(n\)-dimensional submanifold of \(\hat{M}\). Note that the boundary of \(\hat{\Omega}\) is a properly embedded hypersurface of \(M\). It does not depend on the choice of extension \(\hat{M}\) and will be denoted by \(\partial\Omega\). Let \(\Omega\subset M\) be a region. The second fundamental form \(h\) and the mean curvature scalar \(H\) of \(\partial\Omega\) are computed with respect to the normal pointing out of \(\Omega\). We are interested in three special types of regions in this paper. 
* A region \(\Omega\subset M\) is isoperimetric if it is compact and \[|\partial\Omega|\leq|\partial\tilde{\Omega}|\] for every compact region \(\tilde{\Omega}\subset M\) with \(|\tilde{\Omega}|=|\Omega|\). * A region \(\Omega\subset M\) has area-minimizing boundary if, for every \(U\Subset M\) open, there holds \[|U\cap\partial\Omega|\leq|U\cap\partial\tilde{\Omega}|\] for every region \(\tilde{\Omega}\subset M\) with \(\tilde{\Omega}\bigtriangleup\Omega\Subset U\). * A region \(\Omega\subset M\) is locally isoperimetric if, for every \(U\Subset M\) open, there holds \[|U\cap\partial\Omega|\leq|U\cap\partial\tilde{\Omega}|\] for every region \(\tilde{\Omega}\subset M\) with \(\Omega\bigtriangleup\tilde{\Omega}\Subset U\) and \(|U\cap\tilde{\Omega}|=|U\cap\Omega|\). **Remark 45**.: _Alternatively, we could introduce these notions using sets with locally finite perimeter and their reduced boundaries instead of properly embedded \(n\)-dimensional submanifolds and their boundaries. Standard regularity theory shows that the reduced boundary of such a locally isoperimetric set is smooth; see, e.g., the survey of results in [13, Section 4]._ **Lemma 46**.: _Let \(\Omega_{1}\), \(\Omega_{2},\ldots\subset M\) be isoperimetric regions of \((M,g)\). There holds_ \[\sup_{k\geq 1}\,\sup_{r>1}r^{1-n}\,|B_{r}\cap\partial\Omega_{k}|<\infty. \tag{44}\] _Moreover, for every \(\alpha>0\), there is \(c>1\) such that_ \[\limsup_{k\to\infty}\,\sup_{1<s<t}\left(t^{n-1-\alpha}+s^{n-1-\alpha}\right)^ {-1}\,\int_{(B_{t}\setminus B_{s})\cap\partial\Omega_{k}}|x|_{\bar{g}}^{- \alpha}\,\mathrm{d}\bar{\mu}<\infty.\] Proof.: To prove (44), for \(r>1\) sufficiently large and such that \(\partial B_{r}\) and \(\partial\Omega_{k}\) intersect transversely, we may obtain comparison regions \(\tilde{\Omega}_{k}\) by cutting \(B_{r}\cap\Omega_{k}\) from \(\Omega_{k}\) and pasting in a ball whose volume equals that of the removed part. Using also Lemma 65, the assertion follows. **Lemma 47**.: _Let \(\Omega\subset M\) be a locally isoperimetric region of \((M,g)\) with \(\partial\Omega\neq\emptyset\). Then \(\partial\Omega\) has constant mean curvature. The mean curvature is zero when the boundary is area-minimizing._ **Lemma 48**.: _Assume that \(\tilde{\Omega}\subset\mathbb{R}^{n}\) is a locally isoperimetric region with \(\partial\tilde{\Omega}\neq\emptyset\). Then \(\partial\tilde{\Omega}\) is either a hyperplane or a coordinate sphere._ Proof.: This is [22, Proposition 1]. **Remark 49**.: _The same characterization holds for sets of locally finite perimeter that are locally isoperimetric in \(\mathbb{R}^{n}\setminus\{0\}\). The potential singularity in the reduced boundary of such a set at \(0\) is removable; cp. Remark 45._ Let \(\Omega_{1}\), \(\Omega_{2},\ldots\subset M\) be locally isoperimetric regions of \((M,g)\) with \(\partial\Omega_{k}\neq\emptyset\). **Lemma 50**.: _Assume that \(\limsup_{k\to\infty}|H(\partial\Omega_{k})|<\infty\). Then \(\limsup_{k\to\infty}|h(\partial\Omega_{k})|<\infty\). Moreover, there exists a locally isoperimetric region \(\Omega\subset M\) such that, passing to a subsequence, \(\Omega_{k}\to\Omega\) locally smoothly._ Proof.: This is a standard result from geometric measure theory; see, e.g., the survey in [13, Section 4]. **Lemma 51**.: _Let \(\Omega\subset M\) be a locally isoperimetric region with non-compact boundary. Then \(\Omega\) has area-minimizing boundary. All components of \(\partial\Omega\) except for one are components of the boundary of \(M\)._ Proof.: Suppose, for a contradiction, that \(H(\partial\Omega)\neq 0\). 
Let \(\{x_{\ell}\}_{\ell=1}^{\infty}\) be a sequence of points \(x_{\ell}\in\partial\Omega\setminus B_{1}\) with \(|x_{\ell}|_{\bar{g}}\to\infty\). It follows from Lemma 50 that, passing to a subsequence, \[-x_{\ell}+\Omega\setminus B_{1}\to\tilde{\Omega}\text{ locally smoothly in }\mathbb{R}^{n}\] where \(\tilde{\Omega}\subset\mathbb{R}^{n}\) is a locally isoperimetric region with \(\partial\tilde{\Omega}\neq\emptyset\) and \(\tilde{H}(\partial\tilde{\Omega})=H(\partial\Omega)\). By Lemma 48, \(\partial\tilde{\Omega}\) is a coordinate sphere of radius \((n-1)\,|H(\partial\Omega)|^{-1}\). It follows that \(\Omega\) has infinitely many bounded components, each one close to a coordinate ball of radius \((n-1)\,|H(\partial\Omega)|^{-1}\). Such a configuration contradicts the Euclidean isoperimetric inequality. Thus \(H(\partial\Omega)=0\). Let \(\{s_{\ell}\}_{\ell=1}^{\infty}\) be a sequence of numbers \(s_{\ell}>1\) with \(s_{\ell}\to\infty\). By Lemma 50, passing to a subsequence, \[s_{\ell}^{-1}\left(\Omega\setminus B_{1}\right)\to\breve{\Omega}\text{ locally smoothly in }\mathbb{R}^{n}\setminus\{0\}\] where \(\breve{\Omega}\subset\mathbb{R}^{n}\) is a locally isoperimetric region whose boundary is non-empty with mean curvature zero. By Lemma 48, \(\breve{\Omega}\) is a half-space. It follows that \(\partial\Omega\) has only one unbounded component. To see that the boundary of \(\Omega\) is area-minimizing, observe that there are constants \(\delta,c>0\) such that, for every \(\ell\) sufficiently large and all \(v\in(-\delta,\delta)\), there is a region \(\tilde{\Omega}\subset M\) with \(\tilde{\Omega}\,\triangle\,\Omega\Subset\{x\in\mathbb{R}^{n}:s_{\ell}<|x|_{ \bar{g}}<2\,s_{\ell}\}=U_{\ell}\) and such that \[s_{\ell}^{-n}\left(|U_{\ell}\cap\Omega|-|U_{\ell}\cap\tilde{\Omega}|\right)=v \qquad\text{and}\qquad s_{\ell}^{1-n}\left||U_{\ell}\cap\partial\Omega|-|U_{ \ell}\cap\partial\tilde{\Omega}|\right|\leq c\,v^{2}.\] Thus, we can add or subtract an amount of volume \(V\) from \(\Omega\) at the cost of changing the boundary area by an amount of order \(s_{\ell}^{-1}\,V^{2}\). Since \(s_{\ell}^{-1}\,V^{2}=o(1)\), it follows that \(\Omega\) has area-minimizing boundary. See [13, Appendix C] for additional details on this argument in the case where \(n=3\). The preceding argument also shows that, in the case where the boundary of \(M\) is empty, \(\partial\Omega\) has no bounded components, since deleting such components and compensating for the loss of volume far out decreases area. In the case where the boundary of \(M\) is non-empty, bounded components of \(\partial\Omega\), being closed minimal surfaces, are contained in the boundary of \(M\). We now assume that \(\Omega_{k}\subset M\) are isoperimetric regions with \(|\Omega_{k}|\to\infty\). **Lemma 52**.: _There holds_ \[\lim_{k\to\infty}\lambda(\partial\Omega_{k})^{-n}\,|\Omega_{k}|=\omega_{n}. \tag{45}\] Proof.: By comparison with coordinate balls far out in the asymptotically flat end, we see that \[\liminf_{k\to\infty}\lambda(\partial\Omega_{k})^{-n}\,|\Omega_{k}|\geq\omega_{ n}.\] Let \(\varepsilon>0\) and \(r>1\) be large such that \(\partial B_{r}\) and \(\partial\Omega_{k}\) intersect transversely for all \(k\) sufficiently large. Let \(\Omega_{k,r}=\Omega_{k}\setminus B_{r}\). Note that \[|\Omega_{k,r}|\geq|\Omega_{k}|-|B_{r}|\qquad\text{and}\qquad|\partial\Omega_{k }|\geq|\partial\Omega_{k,r}|-|\partial B_{r}|. 
\tag{46}\] By the Euclidean isoperimetric inequality, \[\bar{\lambda}(\partial\Omega_{k,r})^{-n}\,|\Omega_{k,r}|_{\bar{g}}\leq\omega_{ n}.\] Increasing \(r>1\), if necessary, we obtain \[\lambda(\partial\Omega_{k,r})^{-n}\,|\Omega_{k,r}|\leq\omega_{n}+\varepsilon\] for all \(k\) sufficiently large. Letting \(k\to\infty\) and using (46), we conclude that \[\limsup_{k\to\infty}\lambda(\partial\Omega_{k})^{-n}\,|\Omega_{k}|\leq\omega_{ n}+\varepsilon.\] The assertion follows. It follows from Lemma 52 that \(\lambda(\partial\Omega_{k})\to\infty\). **Lemma 53**.: _If the boundary of \(M\) is non-empty, then \(H(\partial\Omega_{k})>0\) for all \(k\). If the boundary of \(M\) is empty, then \(H(\partial\Omega_{k})>0\) for all sufficiently large \(k\)._ We now assume that \(H(\partial\Omega_{k})>0\) for all \(k\). **Lemma 54**.: _There holds \(\limsup_{k\to\infty}\lambda(\partial\Omega_{k})\,H(\partial\Omega_{k})<\infty\)._ Proof.: Suppose, for a contradiction, that \(\lambda(\partial\Omega_{k})\,H(\partial\Omega_{k})\to\infty\). Let \(x_{k}\in\partial\Omega_{k}\) be such that \(\liminf_{k\to\infty}\lambda(\partial\Omega_{k})^{-1}\,|x_{k}|_{\bar{g}}>0\) and \(r_{k}=(n-1)\,H(\partial\Omega_{k})^{-1}\). Passing to a subsequence, \[r_{k}^{-1}\left(-x_{k}+\Omega_{k}\setminus B_{1}\right)\to\{x\in\mathbb{R}^{n }:|x|_{\bar{g}}\leq 1\}\text{ locally smoothly in }\mathbb{R}^{n}.\] It follows that the component of \(\Omega_{k}\) which contains \(x_{k}\) is close to a coordinate ball of radius \(r_{k}=o(\lambda(\partial\Omega_{k}))\). Using Lemma 52, we see that \(\Omega_{k}\) contains a second such component. By the usual cut-and-paste argument, see, e.g., the proof of Lemma 51, this contradicts the assumption that \(\Omega_{k}\) is isoperimetric. **Lemma 55**.: _Assume that there is a sequence \(\{x_{k}\}_{k=1}^{\infty}\) of points \(x_{k}\in\partial\Omega_{k}\setminus B_{1}\) with \(\lambda(\partial\Omega_{k})^{-1}\,|x_{k}|_{\bar{g}}\to\infty\). The component of \(\Omega_{k}\) which contains \(x_{k}\) is close to a coordinate ball of radius \((n-1)\,H(\partial\Omega_{k})^{-1}\)._ It follows from Lemma 55 that \(\Omega_{k}\) has at most one component that is, on the scale of \(\lambda(\partial\Omega_{k})\), far from \(B_{1}\). **Lemma 56**.: _Assume that there are \(x_{k}\in\partial\Omega_{k}\setminus B_{1}\) with \(0<\liminf_{k\to\infty}\lambda(\partial\Omega_{k})^{-1}\,|x_{k}|_{\bar{g}}<\infty\). There is \(\xi\in\mathbb{R}^{n}\) such that, passing to a subsequence,_ \[\lambda(\partial\Omega_{k})^{-1}\left(\Omega_{k}\setminus B_{1}\right)\to\{x \in\mathbb{R}^{n}:|x-\xi|_{\bar{g}}\leq 1\}\text{ locally smoothly in }\mathbb{R}^{n}\setminus\{0\}.\] Proof.: The assumption implies locally smooth convergence of a subsequence to a non-trivial locally isoperimetric region in \(\mathbb{R}^{n}\), the area of whose boundary is at most \(n\,\omega_{n}\). Such a region is a ball of radius \(0<r\leq 1\). Note that \((n-1)\,/\,r=\lim_{k\to\infty}\lambda(\partial\Omega_{k})\,H(\partial\Omega_{k})\). Suppose, for a contradiction, that \(r<1\). Using Lemma 55, we see that \(\Omega_{k}\) has at least one additional large component that lies far out. By the usual cut-and-paste argument, this contradicts the assumption that \(\Omega_{k}\) is isoperimetric. **Corollary 57**.: _There holds_ \[\lim_{k\to\infty}\lambda(\partial\Omega_{k})\,H(\partial\Omega_{k})=n-1. 
\tag{47}\] Now, we assume in addition that there is \(K\subset M\) compact and disjoint from the boundary of \(M\) such that, for all \(k\), \[K\cap\partial\Omega_{k}\neq\emptyset.\] **Proposition 58**.: _There is a region \(\Omega\subset M\) with non-compact area-minimizing boundary such that, passing to a subsequence, \(\Omega_{k}\to\Omega\) locally smoothly. There is \(\xi\in\mathbb{R}^{n}\) with \(|\xi|_{\bar{g}}=1\) such that, passing to a subsequence, \(\lambda(\partial\Omega_{k})^{-1}\left(\Omega_{k}\setminus B_{1}\right)\to\{x \in\mathbb{R}^{n}:|x-\xi|_{\bar{g}}\leq 1\}\) locally smoothly in \(\mathbb{R}^{n}\setminus\{0\}\)._ Proof.: This follows from Lemma 50, Lemma 51, and Lemma 56. In the following lemma, \(\mathring{h}\) denotes the traceless second fundamental form. **Lemma 59**.: _Let \(\{x_{k}\}_{k=1}^{\infty}\) be a sequence of points \(x_{k}\in\partial\Omega_{k}\setminus B_{1}\) with \(|x_{k}|_{\bar{g}}\to\infty\). Then_ \[\limsup_{k\to\infty}|x_{k}|_{\bar{g}}\,|\mathring{h}(\partial\Omega_{k})(x_{k}) |=0. \tag{48}\] Proof.: Suppose first that \(\lim_{k\to\infty}\lambda(\partial\Omega_{k})^{-1}|x_{k}|_{\bar{g}}=0\). By Lemmas 48 and 50, passing to a subsequence, \(|x_{k}|_{\bar{g}}^{-1}\,(\Omega_{k}\setminus B_{1})\) converges to a half-space locally smoothly in \(\mathbb{R}^{n}\setminus\{0\}\). Suppose second that \(\liminf_{k\to\infty}\,\lambda(\partial\Omega_{k})^{-1}|x_{k}|_{\bar{g}}>0\). By Proposition 58, passing to a subsequence, \(|x_{k}|_{\bar{g}}^{-1}\,(\Omega_{k}\setminus B_{1})\) converges to a ball locally smoothly in \(\mathbb{R}^{n}\setminus\{0\}\). Either way, the assertion follows. The isoperimetric profile of \((M,g)\) is the function \(A:(0,\infty)\to(0,\infty)\) given by \[A(V)=\inf\{|\partial\Omega|:\Omega\subset M\text{ is a compact region with }|\Omega|=V\}. \tag{49}\] **Lemma 60**.: _The isoperimetric profile of \((M,g)\) is absolutely continuous. As \(V\to\infty\),_ \[\left(\omega_{n}^{-1}\,V\right)^{\frac{1-n}{n}}\,A(V)=n\,\omega_{n}+o(1)\] _and, almost everywhere,_ \[\left(\omega_{n}^{-1}\,V\right)^{\frac{1}{n}}\,A^{\prime}(V)=(n-1)+o(1).\] _If the boundary of \(M\) is non-empty, then \(A\) is a strictly increasing function._ Proof.: See, e.g., [10, Appendix A] and [13, Proposition 4]. **Lemma 61** ([8, Theorem 1.12]).: _Assume that \((M,g)\) has positive mass. For every sufficiently large amount of volume \(V>0\), there exists an isoperimetric region \(\Omega\subset M\) with \(|\Omega|=V\)._ ## Appendix C Variation of area and volume In this section, we recall the first and second variational formulae for area and volume and the definition of a stable constant mean curvature surface; see, e.g., [8, Appendix H]. Let \((M,g)\) be a Riemannian manifold without boundary of dimension \(n\geq 3\). Let \(\Sigma\subset M\) be a closed hypersurface bounding a compact region \(\Omega\). We denote by \(\mathrm{d}\mu\) the area element, by \(\nu\) the outward pointing unit normal, and by \(h\) and \(H\) the second fundamental form and mean curvature, respectively, computed with respect to \(\nu\). Let \(\varepsilon>0\) and \(U\in C^{\infty}(\Sigma\times(-\varepsilon,\varepsilon))\) with \(U(x,0)=0\) for all \(x\in\Sigma\). 
Decreasing \(\varepsilon\) if necessary, we obtain a smooth family \(\{\Sigma(s):s\in(-\varepsilon,\varepsilon)\}\) of hypersurfaces \(\Sigma(s)\subset M\) where \[\Sigma(s)=\{\exp_{x}(U(x,s)\,\nu(x)):x\in\Sigma\}.\] We define the initial velocity \(u\in C^{\infty}(\Sigma)\) and the initial acceleration \(v\in C^{\infty}(\Sigma)\) by \[u(x)=\dot{U}(x,0)\qquad\text{and}\qquad v(x)=\ddot{U}(x,0).\] Let \(\Omega(s)\) be the compact region bounded by \(\Sigma(s)\). **Lemma 62**.: _There holds_ \[\left.\frac{d}{ds}\right|_{s=0}|\Sigma(s)| =\int_{\Sigma}H\,u\,\mathrm{d}\mu, \tag{50}\] \[\left.\frac{d^{2}}{ds^{2}}\right|_{s=0}|\Sigma(s)| =\int_{\Sigma}H\,v+H^{2}\,u^{2}+|\nabla u|^{2}-\left(|h|^{2}+Ric( \nu,\nu)\right)u^{2}\,\mathrm{d}\mu.\] _Moreover,_ \[\left.\frac{d}{ds}\right|_{s=0}\!|\Omega(s)|=\int_{\Sigma}u\,\mathrm{d }\mu,\] \[\left.\frac{d^{2}}{ds^{2}}\right|_{s=0}\!|\Omega(s)|=\int_{\Sigma}v+ H\,u^{2}\,\mathrm{d}\mu.\] Assume that for every such variation satisfying also \[\left.\frac{d}{ds}\right|_{s=0}\!|\Omega(s)|=\left.\frac{d^{2}}{ds^{2}}\right|_ {s=0}\!|\Omega(s)|=0,\] there holds \[\left.\frac{d}{ds}\right|_{s=0}\!|\Sigma(s)|=0\qquad\text{and}\qquad\left. \frac{d^{2}}{ds^{2}}\right|_{s=0}\!|\Sigma(s)|\geq 0.\] Note that the mean curvature \(H\) of \(\Sigma\) is constant in this case. We say that \(\Sigma\) is a stable constant mean curvature surface. Note that each component of the boundary of an isoperimetric region \(\Omega\subset M\) is a stable constant mean curvature surface with the same constant mean curvature. We record the following integration by parts formula for the second variation of area formula (50) with respect to a Euclidean translation. **Lemma 63**.: _Let \(\Sigma\subset\mathbb{R}^{n}\) be a compact oriented hypersurface with boundary \(\partial\Sigma\). Let \(\bar{\nu}\) be a unit normal of \(\Sigma\) and \(\bar{\omega}\) the outward-pointing conormal of \(\partial\Sigma\). Let \(\bar{u},\bar{v}:\Sigma\to\mathbb{R}\) be given by_ \[\bar{u}=\bar{g}(e_{n},\bar{\nu})\qquad\text{and}\qquad\bar{v}=-\bar{h}(e_{n}^ {\bar{\top}},e_{n}^{\bar{\top}}).\] _There holds_ \[\int_{\Sigma}\bar{H}\,\bar{v}+\bar{H}^{2}\,\bar{u}^{2}+|\bar{\nabla}\bar{u}|_ {\bar{g}}^{2}-|\bar{h}|_{\bar{g}}^{2}\,\bar{u}^{2}\,\mathrm{d}\bar{\mu}=\int_ {\partial\Sigma}\bar{h}(\bar{\omega},e_{n}^{\bar{\top}})\,\bar{u}\,\mathrm{d} \bar{\ell}-\int_{\partial\Sigma}\bar{g}(e_{n}^{\bar{\top}},\bar{\omega})\bar{ H}\,\bar{u}\,\mathrm{d}\bar{\ell}.\] Proof.: Using that \(\bar{\Delta}\bar{u}+|\bar{h}|_{\bar{g}}^{2}\,\bar{u}=\bar{\nabla}_{e_{n}^{ \bar{\top}}}\bar{H}\), we obtain \[\int_{\Sigma}|\bar{\nabla}\bar{u}|_{\bar{g}}^{2}\,\mathrm{d}\bar{\mu}=\int_{ \partial\Sigma}\bar{h}(\bar{\omega},e_{n}^{\bar{\top}})\,\bar{u}\,\mathrm{d}\bar{ \ell}+\int_{\Sigma}|\bar{h}|_{\bar{g}}^{2}\,\bar{u}^{2}-(\bar{\nabla}_{e_{n}^ {\bar{\top}}}\bar{H})\,\bar{u}\,\mathrm{d}\bar{\mu}.\] Using the Codazzi equation \(\bar{\nabla}_{e_{n}^{\bar{\top}}}\bar{H}=(\bar{\operatorname{div}}\,\bar{ h})(e_{n}^{\bar{\top}})\) and integrating by parts again, we have \[\int_{\Sigma}(\bar{\nabla}_{e_{n}^{\bar{\top}}}\bar{H})\,\bar{u}\,\mathrm{d} \bar{\mu}=\int_{\partial\Sigma}\bar{g}(e_{n}^{\bar{\top}},\bar{\omega})\bar{H}\, \bar{u}\,\mathrm{d}\bar{\ell}+\int_{\Sigma}\bar{H}^{2}\,\bar{u}^{2}+\bar{H}\, \bar{v}\,\mathrm{d}\bar{\mu}.\] 
## Appendix D Hypersurface geometry in an asymptotically flat end In this section, we assume that \(g\) is a Riemannian metric on \(\mathbb{R}^{n}\) where \(n\geq 3\) and that, for some \(\tau>0\), \[|g-\bar{g}|_{\bar{g}}+|x|_{\bar{g}}\,|\bar{D}g|_{\bar{g}}=O(|x|_{\bar{g}}^{-\tau}).\] Let \(\Sigma\subset\mathbb{R}^{n}\) be a two-sided hypersurface with area element \(\mathrm{d}\mu\), designated normal \(\nu\), and second fundamental form \(h\) and mean curvature \(H\) with respect to \(\nu\). The corresponding Euclidean quantities are denoted with a bar. **Lemma 64**.: _As \(x\to\infty\),_ \[\nu= \,\bar{\nu}+O(|x|_{\bar{g}}^{-\tau}),\] \[\,\mathrm{d}\mu= \,(1+O(|x|_{\bar{g}}^{-\tau}))\,\mathrm{d}\bar{\mu},\] \[|x|_{\bar{g}}\,h= |x|_{\bar{g}}\,\bar{h}+O(|x|_{\bar{g}}^{-\tau})+O(|x|_{\bar{g}}^{ 1-\tau}\,|\bar{h}|_{\bar{g}}),\text{ and }\] \[|x|_{\bar{g}}\,H= |x|_{\bar{g}}\,\bar{H}+O(|x|_{\bar{g}}^{-\tau})+O(|x|_{\bar{g}}^{ 1-\tau}\,|\bar{h}|_{\bar{g}}).\] **Lemma 65**.: _Suppose that \(\partial\Sigma=\emptyset\). Let \(\alpha>0\), \(0<s<t\), and suppose that, for some \(c\geq 1\),_ \[r^{1-n}\,|B_{r}^{n}(0)\cap\Sigma|_{\bar{g}}\leq c\] _for all \(s<r<t\). There holds_ \[\int_{(B_{t}^{n}(0)\setminus B_{s}^{n}(0))\cap\Sigma}|x|_{\bar{g}}^{-\alpha} \,\mathrm{d}\bar{\mu}\leq c\,t^{n-1-\alpha}+\frac{c\,\alpha}{n-1-\alpha}\,\left(t ^{n-1-\alpha}+s^{n-1-\alpha}\right).\] ## Appendix E A Liouville theorem on the slab In this section, we prove a Liouville theorem for harmonic functions on a slab. **Lemma 66**.: _Let \(n\geq 2\). Let \(f\in C^{\infty}(\mathbb{R}^{n-1}\times[0,2])\) be a non-negative harmonic function with \(f(x)=0\) for all \(x\in\mathbb{R}^{n-1}\times\{0,\,2\}\). Assume that_ \[\sup\{|(\bar{D}f)(x)|_{\bar{g}}:x\in\mathbb{R}^{n-1}\times\{0\}\}<\infty. \tag{51}\] _Then \(f=0\)._ Proof.: Let \(x_{0}\in\mathbb{R}^{n-1}\times\{1\}\) and \(v\in C^{\infty}(\mathbb{R}^{n-1}\times[0,2])\) be given by \(v(x)=f(x_{0})\,x^{n}\). Note that \(v\) is harmonic. By the Boundary Harnack comparison principle, see, e.g., [6, Theorem 11.6], there is a constant \(c>0\) depending only on \(n\) such that \(v\leq c\,f\) on \(B_{1}^{n}(x_{0}-e_{n})\cap(\mathbb{R}^{n-1}\times[0,2])\). In particular, \[f(x_{0})=(\partial_{e_{n}}v)(x_{0}-e_{n})\leq c\,(\partial_{e_{n}}f)(x_{0}-e_ {n}).\] In conjunction with (51), we see that \(f\) is bounded on \(\mathbb{R}^{n-1}\times\{1\}\). By the Boundary Harnack inequality, see, e.g., [6, Theorem 11.5], it follows that \(f\) is bounded in all of \(\mathbb{R}^{n-1}\times[0,2]\). We extend \(f\) to a bounded periodic harmonic function \(\tilde{f}\in C^{\infty}(\mathbb{R}^{n})\). By the Liouville theorem, \(\tilde{f}\) is constant. Since \(\tilde{f}\) vanishes on \(\mathbb{R}^{n-1}\times\{0\}\), it follows that \(f=0\). **Remark 67**.: _The function \(f\in C^{\infty}(\mathbb{R}^{n-1}\times[0,2])\) given by \(f(x)=\exp(\pi\,x^{1}/2)\,\sin(\pi\,x^{n}/2)\) is non-negative, harmonic, satisfies \(f(x)=0\) for all \(x\in\mathbb{R}^{n-1}\times\{0,\,2\}\), but violates (51)._
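The counterexample in Remark 67 can be checked mechanically. The following sketch (in Python with SymPy; an addition for illustration, not part of the original text) verifies harmonicity, the boundary conditions, and the failure of (51); only the coordinates \(x^{1}\) and \(x^{n}\) enter, since the function is constant in the remaining directions.

```python
# A quick symbolic sanity check of Remark 67 (a sketch; the function is
# constant in x^2, ..., x^{n-1}, so two variables suffice).
import sympy as sp

x1, xn = sp.symbols("x1 xn", real=True)
f = sp.exp(sp.pi * x1 / 2) * sp.sin(sp.pi * xn / 2)

# Harmonic: the second derivatives in x1 and xn cancel exactly.
assert sp.simplify(sp.diff(f, x1, 2) + sp.diff(f, xn, 2)) == 0

# Vanishes on both boundary planes of the slab R^{n-1} x [0, 2].
assert f.subs(xn, 0) == 0 and sp.simplify(f.subs(xn, 2)) == 0

# Non-negative inside the slab since sin(pi*xn/2) >= 0 for 0 <= xn <= 2.
# The normal derivative on {xn = 0} grows like exp(pi*x1/2), so (51) fails.
dfn = sp.diff(f, xn).subs(xn, 0)
print(sp.limit(dfn, x1, sp.oo))  # prints oo: the gradient bound (51) is violated
```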
2305.10941
Evaluating the validity of a German translation of an uncanniness questionnaire
When researching the acceptance of robots in human-robot interaction, the Uncanny Valley needs to be considered. Reusable and standardized measures for it are essential. In this paper, one such questionnaire was translated into German. The translated indices were evaluated (n=140) for reliability with Cronbach's alpha. Additionally, the items were tested with an exploratory and a confirmatory factor analysis for problematic correlations. The results yield a good reliability for the translated indices and reveal some items that need to be checked further.
Sarah Wingert, Christian Becker-Asano
2023-05-18T12:58:59Z
http://arxiv.org/abs/2305.10941v1
# Evaluating the validity of a German translation of an uncanniness questionnaire ###### Abstract When researching the acceptance of robots in human-robot interaction, the Uncanny Valley needs to be considered. Reusable and standardized measures for it are essential. In this paper, one such questionnaire was translated into German. The translated indices were evaluated (n=140) for reliability with Cronbach's alpha. Additionally, the items were tested with an exploratory and a confirmatory factor analysis for problematic correlations. The results yield a good reliability for the translated indices and reveal some items that need to be checked further. uncanny valley, questionnaire translation, German language, human-robot interaction, evaluation ## I Introduction When developing robots that are intended for Human-Robot Interaction (HRI), it is important that they are accepted by the target group. It has been found that anthropomorphic features support the social acceptance of the robot, provided the design is appropriate for its task [9]. However, as Mori suggested, an anthropomorphic design might lead to a feeling of repulsion if it is too close to an actual human. This effect is called the "Uncanny Valley" [7]. Since the social acceptance of a robot cannot directly be determined by objective measures, it is necessary to have appropriate tools to measure the acceptance or, conversely, the eeriness of a robot within the target group [2]. For research regarding the Uncanny Valley, it is also useful to have a measure for human-likeness. Because most engineers are neither trained to create valid questionnaires nor have the capacities to validate created questionnaires properly, standardized measures that can be reused across studies are of great value [2]. Two such questionnaires are the Godspeed questionnaire [2] and the questionnaire by MacDorman and Ho [6], which specifically addresses research on the Uncanny Valley. The Godspeed questionnaire contains five parts measuring the concepts of anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety. Each concept contains three to six items presented in the form of semantic differentials [2]. At the time of writing, the Godspeed questionnaire is available in 18 different languages [1]. This enables the use of the questionnaire in different cultures and language areas. However, when researching the Uncanny Valley, the lack of an index measuring eeriness is problematic, as eeriness is inherently different from reversed likeability and essential for identifying an Uncanny Valley. Additionally, the indices contained in the Godspeed questionnaire are significantly correlated with each other, which leads to a highly skewed diagram if two indices of the set are used as the x- and y-axes of one chart [6]. Since the correlation between the indices in the Godspeed questionnaire was traced back to the correlation of each with interpersonal warmth, MacDorman and Ho created indices that are decorrelated from interpersonal warmth and from each other, to enable the measurement of less skewed data when researching the Uncanny Valley [6]. This questionnaire was later revised with a broader range of sample data to avoid clustering in the responses [5]. ## II Translation of the questionnaire to German In our work, we investigated the effect of three different faces on the perception of a mobile robot [12]. 
The research specifically focused on the question of whether a "perceptual mismatch" between robot and face would lead to high eeriness, as proposed in the literature [4], when measures were taken to ensure that the robot and the face are processed individually. To avoid distortion of the results, it is important that the axes for humanlikeness and eeriness are decorrelated. Therefore, the revised questionnaire of MacDorman and Ho was chosen. However, as the work was performed in Germany, a German version of the questionnaire was required. To the best of our knowledge, at the time this translation was done, there was no other German translation available. Hence we carried out the translation ourselves, cf. Tables I to III. However, we recently came across another German translation [10]. Generally, the back translation method is advised to ensure a proper translation of a questionnaire into another language [2]. In this method, one bilingual person translates the original questionnaire into the target language, while a second bilingual person translates the target language back into the source language without knowing the original version. If the result of the back translation is equivalent to the original source, it can be assumed that the translation is equivalent as well [3]. In our translation, the back translation approach was successful for the humanlikeness index, cf. Table I. Two independent back translators each translated 80% of the words back identically to the source, leading to a total coverage of 90% identical translations. The wording "without definite lifetime" was not translated back exactly, but the back translation of "unlimited lifetime" was interpreted as equivalent. Also, the plausibility question was translated back identically by one back translator. In the index of eeriness, a few adaptations were made, cf. Table II. As the English questionnaire mixed words that in German would describe feelings evoked by the robot with words directly describing the perception of the robot (e.g., "dull" vs. "freaky"), the translations were adapted to equivalent words describing the perception of the robot. For example, the German translation for "dull" is a direct translation of its synonym "plain" instead of the word itself. Despite the adaptations, particular attention was paid to ensuring that the connotations with positive (and negative) affect within one semantic differential did not change. As English has a variety of words describing emotional states and the translations were singular words without a broader context, the back translation process was not viable for emotional states. Therefore, the index was checked for equivalent meanings by only one bilingual native speaker proofreading the source and target translation. After careful consideration, the translation of "weird" with "komisch" was identified as having a double meaning, because the German word can also mean "funny". However, there were no noticeable problems with it in the analysis. An adaptation to the alternative word "seltsam" might still be advisable. The index of attractiveness was translated but not checked, because it was not part of the statistical analysis of our work. The translations are reported in Table III, but using the back translation method for confirmation is advised. 
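For reference, the coverage figures reported above amount to a simple word-matching computation. The following Python sketch (with placeholder data; the item lists and values are hypothetical, not the paper's actual items) illustrates how the per-translator and combined identical-back-translation coverage can be computed.

```python
# A hypothetical sketch of the back-translation coverage computation:
# the share of items each back translator returned identically, and the
# share covered by at least one of the two translators.
source = ["synthetic", "mechanical", "artificial"]   # placeholder source items
back_a = ["synthetic", "mechanical", "man-made"]     # back translator A
back_b = ["synthetic", "machine-like", "artificial"] # back translator B

per_translator = [
    sum(s == t for s, t in zip(source, back)) / len(source)
    for back in (back_a, back_b)
]
combined = sum(
    s == a or s == b for s, a, b in zip(source, back_a, back_b)
) / len(source)
print(per_translator, combined)  # e.g. [0.67, 0.67] per translator, 1.0 combined
```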
## III Validation and reliability testing As mentioned in Section II, our German version of the questionnaire was used to evaluate how the humanlikeness and eeriness of a robot's physical appearance change with different versions of facial displays of emotions. The robot's use case was the transportation and sale of food and water bottles. A human confronting the robot could either buy food or not. To support anthropomorphism and emotional bonding to the robot, emotional reactions were implemented. When a human approached, the robot showed surprise. When the human bought something, the robot showed happiness, and if nothing was bought, it showed sadness. These emotions were modelled with three faces with different levels of humanlikeness, depending on the test group, cf. Figure 1. ### _Data collection procedure_ The survey was conducted online with a series of videos being presented to each participant, see Figure 2 for an overview. Following a between-subjects experimental design, the participants were split into three groups. Each group watched only one version of the animated face. The participants were recruited at three technical universities and one IT company in Germany. Sixty-seven responses were returned, of which six were excluded because the control question was not answered correctly, and one person was excluded for watching none of the videos. This resulted in 60 complete surveys for our analysis. After exclusion of the mentioned participants, the first group contained 21 participants (face 1 condition), the second group 19 (face 2 condition), and the third group 20 (face 3 condition). Fig. 1: The faces in the neutral position: from left to right, face 1, face 2, and face 3. 43.3% of the participants were female and 56.7% male. Overall, the participants were rather young (M=28.0 years; SD=8.51 years). 46.7% had at least one university degree and 91.7% had at least a higher education entrance qualification. The sequence of the steps of the online survey is presented in Fig. 2. Each participant first watched the robot alone (with its display turned off, cf. Fig. 3, left). One video showed the system when something was bought, the other video showed it when nothing was bought. This covered the whole use case of the robot. Afterwards, the participants were asked to rate the robot with the questionnaire. In the next step, each test group watched videos of the face corresponding to their group displaying the three emotions mentioned above. They were asked to identify the emotions and rate their appropriateness. Afterwards, an image of the face in the neutral position was shown and the participants were asked to rate the overall perception of the face on the questionnaire. Finally, each group watched the robot in combination with the respective face shown on the screen in the same situations as in the beginning, cf. Fig. 3, right. After that, they rated this combination of robot and face by filling in the German questionnaire once more.1 Footnote 1: The results of the study itself will be reported at a later time. The combination of the data of the three groups resulted in a total of 180 ratings, of which 60 were ratings of the robot alone, 21 of face 1 alone, 19 of face 2 alone, 20 of face 3 alone, and equivalent amounts of ratings for the combinations of each face with the robot. For validation of the questionnaire's translation, 20 of the 60 ratings of the robot were chosen at random so that each condition provided a similar amount of data. 
Thus, in total 140 data sets are included in the reliability testing. ### _Reliability analysis_ With the resulting data set, Cronbach's alpha [8] was determined for each of the indices, and also under exclusion of single question items, to check for reliability. In addition, an exploratory factor analysis and a confirmatory factor analysis were performed to identify any severe issues appearing in the translation. All analyses were performed using Jamovi [11]. Cronbach's alpha indicated a good reliability for all the indices. Humanlikeness had a value of 0.901, eeriness a value of 0.807, and attractiveness a value of 0.880 (see Table IV). Cronbach's alpha of the humanlikeness index under exclusion of the German translation of the pair "without definite lifetime"-"mortal" was slightly higher (\(0.916>0.901\)). An analysis revealed that this pair showed a negative correlation with the other items in the data sets of the robot. Due to its design, the robot seemed to appear fragile to the participants, which they might have interpreted as mortal. This needs to be considered in further use of the German version of the survey. ### _Exploratory analysis_ The exploratory factor analysis showed that some items correlated with the factor assigned to the attractiveness items (see Table V, items marked with an asterisk). Fig. 2: The experimental procedure. Fig. 3: Two exemplary frames of the videos watched by the participants: left, robot without a face, and right, robot with face 2. The item "gewöhnlich"-"ungewöhnlich" had a strong bias and also loaded on the humanlikeness factor. As this issue was already mentioned for the original questionnaire [5], it is not considered an issue of the German translation. The uniqueness values of the items in the humanlikeness index are relatively low. This indicates that they are strongly correlated with each other. One exception is the item "mit unbegrenzter Lebenszeit"-"sterblich", which can be explained by its negative correlation with the other items in the ratings of the robot and supports the exclusion of this item. Most items of the eeriness index have a relatively high uniqueness, as can be derived from their low correlations with each other. However, their common variance is still a little less than 0.5. The low correlations can be explained by the two sub-factors contained within the set of eeriness items. Since the items only load on one of the two sub-factors, but were summarized in this exploratory factor analysis, high values of uniqueness result. The uniqueness values of the attractiveness index are slightly higher than those of the humanlikeness items. Still, they are all below 0.5, which supports that the items are well correlated with each other. ### _Confirmatory analysis_ The confirmatory factor analysis confirmed that all items loaded significantly on their factor (\(p<0.01\)), cf. Table VI. The eeriness factor had no significant covariance with the other factors (Eeriness-Humanlikeness \(p=0.541\); Eeriness-Attractiveness \(p=0.443\)), but the humanlikeness factor significantly correlated with the attractiveness factor (\(p<0.01\)), cf. Table VII. This was already the case for the English source version [5]. However, since the translation of the attractiveness factor was not used for the analysis in our work and therefore was not validated by a back translator, a validation and a check for this covariance in larger sample sets are still advisable. 
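The analyses above were run in Jamovi [11]. For readers who want to reproduce the index-level reliability figures, the following minimal Python sketch (the ratings matrix and helper names are hypothetical, not from the paper) implements Cronbach's alpha and the alpha-if-item-deleted variant underlying Table IV.

```python
# A minimal sketch of the reliability computation (the study used Jamovi [11]);
# `items` is a hypothetical (140 x n_items) matrix of one index's ratings.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a ratings matrix (rows: responses, cols: items)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the sum score
    return k / (k - 1) * (1.0 - item_vars / total_var)

def alpha_if_item_deleted(items: np.ndarray) -> list[float]:
    """Alpha under exclusion of each single item, as reported in Table IV."""
    k = items.shape[1]
    return [cronbach_alpha(np.delete(items, j, axis=1)) for j in range(k)]
```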
## IV Conclusion Overall, the analysis showed a high reliability of the question items within the translation. The translation of the pair "without definite lifetime"-"mortal" may need to be excluded when fragile robots are to be evaluated, due to culture-specific differences in interpretation. The attractiveness index needs to be back translated for validation, and the covariance with humanlikeness should be checked with a broader range of samples and more participants. Despite its inconspicuousness in the analysis, the word "weird" may be better translated as "seltsam" to avoid a double meaning. With a total of 60 participants, the explanatory power of the results is still limited. However, the results show that a German translation of the original questionnaire is possible and can give meaningful results regarding the humanlikeness and eeriness of robots as perceived by German speakers.
2310.14991
Deterministic Impartial Selection with Weights
In the impartial selection problem, a subset of agents up to a fixed size $k$ among a group of $n$ is to be chosen based on votes cast by the agents themselves. A selection mechanism is impartial if no agent can influence its own chance of being selected by changing its vote. It is $\alpha$-optimal if, for every instance, the ratio between the votes received by the selected subset and the votes received by the subset of size $k$ with the highest number of votes is at least $\alpha$. We study deterministic impartial mechanisms in a more general setting with arbitrarily weighted votes and provide the first approximation guarantee, roughly $1/\lceil 2n/k\rceil$. When the number of agents to select is large enough compared to the total number of agents, this yields an improvement on the previously best known approximation ratio of $1/k$ for the unweighted setting. We further show that our mechanism can be adapted to the impartial assignment problem, in which multiple sets of up to $k$ agents are to be selected, with a loss in the approximation ratio of $1/2$.
Javier Cembrano, Svenja M. Griesbach, Maximilian J. Stahlberg
2023-10-23T14:44:44Z
http://arxiv.org/abs/2310.14991v2
# Deterministic Impartial Selection with Weights ###### Abstract In the impartial selection problem, a subset of agents up to a fixed size \(k\) among a group of \(n\) is to be chosen based on votes cast by the agents themselves. A selection mechanism is _impartial_ if no agent can influence its own chance of being selected by changing its vote. It is _\(\alpha\)-optimal_ if, for every instance, the ratio between the votes received by the selected subset is at least a fraction of \(\alpha\) of the votes received by the subset of size \(k\) with the highest number of votes. We study deterministic impartial mechanisms in a more general setting with arbitrarily weighted votes and provide the first approximation guarantee, roughly \(1/\lceil 2n/k\rceil\). When the number of agents to select is large enough compared to the total number of agents, this yields an improvement on the previously best known approximation ratio of \(1/k\) for the unweighted setting. We further show that our mechanism can be adapted to the impartial assignment problem, in which multiple sets of up to \(k\) agents are to be selected, with a loss in the approximation ratio of \(1/2\). ## 1 Introduction Votes and referrals are a key mechanism in the self-organization of communities: political parties elect their representatives, researchers review and rate each other's manuscripts, and hyperlinks on the web attribute topical relevance to an external resource. Oftentimes, the agents who give the recommendations are themselves interested in being within a top-rated fraction of their group: to occupy a prestigious position, be invited to a conference, or to have a website appear more prominently in search results. Objectives like these provide an incentive to deviate from a fair evaluation of one's peers. In particular, agents might omit a recommendation for an immediate contender in order to be ranked above them when the votes are counted. In a seminal work, Alon et al. [1] initiated the search for impartial mechanisms to aggregate the votes cast by \(n\) agents who want to elect \(k\) individuals among them, which we refer to as the exact \((n,k)\)-selection problem. The authors require that no agent is able to influence their own chance of being selected by adjusting the subset of peers that they vote for, while, at the same time, the agents selected by the mechanism should receive an expected sum of votes that is close to that of the highest voted subset of size \(k\). We refer to the first condition as _impartiality_ and to the second as _\(\alpha\)-optimality_, where \(\alpha\in[0,1]\) denotes the performance guarantee. If the mechanism is allowed to make use of random choice and agents may vote for any subset of their peers, then the best known performance guarantee is \(\frac{k}{k+1}\left(1-\left(\frac{k-1}{k}\right)^{k+1}\right)\), which gives \(1/2\) for the selection of a single agent and approaches \(1-1/e\) as \(k\rightarrow\infty\)[4]. It is further known that no impartial mechanism can be better than \(k/(k+1)\)-optimal, which is tight only for \(k=1\). We discuss variants with a limited number of votes per participant as related work. The problem only becomes more difficult in the deterministic setting, where the mechanism is forced to choose one agent over another even for highly symmetric input. The instance in which two agents vote for each other and one of them shall be selected requires the mechanism to break the tie, based on an external preference list, in favor of one of the agents. 
Impartiality demands that the same agent must be selected also when the other agent withdraws its vote. But then, an agent with no votes is selected, even though the other agent still receives one. This yields a performance guarantee of zero for the selection of a single agent in the worst case. Even for \(k>1\), no positive performance guarantee is possible [1], unless, surprisingly, the mechanism is allowed to select fewer than \(k\) agents in some instances. In this case an algorithm achieving \(\alpha=1/k\) is known [4]. We refer to this relaxation as the _inexact_\((n,k)\)-selection problem. Since this insight, the gap towards the best known upper bound, which is \((k-1)/k\) in the inexact selection setting, has remained remarkably wide. More generally, the selection problem allows for votes to be weighted: one then compares the total weight of the selected agents to that of the maximum-weight subset of size \(k\). In a peer review setting, reviewers are often asked to rate the manuscript under consideration on a point scale that ranges from a recommendation to reject to a claim of excellence. An editor or program chair would then aggregate these scores and accept a limited number of highly rated submissions. While the established rule to disclose any conflicts of interest protects, if obeyed, against abuse based on personal ties, authors whose papers are on the verge of selection might still profit from giving ratings below their honest estimate, unless the selection mechanism is impartial. In this setting, although computational studies have been made [2], no deterministic mechanism providing a worst-case guarantee was known to date. ### Our Contribution We propose a deterministic impartial mechanism that can be applied in the weighted setting and which achieves a performance guarantee of \(1/\lceil 2n/k\rceil\), for \(k\geq 2\sqrt{n}\) even, and \((k-1)/(k\lceil 2n/(k-1)\rceil)\), for \(k\geq 2\sqrt{n}+1\) odd. In particular, it achieves asymptotically a guarantee of \(\alpha=1/4\) for selecting at most half and \(\alpha=1/3\) for selecting at most two thirds of the agents. These are the first lower bounds for deterministic selection with weights. In its applicable range, the mechanism further improves upon the previous best bound of \(1/k\) in the unweighted setting. The improvement is most noticeable when \(k\) is large, where the gap between the previously best known lower and upper bounds of \(1/k\) and \((k-1)/k\), respectively, has been widest. The construction is best behaved whenever \(b\coloneqq 2n/k\in\mathbb{N}\) and \(b\leq k/2\in\mathbb{N}\): here a guarantee of \(\alpha=1/b\) is provided and the analysis of the mechanism is tight. The mechanism uses a well-structured set of partitions of the agents, whose existence we study in Section 3 using a connection to hypergraph theory and graph coloring. The mechanism itself and the proof of the approximation guarantee are presented in Section 4. In Section 5, we show how the mechanism can be adapted to assign agents to multiple size-limited subsets, which may represent tasks to distribute or committees to form. In this setting, we lose only a factor of \(1/2\) in the performance guarantee, independent of the number of subsets. ### Related Work Impartiality as a desirable axiom in multi-agent problems was introduced by De Clippel et al. [11] and was first studied in the context of peer selection in parallel by both Holzman and Moulin [15] and Alon et al. 
[1]: The work by Holzman and Moulin studied the existence of impartial mechanisms satisfying further axioms such as unanimity and notions of monotonicity, while the research by Alon et al. showed that no deterministic impartial mechanism aiming to select exactly \(k\) agents can achieve any constant approximation ratio. In response, Bjelde et al. [4] showed that when fewer than \(k\) agents may be selected, \(1/k\)-optimality is guaranteed by the _bidirectional permutation_ mechanism, which picks either one or two agents, depending on the instance. The authors further proved an upper bound of \((k-1)/k\) for any deterministic impartial mechanism. Continuing the axiomatic line, Tamura and Ohseto [24] studied \(k\)-selection in the single-nomination setting and showed that impartiality is compatible with two natural notions of unanimity. Their mechanism was extended to the case of a higher, but constant, maximum number of nominations by Cembrano et al. [9]. Further, Aziz et al. [2] proposed a mechanism satisfying certain monotonicity properties and confirmed its performance in a computational study. Several works have focused on randomized impartial selection. Alon et al. proposed a family of mechanisms based on a random partition of the agents that yield the first lower bounds on the approximation ratio for this setting, namely \(1/4\) for \(k=1\) and \(1-O(1/\sqrt[3]{k})\) for general \(k\). They also provided respective upper bounds of \(1/2\) and \(1-\Omega(1/k^{2})\). Fischer and Klimm [14] closed the gap for \(k=1\) by giving a \(1/2\)-approximation algorithm. Bousquet et al. [5] designed a mechanism with an approximation guarantee that goes to one as the maximum score of an agent goes to infinity. A restricted variant of particular importance, first studied in the work of Holzman and Moulin, arises when each agent can vote for exactly one other agent. Here, Fischer and Klimm provided both lower and upper bounds which were later improved by Cembrano et al. [10]. A setting closely related to the impartial selection of \(k\) agents is that of _peer review_ in which, in contrast to the classic \(k\)-selection problem, the votes are weighted and represent a score assigned to a submission. Kurokawa et al. [18] studied a model where first a limited number of weighted votes is sampled and then the selection is performed. The authors proposed an impartial randomized mechanism providing a constant approximation ratio with respect to the (non-impartial) mechanism that randomly samples the votes and selects the best possible set of \(k\) agents given these votes. Mattei et al. [21] studied this problem from an axiomatic and experimental point of view, while Lev et al. [19] extended this work to the setting with noisy assessments. Dhull et al. [12] explored the scope and limitations of partition-based mechanisms for peer review in terms of approximating the selection of the best \(k\) papers. Beyond multiplicative approximation, some works have studied the scope and limitations of impartial mechanisms in terms of additive guarantees [6, 7, 8] and additional economic axioms [13, 20]. Impartiality has also been considered for the selection of agents where preferences come from correlated types [22], for the selection of vertices in graphs with maximal progeny [3, 26, 27], and for generating social rankings of agents who rank each other [16]. For a survey on incentive handling in peer mechanisms, see Olckers and Walsh [23]. 
## 2 Preliminaries For \(n\in\mathbb{N}\coloneqq\mathbb{Z}_{\geq 1}\), we define the ranges \([n]\coloneqq\{1,\ldots,n\}\) and \(\left[n\right]_{0}\coloneqq\{0,\ldots,n-1\}\) and we write \(\mathcal{A}_{n}\) for the set of non-negative \(n\times n\) matrices with zero diagonal. An instance of the weighted selection problem is fully described by an integer \(k\) and a weight matrix \(A\in\mathcal{A}_{n}\), where \(k\) is the number of agents to be selected and \(A_{ij}\) corresponds to the weight of the vote that agent \(i\) casts for agent \(j\). For \(A\in\mathcal{A}_{n}\), we write \(A_{-i}\) for the matrix obtained when removing the \(i\)-th row of \(A\). Given \(A\in\mathcal{A}_{n}\) and \(R,S\subseteq[n]\), we write \[\sigma_{R}(S;A)\coloneqq\sum_{i\in R,\ j\in S}A_{ij}\] for the score of the agents in \(S\) limited to \(R\), and \(\sigma(S;A)\) short for \(\sigma_{[n]}(S;A)\). We omit the weight matrix \(A\) whenever it is clear from the context and we write \(j\) short for \(S=\{j\}\) in the above definitions. Let \(n,k\in\mathbb{N}\) with \(k<n\) in the following. For \(A\in\mathcal{A}_{n}\), we let \[\textsc{Opt}_{k}(A)\coloneqq\operatorname*{arg\,max}_{S\subseteq[n]:\ |S|=k} \sigma(S;A)\] denote an arbitrary set with the largest score among vertex subsets of size \(k\). We write just \(\textsc{Opt}_{k}\) when the weight matrix is clear. An \((n,k)\)-selection mechanism is a function \(f\colon\mathcal{A}_{n}\to 2^{[n]}\) such that \(|f(A)|\leq k\) for every \(A\in\mathcal{A}_{n}\). Such a mechanism is _impartial_ if, for every pair of instances \(A,A^{\prime}\in\mathcal{A}_{n}\) and for all \(i\in[n]\) such that \(A_{-i}=A^{\prime}_{-i}\), it holds that \(f(A)\cap\{i\}=f(A^{\prime})\cap\{i\}\). We further call an \((n,k)\)-selection mechanism \(\alpha\)_-optimal_ if \[\frac{\sigma(f(A);A)}{\sigma(\textsc{Opt}_{k}(A);A)}\geq\alpha\] holds for all \(A\in\mathcal{A}_{n}\) and some \(\alpha\in[0,1]\). We write \(E\mathbin{\dot{\cup}}F\) for the disjoint union of sets \(E\) and \(F\). For a multiset \(E\), we write \(\mu_{E}(e)\) for the multiplicity of \(e\in E\) and \(\mu(E)\) for the cardinality of \(E\). A hypergraph is a pair \(H=(V,E)\) where \(V\) is a finite set of _vertices_ and where \(E\subseteq 2^{V}\) is a multiset of _(hyper-)edges_. We say that \(H\) is _\(d\)-regular_ if each vertex is contained in exactly \(d\) edges, i.e., \(\mu\left(\{e\in E\mid v\in e\}\right)=d\) for all \(v\in V\); _\(b\)-uniform_ if each edge contains exactly \(b\) vertices, i.e., \(|e|=b\) for all \(e\in E\); and _linear_ if two distinct edges intersect in at most one vertex, i.e., \(|e_{1}\cap e_{2}|\leq 1\) for all \(e_{1},e_{2}\in E\) with \(\mu_{E}(e_{1})>1\) or \(e_{1}\neq e_{2}\). The _dual_ of \(H\) is \(H^{*}=(E,X)\) where \(X\coloneqq\{\{e\in E\mid v\in e\}\mid v\in V\}\) is a multiset of sets. One may think of the dual graph in terms of the vertex-edge incidence matrix, which is transposed when taking the dual graph. Note that the dual graph may have repeated edges and loops even if the original graph does not have either. We call a \(2\)-uniform hypergraph without repeated edges a (simple) graph. For a graph \(G=(V,E)\), an edge \(b\)-coloring is a mapping \(\pi\colon E\to[b]\). It is _feasible_ if \(\pi(e_{1})\neq\pi(e_{2})\) for all distinct \(e_{1},e_{2}\in E\) with \(e_{1}\cap e_{2}\neq\emptyset\). Likewise, a vertex \(b\)-coloring is a mapping \(\pi\colon V\to[b]\) that we call feasible if \(\pi(u)\neq\pi(v)\) for all distinct \(u,v\in V\) such that \(u,v\in e\) for some \(e\in E\). 
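To make the preceding definitions concrete, the following Python sketch (helper names are ours and purely illustrative) gives an executable, brute-force reference for \(\sigma_{R}(S;A)\) and \(\textsc{Opt}_{k}(A)\); like the definition, it returns an arbitrary maximizer in case of ties.

```python
# A small sketch of the objects defined above (hypothetical helper names):
# sigma computes sigma_R(S; A), and opt_k enumerates Opt_k(A) by brute force.
from itertools import combinations
import numpy as np

def sigma(A: np.ndarray, S, R=None) -> float:
    """sigma_R(S; A): total weight of votes from agents in R to agents in S."""
    n = A.shape[0]
    R = range(n) if R is None else R
    return float(sum(A[i, j] for i in R for j in S))

def opt_k(A: np.ndarray, k: int):
    """A set of size k with the largest score (exponential-time reference)."""
    n = A.shape[0]
    return max(combinations(range(n), k), key=lambda S: sigma(A, S))

# Example: two agents voting for each other, one agent to select.
A = np.array([[0.0, 1.0], [1.0, 0.0]])
print(opt_k(A, 1), sigma(A, opt_k(A, 1)))  # e.g. (0,) with score 1.0
```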
## 3 Partition Systems The present work takes inspiration from the _partition mechanism_. This mechanism was first proposed by Alon et al. [1] for the setting of randomized \((n,1)\)-selection, and variants for selecting more than one agent have been studied by Bjelde et al. [4], Aziz et al. [2], and Xu et al. [25]. In its original formulation due to Alon et al., the partition mechanism assigns each agent to either a _voter set_\(S_{1}\) or a _candidate set_\(S_{2}\) uniformly at random. It then considers only votes from agents in \(S_{1}\) to agents in \(S_{2}\) and selects an agent from \(S_{2}\) with maximum score. This mechanism is impartial as it considers only votes of agents with no chance of being selected and it is \(1/4\)-optimal, intuitively, as we see every fourth vote in expectation. The \((n,k)\)-selection variant by Bjelde et al. [4] partitions the agents into \(k\) sets instead of two and selects from each set one agent that has the highest score from all other sets, additionally considering internal votes that are directed from left to right according to a random permutation of the agents. This variant preserves impartiality and provides a guarantee that varies from \(1/2\) to \(1-1/e\) as \(k\) grows from \(1\) to infinity. The partition mechanism, although achieving a good ratio when randomization is possible, performs poorly in the deterministic setting. If agents are assigned in any fixed way, votes may be adversarially placed between agents in the same set (and opposite to the order given by the permutation of the agents if such a step is considered), so that the mechanism cannot do any better, in the worst case, than selecting agents with no votes, while the maximum score may be arbitrarily high. In the following, we build the foundation for a partition-based \((n,k)\)-selection mechanism that is robust against such adversarial placement of votes. To achieve this, agents appear in the candidate set of more than one partition and with a disjoint set of contenders each time. This way, votes not seen for a candidate agent in one partition will be seen in another partition wherein that agent re-appears as a candidate. Of course, repeated candidacy may lead to the same agent being selected multiple times, at the expense of contenders with a high number of votes. To minimize this possibility, we let every agent contest just twice and we remove duplicate votes. As our goal is to select up to \(k\) agents, we define \(k\) such partitions. For now, we also make the simplifying assumption that \(n\) and \(k\) allow the candidate sets to have equal size \(b\). This is without loss of generality as we may fill smaller partitions with dummy agents who cast and receive no votes and are disfavored when breaking ties. We call a collection of partitions meeting these requirements a _balanced partition system_. A partition into voters and candidates is fully described by either set. A balanced partition system may thus be written as a family \(E\) of candidate subsets of the set of agents \(V\) or, in other words, as a hypergraph \(H=(V,E)\) without repeated edges, where each \(e\in E\) is the candidate set of a single partition. To fulfill the requirements of a balanced partition system, \(H\) has to be \(2\)-regular, so that every agent appears in exactly two candidate sets, and \(b\)-uniform, so that all candidate sets \(e\in E\) have the same size \(|e|=b\). 
The remaining requirement that no two agents compete twice against each other, formally \(|e_{1}\cap e_{2}|\leq 1\) for all \(e_{1},e_{2}\in E\) with \(e_{1}\neq e_{2}\), translates to \(H\) being linear. The following lemma implies that we can represent a partition system further by a simple graph.

**Lemma 3.1**.: _A hypergraph is \(2\)-regular and linear if and only if its dual is a simple graph._

Proof.: Let \(H=(V,E)\) be a \(2\)-regular and linear hypergraph. Its dual graph is \(H^{*}=(E,X)\) where \(X=\{\{e\in E\mid v\in e\}\mid v\in V\}\) is a multiset of sets. We show that \(H^{*}\) is a graph, i.e., that \(H^{*}\) is \(2\)-uniform and has no repeated edges. Let \(x\in X\). Then, \(x=\{e\in E\mid v\in e\}\) for some \(v\in V\). Since \(H\) is \(2\)-regular, we have \(|x|=2\) for all \(x\in X\), so \(H^{*}\) is \(2\)-uniform. It remains to show that \(H^{*}\) has no repeated edges. Assume towards a contradiction that there is an \(x\in X\) with multiplicity at least two. Then, there are \(v_{1}\neq v_{2}\in V\) with \(x=\{e\in E\mid v_{1}\in e\}=\{e\in E\mid v_{2}\in e\}\). Since \(H\) is \(2\)-regular, again \(|x|=2\) holds. Let thus \(e_{1}\neq e_{2}\in E\) with \(v_{1},v_{2}\in e_{1}\) and \(v_{1},v_{2}\in e_{2}\). Then, \(\{v_{1},v_{2}\}\subseteq e_{1}\cap e_{2}\), hence \(|e_{1}\cap e_{2}|\geq 2\) contradicts that \(H\) is linear.

Let next \(G=(V,E)\) be a graph. Its dual graph is a hypergraph \(G^{*}=(E,X)\) with \(X\) defined as before. We show that \(G^{*}\) is \(2\)-regular and linear. To this end, let \(e\in E\) be a vertex of \(G^{*}\). Since \(G\) is \(2\)-uniform, it is \(e=\{v_{1},v_{2}\}\) with \(v_{1}\neq v_{2}\) and thus

\[\deg_{G^{*}}(e) =\mu\left(\{x\in X\mid e\in x\}\right)\]
\[=\mu\left(\{x\in\{\{f\in E\mid v\in f\}\mid v\in V\}\mid e\in x\}\right)\]
\[=\mu\left(\{\{f\in E\mid v\in f\}\mid v\in V\wedge e\in\{f\in E\mid v\in f\}\}\right)\]
\[=\mu\left(\{\{f\in E\mid v\in f\}\mid v\in V\wedge v\in e\}\right)\]
\[=\mu\left(\{\{f\in E\mid v_{1}\in f\},\{f\in E\mid v_{2}\in f\}\}\right)=2,\]

so \(G^{*}\) is \(2\)-regular. Finally, assume towards a contradiction that \(G^{*}\) is not linear. Then, there are \(v_{1}\neq v_{2}\in V\) such that \(x_{1}\coloneqq\{e\in E\mid v_{1}\in e\}\in X\) and \(x_{2}\coloneqq\{e\in E\mid v_{2}\in e\}\in X\), possibly \(x_{1}=x_{2}\) as \(X\) is a multiset, and \(|x_{1}\cap x_{2}|\geq 2\). Since \(G\) is simple, \(E\) is a set, so there are \(e_{1}\neq e_{2}\in E\) with \(v_{1},v_{2}\in e_{1}\) and \(v_{1},v_{2}\in e_{2}\). Since \(G\) is \(2\)-uniform, \(e_{1}=\{v_{1},v_{2}\}=e_{2}\), a contradiction to \(G\) being simple. 

By Lemma 3.1 and the fact that order and size as well as degree and rank are dual for hypergraphs, there is a one-to-one correspondence between balanced partition systems where \(n\) agents are distributed among \(k\) candidate sets of size \(b\) on the one hand, and \(b\)-regular simple graphs of order \(k\) and size \(n\) on the other hand. In the simple graph representation, edges correspond to agents while incident vertices correspond to candidate sets that the agents appear in.

In the analysis of the mechanism, we will bound the weight selected by it by that of a subset \(U\) of top-voted agents that pairwise do not compete. More precisely, \(U\) will be a set of maximum weight among a partition of the \(k\) top-voted agents into \(b\) many subsets with this property. 
If the mechanism does not select some agent \(i\) from \(U\), then this is only because it makes up for that agent's score in the two partitions in which agent \(i\) appears as a candidate, and these are pairwise disjoint for the agents in \(U\). This leads to a lower bound of \((k/b)/k=1/b\), stated in Lemma 4.2. To ensure the existence of \(b\) such sets, we require that any subgraph of \(H\) induced by \(k\) vertices can be partitioned into \(b\) many (internally) independent sets. We call a balanced partition system whose corresponding hypergraph has this property _robust_. In terms of the \(b\)-regular dual graph \(G\coloneqq H^{*}\), the condition is equivalent to the existence of an edge coloring with \(b\) colors for every subgraph induced by \(k\) edges: the edges of any one color do not share a vertex, which corresponds to vertices not sharing a hyperedge in \(H\). By Kőnig's line coloring theorem [17], a sufficient condition for such a coloring to exist is that \(G\) is bipartite. The proofs of Lemmas 3.3 and 4.2 will formalize these ideas.

Bipartite and \(b\)-regular graphs of even order \(k\) and size \(n\) exist for all \(b=2n/k\) with \(b\leq k/2\). A simple construction is depicted in Figure 1 and described by the following lemma.

**Lemma 3.2**.: _Let \(b,k,n\in\mathbb{N}\) with \(k^{\prime}\coloneqq k/2\in\mathbb{N}\) and \(b=2n/k\leq k^{\prime}\). Then, \(G=(V,E)\) with \(V\coloneqq\left[k\right]_{0}\) and \(E\coloneqq\left\{\left\{i,k^{\prime}+\left(\left(i+\ell\right)\bmod k^{\prime}\right)\right\}\mid i\in\left[k^{\prime}\right]_{0},\,\ell\in\left[b\right]_{0}\right\}\) is a \(b\)-regular bipartite graph of order \(k\) and size \(n\)._

Proof.: The order is given by \(\left|V\right|=k\). To see that \(G\) is bipartite, let \(\left\{u,v\right\}\in E\). Then, \(\left\{u,v\right\}=\left\{i,j\right\}\) with \(j\coloneqq k^{\prime}+\left(\left(i+\ell\right)\bmod k^{\prime}\right)\) for some \(i\in\left[k^{\prime}\right]_{0}\) and \(\ell\in\left[b\right]_{0}\). It follows from

\[i<k^{\prime}\leq k^{\prime}+\left(\left(i+\ell\right)\bmod k^{\prime}\right)=j \tag{1}\]

that \(u<k^{\prime}\) if and only if \(v\geq k^{\prime}\). Hence, a bipartition of the vertices is given by \(V=V_{1}\mathbin{\dot{\cup}}V_{2}\) with \(V_{1}=\left[k^{\prime}\right]_{0}\) and \(V_{2}=\left[k\right]_{0}\setminus\left[k^{\prime}\right]_{0}\). To prove that the size of \(G\) is indeed \(n\), we first show that \(f\colon\left[k^{\prime}\right]_{0}\times\left[b\right]_{0}\to V\times V\) with

\[f(i,\ell)\coloneqq\left(i,k^{\prime}+\left(\left(i+\ell\right)\bmod k^{\prime}\right)\right)\]

is injective. To this end, let \(f(i,\ell)=f(i^{\prime},\ell^{\prime})\) for some \((i,\ell),(i^{\prime},\ell^{\prime})\in\left[k^{\prime}\right]_{0}\times\left[b\right]_{0}\). Clearly, it is \(i=i^{\prime}\), so it remains to show that \(\ell=\ell^{\prime}\). This is the case as

\[k^{\prime}+\left(\left(i+\ell\right)\bmod k^{\prime}\right) =k^{\prime}+\left(\left(i+\ell^{\prime}\right)\bmod k^{\prime}\right) \tag{2}\]
\[\iff i+\ell =i+\ell^{\prime}\pmod{k^{\prime}}\]
\[\iff \ell =\ell^{\prime}\pmod{k^{\prime}}\]

which holds since \(0\leq\ell,\ell^{\prime}<b\leq k^{\prime}\). From inequality (1), it follows that also \(f^{\prime}\colon\left[k^{\prime}\right]_{0}\times\left[b\right]_{0}\to E\) with \(f^{\prime}(i,\ell)=\left\{f(i,\ell)_{1},f(i,\ell)_{2}\right\}\) is injective, so \(\left|E\right|=k^{\prime}b=kb/2=n\) as required. 
For the degree, we consider the two sides \(V_{1}\) and \(V_{2}\) of the bipartition separately. For \(i\in V_{1}\), let \(f_{i}\colon\left[b\right]_{0}\to\mathbb{N}\) with \(f_{i}(\ell)\coloneqq f(i,\ell)_{2}\) enumerate the neighbors of vertex \(i\) and assume \(f_{i}(\ell)=f_{i}(\ell^{\prime})\) for some \(\ell,\ell^{\prime}\in\left[b\right]_{0}\). As this implies equation (2), we have again that \(\ell=\ell^{\prime}\), so also \(f_{i}\) is injective and \(\deg(i)=b\). For \(j\in V_{2}\), the degree of vertex \(j\) is \(\deg(j)=\left|\left\{\left(i,\ell\right)\in\left[k^{\prime}\right]_{0}\times\left[b\right]_{0}\mid f_{i}(\ell)=j\right\}\right|\). It is

\[f_{i}(\ell) =j\]
\[\iff (i+\ell)\bmod k^{\prime} =j-k^{\prime}\]
\[\iff i =j-k^{\prime}-\ell\pmod{k^{\prime}} \tag{3}\]

where the last equivalence follows from \(0\leq j-k^{\prime}<k^{\prime}\). Equation (3) has a unique solution \(i\in\left[k^{\prime}\right]_{0}\) for any fixed \(\ell\in\left[b\right]_{0}\), so \(\deg(j)=b\) and \(G\) is \(b\)-regular. 

We condense the findings of this section in the following lemma.

**Lemma 3.3**.: _Let \(n,k\in\mathbb{N}\) with \(k<n\) be such that \(b\coloneqq 2n/k\in\mathbb{N}\) and \(b\leq k/2\in\mathbb{N}\). Let further \(V\) with \(|V|=n\) denote a set of agents. Then, one may form \(k\) partitions \(S_{1}^{p}\cup S_{2}^{p}=V\), \(p\in[k]\), such that_

1. \(|S_{2}^{p}|=b\) _for all_ \(p\in[k]\)_,_
2. \(|S_{2}^{p}\cap S_{2}^{q}|\leq 1\) _for all_ \(p,q\in[k]\) _with_ \(p\neq q\)_,_
3. \(|\{p\in[k]\mid v\in S_{2}^{p}\}|=2\) _for all_ \(v\in V\)_, and_
4. _for every_ \(U\subseteq V\)_, there is a partition_ \(\dot{\bigcup}_{t\in[b]}U_{t}=U\) _with_ \(u\in S_{2}^{p}\Rightarrow v\not\in S_{2}^{p}\) _for all_ \(t\in[b]\)_,_ \(u,v\in U_{t}\) _with_ \(u\neq v\)_, and_ \(p\in[k]\)_._

Proof.: For \(n\), \(k\), and \(b\) as in the statement, Lemma 3.2 guarantees the existence of a \(b\)-regular bipartite graph \(G=(X,V)\) of order \(|X|=k\) and size \(|V|=n\). Let \(H\coloneqq G^{*}=(V,E)\) be its dual graph. Note that \(H\) is \(b\)-uniform and has order \(n\) and size \(k\). By Lemma 3.1, \(H\) is further \(2\)-regular and linear. As \(b\geq 2\) by definition, it follows from linearity that \(H\) has no repeated edges, i.e., \(E\) is a set. We use \(H\) to form a system of partitions of \(V\). First, enumerate \(E\) by an arbitrary but fixed bijection \(\phi\colon[k]\to E\). Then, for every \(p\in[k]\), define a candidate set \(S_{2}^{p}\coloneqq\phi(p)\) and the associated voter set \(S_{1}^{p}\coloneqq V\setminus\phi(p)\). As \(H\) is \(b\)-uniform, we have (i) by construction. As it is linear, (ii) follows. Since \(H\) is \(2\)-regular, also (iii) holds. It remains to show property (iv). By Kőnig's line coloring theorem [17], there exists a feasible edge \(b\)-coloring \(\pi\colon V\to[b]\) of \(G\). Let \(G^{\prime}\) be the subgraph of \(G\) induced by an edge set \(U\subseteq V\). Clearly, \(\pi\) restricted to \(U\) remains a feasible edge \(b\)-coloring. The dual \(H^{\prime}\coloneqq\left(G^{\prime}\right)^{*}\) is the subgraph of \(H\) induced by the vertex set \(U\). In terms of \(H^{\prime}\), \(\pi\) assigns colors to vertices. Since \(\pi\) restricted to \(U\) is feasible for \(G^{\prime}\), it follows from vertex-edge duality that vertices in \(H^{\prime}\) are colored differently if they appear in a hyperedge together, i.e., \(\pi\) is a feasible vertex coloring for \(H^{\prime}\). 
Define thus \(U_{t}\coloneqq\{v\in U\mid\pi(v)=t\}\) for each color \(t\in[b]\). Then, the sets \(U_{t}\) are disjoint by definition and \(\dot{\bigcup}_{t\in[b]}U_{t}=U\) as \(\pi(U)\subseteq\pi(V)\subseteq[b]\). Let finally \(t\in[b]\) and \(u,v\in U_{t}\) with \(u\neq v\) and assume towards a contradiction that \(u,v\in S_{2}^{p}\) for some \(p\in[k]\). Then, \(u,v\in\phi(p)\in E\) and \(\pi(u)=t=\pi(v)\) by construction of \(S_{2}^{p}\) and \(U_{t}\), contradicting that \(\pi\) is a feasible vertex coloring for \(H=(V,E)\). 

Formally, we write \(\mathcal{S}(n,k)\) for an arbitrary but fixed sequence \(\left((S_{1}^{p},S_{2}^{p})\right)_{p\in[k]}\) with \(S_{1}^{p}\dot{\cup}\,S_{2}^{p}=[n]\) for every \(p\in[k]\) that fulfills the conditions of Lemma 3.3. We assume for technical reasons that \(S_{2}^{1}=[b]\). A sketch of this construction is given below.

Figure 1: The construction of Lemma 3.2 for \(k=8\) vertices and degree \(b\in[4]\): (a) the \(4P_{2}\) (\(n=4\) edges), (b) the cycle \(C_{8}\) (\(n=8\)), (c) the cube graph \(Q_{3}\) (\(n=12\)), and (d) the complete bipartite graph \(K_{4,4}\) (\(n=16\)). Every edge represents an agent and every vertex corresponds to a partition. A vertex and an edge are incident if the corresponding agent is in the corresponding candidate set. More generally, one obtains the \((k/2)P_{2}\) for \(b=1\), the \(C_{k}\) for \(b=2\), the prism \(Y_{k/2}\) for \(b=3\), and the \(K_{b,b}\) for \(b=k/2\).
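The following sketch makes the construction behind \(\mathcal{S}(n,k)\) explicit: it builds the bipartite graph of Lemma 3.2, takes its dual, and returns the candidate sets. All names are ours, and the enumeration is chosen so that \(S_{2}^{1}=[b]\), matching the technical assumption above.

```python
# A sketch of the balanced partition system S(n, k); requires b = 2n/k and
# k/2 to be integers with b <= k/2. Agents are labelled 1..n.
def partition_system(n, k):
    kp, b = k // 2, 2 * n // k
    assert k % 2 == 0 and 2 * n % k == 0 and b <= kp
    # Edges of the bipartite graph G from Lemma 3.2; each edge is one agent.
    edges = [(i, kp + (i + l) % kp) for i in range(kp) for l in range(b)]
    agent = {e: idx + 1 for idx, e in enumerate(edges)}
    # Dual hypergraph: candidate set S2[p] contains the agents whose edge
    # touches vertex p - 1 of G; voter sets are the complements.
    return {p + 1: {agent[e] for e in edges if p in e} for p in range(k)}

S2 = partition_system(n=16, k=8)                      # the K_{4,4} case of Figure 1
assert S2[1] == {1, 2, 3, 4}                          # S2^1 = [b]
assert all(len(s) == 4 for s in S2.values())          # b-uniform
assert all(sum(a in s for s in S2.values()) == 2 for a in range(1, 17))  # 2-regular
```

For this particular graph, coloring the agent with parameters \((i,\ell)\) by \(\ell\) already gives a feasible edge \(b\)-coloring, so the partition required by property (iv) can be read off directly; this observation is ours and is not needed for the proofs, which only use Kőnig's theorem.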
## 4 Impartial Selection

We are prepared to construct a mechanism that provides the first approximation guarantee for deterministic impartial selection with weighted votes. Our main result is the following.

**Theorem 4.1**.: _Let \(n,k\in\mathbb{N}\) with \(1<k<n\) and \(k-k\bmod 2\geq 2\sqrt{n}\). Then, there exists an \((n,k)\)-selection mechanism that is impartial and \(\alpha\)-optimal with_

\[\alpha=\frac{k-k\bmod 2}{k\left\lceil\frac{2n}{k-k\bmod 2}\right\rceil}.\]

The performance guarantee of Theorem 4.1 is shown in Figure 2. It starts from \(2/k\) for \(k-k\bmod 2=2\sqrt{n}\) and grows up to \(1/3\) for \(k-k\bmod 2\in[2n/3,n-1]\).

Figure 2: The performance guarantee of Theorem 4.1 for permissible \(n\) and \(k\).

The main idea of the algorithm is as follows. We construct a robust partition system of the set of agents, i.e., a set of \(k\) many partitions of the agents into voters and candidates such that each agent appears as a candidate twice and with disjoint sets of contenders. For the second candidacy, we remove votes that are already present in the first candidacy to avoid double-counting. Then, the mechanism selects the top scoring candidate from each partition, possibly selecting some agents twice. This mechanism is impartial as voters and candidates are disjoint in each partition. The performance guarantee stems mainly from the fact that every vote is counted exactly once.

In Section 3, we showed that a robust partition system is guaranteed to exist as long as \(n\) and \(k\) satisfy \(k<n\), \(b\coloneqq 2n/k\in\mathbb{N}\), and \(b\leq k/2\in\mathbb{N}\). In the following, we assume these conditions in order to define and analyze our mechanism; we lift them in the end to obtain the general result stated in Theorem 4.1. Given \(n\) and \(k\) as in Lemma 3.3, our selection mechanism is formally described by Algorithm 1; we refer to it as \(\textsc{Select}_{k}\) and denote its output by \(\textsc{Select}_{k}(A)\) for a given input matrix \(A\in\mathcal{A}_{n}\). The procedure considers a partition system with the properties stated in Lemma 3.3 and performs two main steps. Recall that each agent \(j\in[n]\) appears in two candidate sets; we denote their indices by \(l(j)<r(j)\in[k]\) such that \(j\in S_{2}^{l(j)}\cap S_{2}^{r(j)}\). The mechanism first computes the _modified score_ \(\hat{\sigma}_{S_{1}^{p}}(j)\) for each \(j\in[n]\) and each \(p\in\{l(j),r(j)\}\), which is simply the actual score \(\sigma_{S_{1}^{l(j)}}(j)\) for \(p=l(j)\). For \(p=r(j)\), however, we omit the votes from agents \(i\in S_{1}^{l(j)}\) in order to avoid double counting. The mechanism then selects the vertex with the highest modified score out of each candidate set, breaking ties in favor of the largest index.1 Figure 3 illustrates a possible execution of \(\textsc{Select}_{6}\) on an instance \(A\in\mathcal{A}_{9}\).

Footnote 1: We sometimes compare tuples, for example \((\sigma(j),j)\), in lexicographical order. We use standard inequality signs as well as the min and max operators for this purpose.

Figure 3: Example of \(\textsc{Select}_{6}(A)\) for \(A\in\mathcal{A}_{9}\). The weight matrix \(A\) is shown alongside its graph representation, where edges of weight \(1\) are in blue, weight \(2\) are in orange, weight \(3\) are in red, and edges of weight \(0\) are not included. The partition system is given below, where omitted edges are shown in gray. For each partition, the selected vertex is highlighted in light blue. Observe that \(\sigma(\textsc{Select}_{6}(A))=17\) and \(\sigma(\textsc{Opt}_{6}(A))=27\); the multiplicative guarantee provided by Lemma 4.2 for this instance is \(1/3\).

Throughout this section, whenever \(n\), \(k\), and \(A\in\mathcal{A}_{n}\) are fixed, we write \(((S_{1}^{1},S_{2}^{1}),\dots,(S_{1}^{k},S_{2}^{k}))\), \(l(j)\), \(r(j)\), \(\hat{\sigma}_{S_{1}^{p}}(j)\), \(i^{p}\), and \(X\) for each \(p\in[k]\) and \(j\in[n]\) to refer to the objects defined in \(\textsc{Select}_{k}\). We only specify the input matrix \(A\) as an argument when it is not clear from the context. A minimal sketch of the procedure follows.
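The following sketch captures both steps of \(\textsc{Select}_{k}\); it takes the candidate sets as a dictionary (for instance, the output of the `partition_system` sketch above) and uses our own helper names, so it is an illustration under those assumptions rather than the paper's exact pseudocode.

```python
# A sketch of Select_k (Algorithm 1); agents are labelled 1..n and A is a
# 0-indexed nested list, so agent j corresponds to row/column j - 1.
def select_k(A, S2):
    n = len(A)
    where = {}                                       # the two candidate sets of each agent
    for p, cands in S2.items():
        for j in cands:
            where.setdefault(j, []).append(p)
    l = {j: min(ps) for j, ps in where.items()}
    r = {j: max(ps) for j, ps in where.items()}

    def mod_score(j, p):
        voters = set(range(1, n + 1)) - S2[p]        # S1^p = [n] \ S2^p
        if p == r[j]:
            voters &= S2[l[j]]                       # drop votes already counted at l(j)
        return sum(A[i - 1][j - 1] for i in voters)

    # Highest modified score per candidate set, ties broken by largest index.
    return {max(cands, key=lambda j: (mod_score(j, p), j)) for p, cands in S2.items()}
```

Note that the two modified scores of an agent \(j\) sum to \(\sigma(j)\), which is equality (4) in the proof below.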
The following lemma constitutes the main technical ingredient for the proof of Theorem 4.1.

**Lemma 4.2**.: _Let \(n,k\in\mathbb{N}\) with \(k<n\) be such that \(b\coloneqq 2n/k\in\mathbb{N}\) and \(b\leq k/2\in\mathbb{N}\). Then, \(\textsc{Select}_{k}\) is an impartial and \(1/b\)-optimal \((n,k)\)-selection mechanism._

Proof.: We consider \(n\) and \(k\) as in the statement. We first note that \(\textsc{Select}_{k}\) returns a subset of \([n]\) of size at most \(k\) and is well-defined as we have \(|\{p\in[k]:j\in S_{2}^{p}\}|=2\) for every \(j\in[n]\). The former holds since \(i^{p}\) is a single vertex for every \(p\in[k]\) and \(X=\bigcup_{p\in[k]}\{i^{p}\}\); the latter follows from property (iii) of Lemma 3.3 since \(b\coloneqq 2n/k\in\mathbb{N}\) and \(b\leq k/2\in\mathbb{N}\).

To see that \(\textsc{Select}_{k}\) is impartial, let \(A,A^{\prime}\in\mathcal{A}_{n}\) and \(j\in[n]\) such that \(A_{-j}=A^{\prime}_{-j}\). Suppose \(j\in\textsc{Select}_{k}(A)\). From the definition of the mechanism, we have that there is \(p\in[k]\) such that \(j=\arg\max_{i\in S_{2}^{p}}(\hat{\sigma}_{S_{1}^{p}}(i;A),i)\). Since \(j\in S_{2}^{p}\) and \(A_{-j}=A^{\prime}_{-j}\), we have both that \(\hat{\sigma}_{S_{1}^{p}}(j;A)=\hat{\sigma}_{S_{1}^{p}}(j;A^{\prime})\) and, for every \(i\in S_{2}^{p}\setminus\{j\}\), that \(\hat{\sigma}_{S_{1}^{p}}(i;A)=\hat{\sigma}_{S_{1}^{p}}(i;A^{\prime})\). This yields \(j=\arg\max_{i\in S_{2}^{p}}(\hat{\sigma}_{S_{1}^{p}}(i;A^{\prime}),i)\). Thus, we obtain from the definition of the mechanism that \(j\in\textsc{Select}_{k}(A^{\prime})\). We conclude that \(\textsc{Select}_{k}(A)\cap\{j\}=\textsc{Select}_{k}(A^{\prime})\cap\{j\}\).

It remains to show that \(\textsc{Select}_{k}\) has an approximation ratio of \(1/b\). To this end, we let \(A\in\mathcal{A}_{n}\) be an arbitrary weight matrix. First, observe that

\[\hat{\sigma}_{S_{1}^{l(j)}}(j)+\hat{\sigma}_{S_{1}^{r(j)}}(j)=\sigma_{S_{1}^{l(j)}}(j)+\sigma_{S_{1}^{r(j)}\setminus S_{1}^{l(j)}}(j)=\sigma(j) \tag{4}\]

for every \(j\in[n]\), since property (ii) of Lemma 3.3 implies \(S_{1}^{l(j)}\cup S_{1}^{r(j)}=[n]\setminus\{j\}\). Furthermore, the definition of \(i^{p}\) yields that

\[\hat{\sigma}_{S_{1}^{p}}(i^{p})\geq\hat{\sigma}_{S_{1}^{p}}(j) \tag{5}\]

for every \(p\in[k]\) and \(j\in S_{2}^{p}\). Given these two facts, we claim that

\[\hat{\sigma}_{S_{1}^{l(j)}}(i^{l(j)})+\hat{\sigma}_{S_{1}^{r(j)}}(i^{r(j)})\geq\sigma(j) \tag{6}\]

for every \(j\in[n]\). To see this, we fix \(j\in[n]\). If \(i^{p}=j\) for each \(p\in\{l(j),r(j)\}\), inequality (6) follows immediately from equality (4). If \(|\{j\}\cap\{i^{p}:p\in\{l(j),r(j)\}\}|=1\), say w.l.o.g. \(i^{l(j)}=j\) and \(i^{r(j)}=h\neq j\), we have that

\[\hat{\sigma}_{S_{1}^{r(j)}}(h)\geq\hat{\sigma}_{S_{1}^{r(j)}}(j)=\sigma(j)-\hat{\sigma}_{S_{1}^{l(j)}}(j),\]

where the inequality follows from (5) and the equality from (4). In this case, inequality (6) follows from \(j=i^{l(j)}\) and \(h=i^{r(j)}\). Finally, if \(j\not\in\{i^{p}:p\in\{l(j),r(j)\}\}\), we have from (5) that

\[\hat{\sigma}_{S_{1}^{l(j)}}(i^{l(j)})\geq\hat{\sigma}_{S_{1}^{l(j)}}(j)\quad\text{and}\quad\hat{\sigma}_{S_{1}^{r(j)}}(i^{r(j)})\geq\hat{\sigma}_{S_{1}^{r(j)}}(j)\]

so that inequality (6) follows from summing up these two inequalities and applying equality (4). This concludes the proof of inequality (6).

Letting \(\chi\) denote the indicator function for logical propositions, we note that

\[\sigma(\textsc{Select}_{k}(A)) =\sum_{j\in\textsc{Select}_{k}(A)}\sigma(j)=\sum_{j\in\textsc{Select}_{k}(A)}\left(\hat{\sigma}_{S_{1}^{l(j)}}(j)+\hat{\sigma}_{S_{1}^{r(j)}}(j)\right)\]
\[\geq\sum_{j\in\textsc{Select}_{k}(A)}\left(\hat{\sigma}_{S_{1}^{l(j)}}(j)\chi(j=i^{l(j)})+\hat{\sigma}_{S_{1}^{r(j)}}(j)\chi(j=i^{r(j)})\right)\]
\[=\sum_{p\in[k]}\hat{\sigma}_{S_{1}^{p}}(i^{p}). \tag{7}\]

Indeed, the first equality follows from the definition of \(\textsc{Select}_{k}(A)\), the second one from equality (4), the inequality simply from \(\chi(\cdot)\leq 1\), and the last equality follows from the definition of \(i^{p}\) for each \(p\in[k]\). We next use inequalities (6) and (7) to conclude the bound stated in the lemma. For \(b\coloneqq 2n/k\), we know from property (iv) of Lemma 3.3 that there is a partition \(\dot{\bigcup}_{t\in[b]}U_{t}=\textsc{Opt}_{k}(A)\) such that \(i\in S_{2}^{p}\) implies \(j\not\in S_{2}^{p}\) for all \(t\in[b]\), \(i,j\in U_{t}\) with \(i\neq j\), and \(p\in[k]\). We obtain that, for every \(t\in[b]\),

\[\sigma(\textsc{Select}_{k}(A))\geq\sum_{p\in[k]}\hat{\sigma}_{S_{1}^{p}}(i^{p})\geq\sum_{j\in U_{t}}\left(\hat{\sigma}_{S_{1}^{l(j)}}(i^{l(j)})+\hat{\sigma}_{S_{1}^{r(j)}}(i^{r(j)})\right)\geq\sigma(U_{t}), \tag{8}\]

where the first inequality follows from inequality (7), the second one from the fact that \(\{l(i),r(i)\}\cap\{l(j),r(j)\}=\emptyset\) for every \(t\in[b]\) and every \(i,j\in U_{t}\) with \(i\neq j\), and the last one from inequality (6). 
This yields

\[\sigma(\textsc{Select}_{k}(A))\geq\max_{t\in[b]}\sigma(U_{t})\geq\frac{1}{b}\sum_{t\in[b]}\sigma(U_{t})=\frac{1}{b}\sigma(\textsc{Opt}_{k}(A)).\]

Here, the first inequality follows from (8), the second one from the observation that the maximum of a set of values is at least as large as their average, and the equality from the fact that \(\{U_{t}\}_{t\in[b]}\) is a partition of \(\textsc{Opt}_{k}(A)\). Therefore, we obtain that \(\textsc{Select}_{k}\) is \(\alpha\)-optimal for

\[\frac{\sigma(\textsc{Select}_{k}(A))}{\sigma(\textsc{Opt}_{k}(A))}\geq\frac{1}{b}=\alpha.\qed\]

In order to conclude our main result, it only remains to extend the bound given by Lemma 4.2 to the case where at least one of the conditions \(b\coloneqq 2n/k\in\mathbb{N}\) or \(b\leq k/2\in\mathbb{N}\) is not satisfied. To this end, we show a general way to extend bounds on the approximation ratio for given values of \(\tilde{n}\) and \(\tilde{k}\) to other values \(n\) and \(k\): whenever \(n\leq\tilde{n}\) and \(k\geq\tilde{k}\), we can do so preserving impartiality and only losing a factor of \(\tilde{k}/k\). Given \(k,\tilde{k},\tilde{n},n\in\mathbb{N}\) with \(\tilde{k}\leq k<n\leq\tilde{n}\), and an \((\tilde{n},\tilde{k})\)-selection mechanism Alg, we can generalize Alg to the \((n,k)\)-selection mechanism \(\textsc{Gen}_{\textsc{Alg},k}\). This is formally described by Algorithm 2, whose output is denoted by \(\textsc{Gen}_{\textsc{Alg},k}(A)\) for an input matrix \(A\in\mathcal{A}_{n}\). This algorithm simply extends \(A\) to the \(\tilde{n}\times\tilde{n}\) matrix \(\tilde{A}\) by adding \(\tilde{n}-n\) many all-zero rows and columns to it, and then applies Alg on \(\tilde{A}\). As before, whenever \(\tilde{n}\), \(n\), \(k\), Alg, and \(A\in\mathcal{A}_{n}\) are fixed, we use \(\tilde{A}\) to refer to the object defined in Algorithm 2 for this input. In a slight overload of notation, when we consider \(A^{\prime}\in\mathcal{A}_{n}\) as an input, we write simply \(\tilde{A}^{\prime}\) for the matrix defined in Algorithm 2 on input \(A^{\prime}\). A sketch of this padding step is given below.
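The padding step admits a very short sketch; `alg` stands for any \((\tilde{n},\tilde{k})\)-selection mechanism passed in as a function, the names are ours, and dropping dummy indices from the output is our own conservative normalization, justified because dummy agents receive and cast no votes.

```python
# A sketch of Gen_{Alg,k} (Algorithm 2); names are ours.
def gen(alg, A, n_tilde):
    n = len(A)
    A_tilde = [row + [0] * (n_tilde - n) for row in A]      # pad columns
    A_tilde += [[0] * n_tilde for _ in range(n_tilde - n)]  # pad rows
    return {j for j in alg(A_tilde) if j <= n}              # keep real agents only
```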
We obtain the following lemma.

**Lemma 4.3**.: _Let \(\tilde{k},k,n,\tilde{n}\in\mathbb{N}\) with \(\tilde{k}\leq k<n\leq\tilde{n}\) be such that there exists an impartial and \(\tilde{\alpha}\)-optimal \((\tilde{n},\tilde{k})\)-selection mechanism Alg. Then \(\textsc{Gen}_{\textsc{Alg},k}\) is an impartial and \(\alpha\)-optimal \((n,k)\)-selection mechanism with \(\alpha=(\tilde{k}/k)\tilde{\alpha}\)._

Proof.: Let \(n\), \(k\), \(\tilde{n}\), and \(\tilde{k}\) be as in the statement. Let also Alg denote the impartial and \(\tilde{\alpha}\)-optimal \((\tilde{n},\tilde{k})\)-selection mechanism. In order to see that \(\textsc{Gen}_{\textsc{Alg},k}\) is impartial, let \(A,A^{\prime}\in\mathcal{A}_{n}\) and \(i\in[n]\) such that \(A_{-i}=A^{\prime}_{-i}\). This implies \(\tilde{A}_{-i}=\tilde{A}^{\prime}_{-i}\), thus the impartiality of Alg yields

\[\textsc{Gen}_{\textsc{Alg},k}(A)\cap\{i\}=\textsc{Alg}(\tilde{A})\cap\{i\}=\textsc{Alg}(\tilde{A}^{\prime})\cap\{i\}=\textsc{Gen}_{\textsc{Alg},k}(A^{\prime})\cap\{i\}.\]

To prove the approximation guarantee, we let \(A\in\mathcal{A}_{n}\) be an arbitrary weight matrix and observe that

\[\frac{\sigma(\textsc{Gen}_{\textsc{Alg},k}(A))}{\sigma(\textsc{Opt}_{\tilde{k}}(\tilde{A}))}=\frac{\sigma(\textsc{Alg}(\tilde{A}))}{\sigma(\textsc{Opt}_{\tilde{k}}(\tilde{A}))}\geq\tilde{\alpha}, \tag{9}\]

where the equality follows from the definition of \(\textsc{Gen}_{\textsc{Alg},k}\) and the inequality follows from the \(\tilde{\alpha}\)-optimality of Alg. On the other hand, as \(\tilde{k}\leq k\) and \(\sigma(j;\tilde{A})=0\) for every \(j\not\in[n]\), we know that

\[\frac{\sigma(\textsc{Opt}_{k}(A))}{k}=\frac{1}{k}\max_{S\subseteq[n]:\;|S|=k}\sigma(S;A)\leq\frac{1}{\tilde{k}}\max_{S\subseteq[n]:\;|S|=\tilde{k}}\sigma(S;A)=\frac{\sigma(\textsc{Opt}_{\tilde{k}}(\tilde{A}))}{\tilde{k}},\]

i.e., the average score of the \(k\) top-voted agents of input \(A\) can be no larger than the average score of the \(\tilde{k}\) top-voted agents of input \(\tilde{A}\). Plugging this inequality into (9) concludes the proof as

\[\frac{\sigma(\textsc{Gen}_{\textsc{Alg},k}(A))}{\sigma(\textsc{Opt}_{k}(A))}\geq\frac{\tilde{k}}{k}\frac{\sigma(\textsc{Gen}_{\textsc{Alg},k}(A))}{\sigma(\textsc{Opt}_{\tilde{k}}(\tilde{A}))}\geq\frac{\tilde{k}}{k}\tilde{\alpha}.\qed\]

Our main result now follows from the last two lemmas.

Proof of Theorem 4.1.: Let \(n\) and \(k\) be as in the statement. We define

\[\tilde{k}\coloneqq k-k\bmod 2\quad\text{and}\quad\tilde{n}\coloneqq\frac{k-k\bmod 2}{2}\left\lceil\frac{2n}{k-k\bmod 2}\right\rceil.\]

It is clear that \(\tilde{n},\tilde{k}\) are natural numbers with \(\tilde{k}\leq k<n\leq\tilde{n}\) and that

\[b\coloneqq\frac{2\tilde{n}}{\tilde{k}}=\left\lceil\frac{2n}{k-k\bmod 2}\right\rceil\in\mathbb{N}.\]

Moreover, we have that

\[\tilde{n}=\frac{k-k\bmod 2}{2}\left\lceil\frac{2n}{k-k\bmod 2}\right\rceil\leq\frac{k-k\bmod 2}{2}\left\lceil\frac{2\frac{(k-k\bmod 2)^{2}}{4}}{k-k\bmod 2}\right\rceil=\frac{\tilde{k}^{2}}{4},\]

where the inequality follows from the condition \(k-k\bmod 2\geq 2\sqrt{n}\) in the statement. This yields \(b=2\tilde{n}/\tilde{k}\leq\tilde{k}/2\in\mathbb{N}\). By Lemma 4.2, this implies that \(\textsc{Select}_{\tilde{k}}\) is an impartial and \(\tilde{\alpha}\)-optimal \((\tilde{n},\tilde{k})\)-selection mechanism with

\[\tilde{\alpha}=\frac{1}{b}=\frac{1}{\left\lceil\frac{2n}{k-k\bmod 2}\right\rceil}.\]

Since \(\tilde{n},\tilde{k}\in\mathbb{N}\) are such that \(\tilde{k}\leq k\) and \(\tilde{n}\geq n\), Lemma 4.3 implies that \(\textsc{Gen}_{\textsc{Select}_{\tilde{k}},k}\) is an impartial and \(\alpha\)-optimal \((n,k)\)-selection mechanism with

\[\alpha=\frac{\tilde{k}}{k}\tilde{\alpha}=\frac{k-k\bmod 2}{k\left\lceil\frac{2n}{k-k\bmod 2}\right\rceil}.\qed\]

The mechanism and its approximation ratio naturally extend to the widely studied unweighted setting, where one restricts to matrices \(A\in\mathcal{A}_{n}\) with \(A_{ij}\in\{0,1\}\) for every \(i,j\in[n]\). 
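To make the reduction in the proof of Theorem 4.1 concrete, the following snippet computes \(\tilde{k}\), \(\tilde{n}\), \(b\), and the resulting guarantee for admissible inputs; the function name is ours.

```python
# Parameters of the reduction in the proof of Theorem 4.1.
import math

def theorem_4_1_parameters(n, k):
    k_t = k - k % 2                                # k tilde
    assert 1 < k < n and k_t >= 2 * math.sqrt(n)   # applicability
    b = -(-2 * n // k_t)                           # ceil(2n / k tilde)
    n_t = k_t * b // 2                             # n tilde
    return k_t, n_t, b, k_t / (k * b)              # ..., guarantee alpha

print(theorem_4_1_parameters(100, 21))             # (20, 100, 10, 2/21)
```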
In the unweighted setting, this guarantee improves on the previous best lower bound of \(1/k\) whenever the number of agents to select is high enough compared to \(n\) for Theorem 4.1 to be applicable: if \(k-k\bmod 2\geq 2\sqrt{n}\), the theorem guarantees the existence of an \((n,k)\)-selection mechanism that is impartial and \(\alpha\)-optimal with

\[\alpha=\frac{k-k\bmod 2}{k\left\lceil\frac{2n}{k-k\bmod 2}\right\rceil}\geq\frac{k-k\bmod 2}{k\left\lceil\frac{2(k-k\bmod 2)^{2}}{4(k-k\bmod 2)}\right\rceil}=\frac{2}{k}.\]

We end this section by showing that the analysis of our \((n,k)\)-selection mechanism \(\textsc{Select}_{k}\) for \(n\) and \(k\) satisfying the conditions of Lemma 4.2 is tight.

**Theorem 4.4**.: _Let \(n,k\in\mathbb{N}\) with \(k<n\) be such that \(b\coloneqq 2n/k\in\mathbb{N}\) and \(b\leq k/2\in\mathbb{N}\). Then, for every \(\varepsilon>0\) we have that \(\textsc{Select}_{k}\) is not \((1/b+\varepsilon)\)-optimal._

Proof.: Let \(n\) and \(k\) be as in the statement and consider the partition system \(((S^{1}_{1},S^{1}_{2}),\ldots,(S^{k}_{1},S^{k}_{2}))=\mathcal{S}(n,k)\). Recall that we defined \(\mathcal{S}(n,k)\) such that \(S^{1}_{2}=[b]\). Considering \(l(j)\) and \(r(j)\) as defined in Algorithm 1 for every \(j\in[n]\), we note that for each \(j\in S^{1}_{2}\) we have \(l(j)=1\). For each \(j\in S^{1}_{2}\), we let \(h(j)\) be an arbitrary agent in \(S^{1}_{1}\) such that \(h(j)\in S^{r(j)}_{2}\). Such an agent is guaranteed to exist, since from property (ii) of Lemma 3.3 we know that \(S^{l(j)}_{2}\cap S^{r(j)}_{2}=\{j\}\), and from property (i) we have that \(|S^{r(j)}_{2}|=b>1\). We consider the instance given by \(A\in\mathcal{A}_{n}\) with \(A_{ij}=1\), if \(j\in S^{1}_{2}\) and \(i=h(j)\), and \(A_{ij}=0\), otherwise. Intuitively, this construction aims to have \(A_{ij}>0\) for some \(i\in S^{p}_{1}\) and \(j\in S^{p}_{2}\) only if \(p=1\), so that the only agent with a strictly positive score selected by the mechanism, among \(b\) agents with a strictly positive score, is \(i^{1}\). An example of this construction and the corresponding outcome of the mechanism is illustrated in Figure 4.

Figure 4: Example of the construction of the proof of Theorem 4.4 for \(n=9\) and \(k=6\) with \(3\) votes of weight \(1\): agent \(4\) votes for agent \(1\), agent \(5\) votes for agent \(2\), and agent \(6\) votes for agent \(3\). All votes are only seen in the first partition. Since agents with positive scores have the smallest indices, they are not selected in their second candidate set.

It is clear that \(\sigma(\textsc{Opt}_{k}(A))=b\), as the \(b\) agents in \([b]\) receive one vote of weight \(1\) each and all other agents receive none. On the other hand, we have that \(\sigma(i^{1})=1\) and, for every \(p\in\{2,3,\ldots,k\}\), that \(\hat{\sigma}_{S^{p}_{1}}(j)=0\) for every \(j\in S^{p}_{2}\). This is because we have \(\sigma(j)=0\) for every \(j\not\in[b]\) and, whenever there is a \(j\in[b]\cap S^{p}_{2}\), we also have \(h(j)\in S^{p}_{2}\). Moreover, for every \(p\in\{2,3,\ldots,k\}\) such that there exists a \(j\in[b]\cap S^{p}_{2}\), we have that \(j\neq\max S^{p}_{2}\) since \(h(j)\in S^{p}_{2}\) and \(h(j)>j\). This yields \(\sigma(i^{p})=0\) for every \(p\in\{2,3,\ldots,k\}\), thus \(\sigma(\textsc{Select}_{k}(A))=1\). This concludes the proof as

\[\frac{\sigma(\textsc{Select}_{k}(A))}{\sigma(\textsc{Opt}_{k}(A))}=\frac{1}{b}.\qed\]

In terms of general upper bounds on the approximation ratio that an impartial mechanism can achieve, the best known is \((k-1)/k\)[4]. Even for the regime \(k-k\bmod 2\geq 2n/3\), in which our mechanism provides a lower bound of \(1/3\) and considerably improves the previously best bound of \(1/k\)[4], the gap remains large. Further improvements in either lower or upper bounds arise as the main direction for future work. 
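To make the tightness construction of Theorem 4.4 concrete, the following sketch builds the adversarial weight matrix; it assumes the `partition_system` helper from the Section 3 sketch is in scope and relies on its enumeration satisfying \(S_{2}^{1}=[b]\), as verified there.

```python
# A sketch of the tight instance from Theorem 4.4; names are ours.
def tight_instance(n, k):
    S2 = partition_system(n, k)               # from the Section 3 sketch
    b = 2 * n // k
    A = [[0] * n for _ in range(n)]
    for j in range(1, b + 1):                 # S2[1] = [b] by construction
        r_j = max(p for p, s in S2.items() if j in s)
        h_j = max(S2[r_j] - {j})              # co-candidate of j outside [b], so h_j > j
        A[h_j - 1][j - 1] = 1                 # h(j) casts the only vote for j
    return A

# Combined with the select_k sketch above, sigma(Select_k(A)) = 1 while
# sigma(Opt_k(A)) = b, matching the ratio 1/b established in the proof.
```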
## 5 Impartial Assignment

In this section, we consider a generalization of the impartial selection problem in which agents are not selected into one but _assigned_ to at most one of \(m\) many sets, which we refer to as _jobs_. Each job \(\ell\in[m]\) can be assigned at most \(k\) agents, so that we obtain the impartial selection problem as the special case where \(m=1\). We first extend the notation from Section 2 to this new setting.

For \(n,m\in\mathbb{N}\) with \(m\leq n\), we consider \(m\)-tuples of weight matrices \(\mathbf{A}=(A_{1},A_{2},\ldots,A_{m})\in\mathcal{A}_{n}^{m}\), each of them representing the weighted votes for one job. Let further \(k<n\) in the following; an instance of the assignment problem is then given by the tuple \(\mathbf{A}\) and the value \(k\). We let

\[\mathcal{X}_{k}\coloneqq\big{\{}\mathbf{X}=(X_{1},X_{2},\ldots,X_{m})\in\big{(}2^{[n]}\big{)}^{m}\colon|X_{i}|\leq k\text{ and }X_{i}\cap X_{j}=\emptyset\text{ for every }i,j\in[m]\text{ with }i\neq j\big{\}}\]

denote the set of feasible assignments, i.e., the set of tuples \(\mathbf{X}\) containing \(m\) pairwise disjoint subsets of agents, each with cardinality at most \(k\). In a slight overload of notation, for \(\mathbf{X}\in\mathcal{X}_{k}\) and \(\mathbf{A}\in\mathcal{A}_{n}^{m}\), we write

\[\sigma(\mathbf{X};\mathbf{A})\coloneqq\sum_{\ell\in[m]}\sigma(X_{\ell};A_{\ell})\]

to refer to the sum, over the jobs, of the score of the set assigned to each job according to \(\mathbf{X}\), and we simply write \(\sigma(\mathbf{X})\) when the instance is clear from the context. Finally, for \(\mathbf{A}\in\mathcal{A}_{n}^{m}\), we let

\[\textsc{Opt}_{k}(\mathbf{A})\coloneqq\operatorname*{arg\,max}_{\mathbf{X}\in\mathcal{X}_{k}}\sigma(\mathbf{X};\mathbf{A})\]

denote an arbitrary assignment with the largest score among feasible assignments. We write just \(\textsc{Opt}_{k}\) when the instance is clear.

An \((n,m,k)\)-assignment mechanism is a function \(f\colon\mathcal{A}_{n}^{m}\to\big{(}2^{[n]}\big{)}^{m}\) such that \(f(\mathbf{A})\in\mathcal{X}_{k}\) for every \(\mathbf{A}\in\mathcal{A}_{n}^{m}\). Such a mechanism is _impartial_ if, for every pair of instances \(\mathbf{A}\in\mathcal{A}_{n}^{m}\) and \(\mathbf{A}^{\prime}\in\mathcal{A}_{n}^{m}\) and for all agents \(i\in[n]\) such that \((A_{\ell})_{-i}=(A_{\ell}^{\prime})_{-i}\) holds for each job \(\ell\in[m]\), it also holds that \((f(\mathbf{A}))_{\ell}\cap\{i\}=\left(f(\mathbf{A}^{\prime})\right)_{\ell}\cap\{i\}\) for every \(\ell\in[m]\). We further call an \((n,m,k)\)-assignment mechanism \(\alpha\)_-optimal_ if

\[\frac{\sigma(f(\mathbf{A});\mathbf{A})}{\sigma(\textsc{Opt}_{k}(\mathbf{A});\mathbf{A})}\geq\alpha\]

holds for all \(\mathbf{A}\in\mathcal{A}_{n}^{m}\) and some \(\alpha\in[0,1]\). We are prepared to state the main theorem of this section.

**Theorem 5.1**.: _Let \(n,m,k\in\mathbb{N}\) with \(1<k<n\), \(mk\leq n\), and \(k-k\bmod 2\geq 2\sqrt{n}\)._ 
_Then, there exists an \((n,m,k)\)-assignment mechanism that is impartial and \(\alpha\)-optimal with_

\[\alpha=\frac{k-k\bmod 2}{2k\left\lceil\frac{2n}{k-k\bmod 2}\right\rceil}.\]

The main ingredient of the proof is an adaptation of our mechanism from Section 4 that selects from each partition not one but \(m\) many agents: one for each job \(\ell\in[m]\). We leave the partitioning step unchanged and, for the second step, assign \(m\) agents from each candidate set to different jobs in a way that the score obtained for each partition is maximized. In case an agent is assigned to two different jobs, we assign it to the one for which it receives the highest number of votes. The adapted procedure is formally described in Algorithm 3; we refer to it as \(\textsc{Assign}_{k}\) and denote its output by \(\textsc{Assign}_{k}(\mathbf{A})\) for a given input tuple of matrices \(\mathbf{A}\in\mathcal{A}_{n}^{m}\).

Impartiality of this mechanism follows from a similar reasoning as in the proof of Theorem 4.1: whenever the vote of an agent is taken into account, the agent is not part of the candidate set. The approximation guarantee makes use of a detailed analysis of the case \(b\coloneqq 2n/k\in\mathbb{N}\) and \(b\leq k/2\in\mathbb{N}\), which is somewhat more intricate than the analysis in Section 4. We consider subsets of agents that are assigned to any job in the optimal assignment and are not mutual contenders. We then use the key fact that, when considering the two partitions in which some agent \(i\) is in the candidate set, the mechanism assigns agents in a way that the sum of votes of the assigned agents in both partitions is at least the number of votes that \(i\) receives for any job. Exploiting the robust partitioning structure as before allows us to take the best of these subsets and conclude via an averaging argument. Here we lose an additional factor of \(1/2\) due to the possibility that an agent is initially assigned to two jobs. The extension to general values \(n\), \(m\), and \(k\) is then analogous to that of Section 4.

Throughout this section, whenever \(n\), \(m\), \(k\), and \(\mathbf{A}\in\mathcal{A}_{n}^{m}\) are fixed, we use \(((S_{1}^{1},S_{2}^{1}),\dots,(S_{1}^{k},S_{2}^{k}))\), \(l(j)\), \(r(j)\), \(\hat{\sigma}_{S_{1}^{p}}(v)\), \(\mathbf{x}^{p}\), and \(\mathbf{X}\) for each \(p\in[k]\) and \(j\in[n]\) to refer to the objects defined in \(\textsc{Assign}_{k}\) for input \(\mathbf{A}\). We specify the input tuple of weight matrices as an argument when not clear from the context. A sketch of the per-partition assignment step is given below.
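The core of Algorithm 3 is the per-partition argmax over injective job assignments, which the following sketch spells out by brute force; `mod_scores[(j, l)]` stands for \(\hat{\sigma}_{S_{1}^{p}}(j;A_{\ell})\) with jobs indexed from \(0\), and the names are ours. Enumerating all injective tuples costs \(b!/(b-m)!\) evaluations per partition, which is harmless for small candidate sets.

```python
# A sketch of the per-partition step of Assign_k (Algorithm 3).
from itertools import permutations

def best_partial_assignment(S2p, mod_scores, m):
    # Maximize the summed modified scores over injective m-tuples of
    # candidates, breaking ties lexicographically as in the arg max
    # defining the partial assignment x^p.
    return max(
        permutations(sorted(S2p), m),
        key=lambda v: (sum(mod_scores[(v[l], l)] for l in range(m)),) + v,
    )
```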
The following lemma, which plays an analogous role to Lemma 4.2, constitutes the main technical ingredient for the proof of Theorem 5.1.

**Lemma 5.2**.: _Let \(n,m,k\in\mathbb{N}\) with \(k<n\) and \(mk\leq n\) be such that \(b\coloneqq 2n/k\in\mathbb{N}\) and \(b\leq k/2\in\mathbb{N}\). Then, \(\textsc{Assign}_{k}\) is an impartial and \(1/(2b)\)-optimal \((n,m,k)\)-assignment mechanism._

Proof.: We consider \(n\), \(m\), and \(k\) as in the statement. We first note that \(\textsc{Assign}_{k}\) is well-defined as we have \(|\{p\in[k]:j\in S_{2}^{p}\}|=2\) for every \(j\in[n]\) and \(b=2n/k\geq m\). On the other hand, since \(x_{\ell}^{p}\) is a single agent for every \(p\in[k]\) and \(\ell\in[m]\), and \(X_{\ell}\subseteq\bigcup_{p\in[k]}\{x_{\ell}^{p}\}\) for each \(\ell\in[m]\), we have that \(|X_{\ell}|\leq k\) for every \(\ell\in[m]\). Further, due to the last step we have that for every \(\mathbf{A}\in\mathcal{A}_{n}^{m}\) and every \(j\in[n]\) it holds that \(|\{\ell\in[m]:j\in(\textsc{Assign}_{k}(\mathbf{A}))_{\ell}\}|\leq 1\), thus \(\textsc{Assign}_{k}\) returns a feasible assignment.

To see that \(\textsc{Assign}_{k}\) is impartial, let \(\mathbf{A},\mathbf{A}^{\prime}\in\mathcal{A}_{n}^{m}\) and \(j\in[n]\) be such that \((A_{\ell})_{-j}=(A_{\ell}^{\prime})_{-j}\) holds for every \(\ell\in[m]\). Suppose \(j\in(\textsc{Assign}_{k}(\mathbf{A}))_{\hat{\ell}}\) for some \(\hat{\ell}\in[m]\). From the definition of the mechanism, we have that there is a \(p\in[k]\) such that \(j=x_{\hat{\ell}}^{p}(\mathbf{A})\) and such that, if \(j=x_{\ell}^{q}(\mathbf{A})\) for \(\ell\in[m]\setminus\{\hat{\ell}\}\) and \(q\in\{l(j),r(j)\}\setminus\{p\}\), then \((\sigma(j;A_{\hat{\ell}}),\hat{\ell})>(\sigma(j;A_{\ell}),\ell)\). Since \(j\in S_{2}^{p}\) and \((A_{\ell})_{-j}=(A_{\ell}^{\prime})_{-j}\) for every \(\ell\in[m]\), we have that \(\hat{\sigma}_{S_{1}^{p}}(i;A_{\ell})=\hat{\sigma}_{S_{1}^{p}}(i;A_{\ell}^{\prime})\) for every \(p\in\{l(j),r(j)\}\), every \(\ell\in[m]\), and every \(i\in[n]\). Therefore, since the partial assignment \(\mathbf{x}^{p}(\mathbf{A})\) is defined as

\[\mathbf{x}^{p}(\mathbf{A})=\operatorname*{arg\,max}_{v\in(S_{2}^{p})^{m}:\,v_{\ell}\neq v_{\ell^{\prime}}\text{ for }\ell\neq\ell^{\prime}}\Bigg{(}\sum_{\ell\in[m]}\hat{\sigma}_{S_{1}^{p}}(v_{\ell};A_{\ell}),v_{1},\ldots,v_{m}\Bigg{)},\]

we obtain that \(\mathbf{x}^{p}(\mathbf{A})=\mathbf{x}^{p}(\mathbf{A}^{\prime})\) for \(p\in\{l(j),r(j)\}\). In particular, this yields \(j=x_{\hat{\ell}}^{p}(\mathbf{A}^{\prime})\) and that, if \(j=x_{\ell}^{q}(\mathbf{A}^{\prime})\) for \(\ell\in[m]\setminus\{\hat{\ell}\}\) and \(q\in\{l(j),r(j)\}\setminus\{p\}\), then \((\sigma(j;A_{\hat{\ell}}^{\prime}),\hat{\ell})>(\sigma(j;A_{\ell}^{\prime}),\ell)\). Thus, \(j\in(\textsc{Assign}_{k}(\mathbf{A}^{\prime}))_{\hat{\ell}}\). Since the previous reasoning is valid for all \(\hat{\ell}\in[m]\), we conclude that \((\textsc{Assign}_{k}(\mathbf{A}))_{\ell}\cap\{j\}=(\textsc{Assign}_{k}(\mathbf{A}^{\prime}))_{\ell}\cap\{j\}\) holds for every \(\ell\in[m]\).

For the remainder of this proof, we let \(\mathbf{A}\in\mathcal{A}_{n}^{m}\) be an arbitrary tuple of weight matrices. We start by observing that, for every \(j\in[n]\) and \(\ell\in[m]\),

\[\hat{\sigma}_{S_{1}^{l(j)}}(j;A_{\ell})+\hat{\sigma}_{S_{1}^{r(j)}}(j;A_{\ell})=\sigma_{S_{1}^{l(j)}}(j;A_{\ell})+\sigma_{S_{1}^{r(j)}\setminus S_{1}^{l(j)}}(j;A_{\ell})=\sigma(j;A_{\ell}), \tag{10}\]

since for every \(j\in[n]\), property (ii) of Lemma 3.3 implies \(S_{1}^{l(j)}\cup S_{1}^{r(j)}=[n]\setminus\{j\}\). Furthermore, the definition of \(\mathbf{x}^{p}\) yields

\[\sum_{\ell\in[m]}\hat{\sigma}_{S_{1}^{p}}(x_{\ell}^{p};A_{\ell})\geq\hat{\sigma}_{S_{1}^{p}}(j;A_{\hat{\ell}}) \tag{11}\]

for every \(p\in[k]\), \(\hat{\ell}\in[m]\), and \(j\in S_{2}^{p}\). To see this, assume to the contrary that (11) were not true for some \(p\in[k]\), \(\hat{\ell}\in[m]\), and \(j\in S_{2}^{p}\). 
Then, taking an alternative partial assignment \(\mathbf{z}^{p}\in(S_{2}^{p})^{m}\) defined as \(z_{\hat{\ell}}^{p}=j\) and, for \(\ell\in[m]\setminus\{\hat{\ell}\}\), \(z_{\ell}^{p}\) equal to an arbitrary agent in \(S_{2}^{p}\) such that \(z_{\ell}^{p}\neq z_{\ell^{\prime}}^{p}\) for every \(\ell,\ell^{\prime}\in[m]\) with \(\ell\neq\ell^{\prime}\), we would obtain

\[\sum_{\ell\in[m]}\hat{\sigma}_{S_{1}^{p}}(z_{\ell}^{p};A_{\ell})\geq\hat{\sigma}_{S_{1}^{p}}(j;A_{\hat{\ell}})>\sum_{\ell\in[m]}\hat{\sigma}_{S_{1}^{p}}(x_{\ell}^{p};A_{\ell}).\]

However, this contradicts the definition of \(\mathbf{x}^{p}\). Given these two facts, we claim that

\[\sum_{\ell\in[m]}\big{(}\hat{\sigma}_{S_{1}^{l(j)}}(x_{\ell}^{l(j)};A_{\ell})+\hat{\sigma}_{S_{1}^{r(j)}}(x_{\ell}^{r(j)};A_{\ell})\big{)}\geq\max_{\ell\in[m]}\sigma(j;A_{\ell}) \tag{12}\]

for every \(j\in[n]\). To prove this, we fix \(j\in[n]\) and observe that, for each \(p\in\{l(j),r(j)\}\), inequality (11) directly implies

\[\sum_{\ell\in[m]}\hat{\sigma}_{S_{1}^{p}}(x_{\ell}^{p};A_{\ell})\geq\max_{\ell\in[m]}\hat{\sigma}_{S_{1}^{p}}(j;A_{\ell}).\]

Therefore,

\[\sum_{\ell\in[m]}\big{(}\hat{\sigma}_{S_{1}^{l(j)}}(x_{\ell}^{l(j)};A_{\ell})+\hat{\sigma}_{S_{1}^{r(j)}}(x_{\ell}^{r(j)};A_{\ell})\big{)}\geq\max_{\ell\in[m]}\hat{\sigma}_{S_{1}^{l(j)}}(j;A_{\ell})+\max_{\ell\in[m]}\hat{\sigma}_{S_{1}^{r(j)}}(j;A_{\ell})\geq\max_{\ell\in[m]}\sigma(j;A_{\ell}),\]

where the last inequality follows from equality (10). This concludes the proof of inequality (12).

We next obtain a second inequality needed to conclude the approximation guarantee. Denoting by \(Z\coloneqq\bigcup_{\ell\in[m]}\big{(}\textsc{Assign}_{k}(\mathbf{A})\big{)}_{\ell}\) the set of selected agents, for each \(j\in Z\) we consider the set

\[L(j)\coloneqq\Big{\{}\ell\in[m]\ \Big{|}\ j=x_{\ell}^{l(j)}\ \text{or}\ j=x_{\ell}^{r(j)}\Big{\}}\]

containing the jobs \(\ell\) to which \(j\) has been assigned before the last **for** loop. Note that \(|L(j)|\in\{1,2\}\) for every \(j\in Z\). We now observe that

\[\sigma(\textsc{Assign}_{k}(\mathbf{A}))=\sum_{j\in Z}\max_{\ell\in L(j)}\sigma(j;A_{\ell})\geq\frac{1}{2}\sum_{j\in Z}\sum_{\ell\in L(j)}\sigma(j;A_{\ell})=\frac{1}{2}\sum_{\ell\in[m]}\sum_{p\in[k]}\hat{\sigma}_{S_{1}^{p}}(x_{\ell}^{p};A_{\ell}). \tag{13}\]

Indeed, the first equality follows from the last **for** loop in the definition of \(\textsc{Assign}_{k}\), the inequality from the fact that a maximum of two values is at least their average, and the last equality from equality (10) and the definition of \(L(j)\).

We now use inequalities (12) and (13) to conclude the bound stated in the lemma. For each \(j\in\bigcup_{\ell\in[m]}\left(\textsc{Opt}_{k}(\mathbf{A})\right)_{\ell}\), we define \(\ell(j)\coloneqq\hat{\ell}\) for \(\hat{\ell}\in[m]\) such that \(j\in\left(\textsc{Opt}_{k}(\mathbf{A})\right)_{\hat{\ell}}\). We further let \(b\coloneqq 2n/k\). Since \(\big{|}\bigcup_{\ell\in[m]}\left(\textsc{Opt}_{k}(\mathbf{A})\right)_{\ell}\big{|}\leq km\leq n\), we know from property (iv) of Lemma 3.3 that there is a partition \(\dot{\bigcup}_{t\in[b]}U_{t}=\bigcup_{\ell\in[m]}\left(\textsc{Opt}_{k}(\mathbf{A})\right)_{\ell}\) such that \(i\in S_{2}^{p}\) implies \(j\not\in S_{2}^{p}\) for all \(t\in[b]\), \(i,j\in U_{t}\) with \(i\neq j\), and \(p\in[k]\). 
We obtain that, for every \(t\in[b]\),

\[\sigma(\textsc{Assign}_{k}(\mathbf{A}))\geq\frac{1}{2}\sum_{\ell\in[m]}\sum_{p\in[k]}\hat{\sigma}_{S_{1}^{p}}(x_{\ell}^{p};A_{\ell})\geq\frac{1}{2}\sum_{\ell\in[m]}\sum_{j\in U_{t}}\Big{(}\hat{\sigma}_{S_{1}^{l(j)}}(x_{\ell}^{l(j)};A_{\ell})+\hat{\sigma}_{S_{1}^{r(j)}}(x_{\ell}^{r(j)};A_{\ell})\Big{)}\geq\frac{1}{2}\sum_{j\in U_{t}}\sigma(j;A_{\ell(j)}), \tag{14}\]

where the first inequality follows from inequality (13), the second one from the fact that \(\{l(i),r(i)\}\cap\{l(j),r(j)\}=\emptyset\) for every \(t\in[b]\) and every \(i,j\in U_{t}\) with \(i\neq j\), and the last one from inequality (12). This yields

\[\sigma(\textsc{Assign}_{k}(\mathbf{A}))\geq\frac{1}{2}\max_{t\in[b]}\sum_{j\in U_{t}}\sigma(j;A_{\ell(j)})\geq\frac{1}{2b}\sum_{t\in[b]}\sum_{j\in U_{t}}\sigma(j;A_{\ell(j)})=\frac{1}{2b}\sum_{\ell\in[m]}\sigma\left(\left(\textsc{Opt}_{k}(\mathbf{A})\right)_{\ell};A_{\ell}\right)=\frac{1}{2b}\sigma(\textsc{Opt}_{k}(\mathbf{A})),\]

where the first inequality follows from (14), the second inequality from the observation that the maximum of a set of values is at least their average, the first equality from the fact that \(\{U_{t}\}_{t\in[b]}\) is a partition of \(\bigcup_{\ell\in[m]}\left(\textsc{Opt}_{k}(\mathbf{A})\right)_{\ell}\) together with the definition of \(\ell(j)\) for each \(j\) in this set, and the last equality from the definition of \(\sigma(\textsc{Opt}_{k}(\mathbf{A}))\). We conclude that \(\textsc{Assign}_{k}\) is \(\alpha\)-optimal for

\[\frac{\sigma(\textsc{Assign}_{k}(\mathbf{A}))}{\sigma(\textsc{Opt}_{k}(\mathbf{A}))}\geq\frac{1}{2b}=\alpha.\qed\]

The next lemma, analogous to Lemma 4.3, allows us to extend the bound given by Lemma 5.2 to the case when \(n\) and \(k\) do not satisfy the conditions of Lemma 5.2. Given \(\tilde{n},m,\tilde{k}\in\mathbb{N}\) with \(m\tilde{k}\leq\tilde{n}\), an \((\tilde{n},m,\tilde{k})\)-assignment mechanism Alg, and \(n,k\in\mathbb{N}\) with \(k\geq\tilde{k}\), \(n\leq\tilde{n}\), \(k<n\), and \(mk\leq n\), this is achieved by the \((n,m,k)\)-assignment mechanism that we formally describe as Algorithm 4 and whose output we denote as \(\textsc{Gen}^{\textsc{AS}}_{\textsc{Alg},k}(\mathbf{A})\) for an input tuple of weight matrices \(\mathbf{A}\in\mathcal{A}_{n}^{m}\). This algorithm simply extends \(A_{\ell}\), for each \(\ell\in[m]\), to the \(\tilde{n}\times\tilde{n}\) matrix \(\tilde{A}_{\ell}\) by adding \(\tilde{n}-n\) many all-zero rows and columns, and then applies Alg on \(\tilde{\mathbf{A}}\). As before, whenever \(\tilde{n}\), \(m\), \(n\), \(k\), Alg, and \(\mathbf{A}\in\mathcal{A}_{n}^{m}\) are fixed, we use \(\tilde{\mathbf{A}}\) to refer to the object defined in Algorithm 4 for this input. In a slight overload of notation, when we consider \(\mathbf{A}^{\prime}\in\mathcal{A}_{n}^{m}\) as input, we write simply \(\tilde{\mathbf{A}}^{\prime}\). A sketch of this padding step follows.
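As in Section 4, the padding step is immediate to sketch; `alg` is any \((\tilde{n},m,\tilde{k})\)-assignment mechanism passed as a function, the names are ours, and restricting the output to real agents is again our own normalization.

```python
# A sketch of Gen^AS (Algorithm 4); names are ours.
def gen_as(alg, As, n_tilde):
    n = len(As[0])
    def pad(A):
        padded = [row + [0] * (n_tilde - n) for row in A]      # pad columns
        return padded + [[0] * n_tilde for _ in range(n_tilde - n)]  # pad rows
    X = alg([pad(A) for A in As])
    return [{j for j in X_l if j <= n} for X_l in X]           # keep real agents
```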
**Lemma 5.3**.: _Let \(\tilde{n},m,\tilde{k}\in\mathbb{N}\) with \(m\tilde{k}\leq\tilde{n}\) be such that there exists an impartial and \(\tilde{\alpha}\)-optimal \((\tilde{n},m,\tilde{k})\)-assignment mechanism Alg. Then, for every \(k,n\in\mathbb{N}\) with \(k\geq\tilde{k}\), \(n\leq\tilde{n}\), \(k<n\), and \(mk\leq n\), \(\textsc{Gen}^{\textsc{AS}}_{\textsc{Alg},k}\) is an impartial and \(\alpha\)-optimal \((n,m,k)\)-assignment mechanism with \(\alpha=(\tilde{k}/k)\tilde{\alpha}\)._

Proof.: Let \(n\), \(m\), \(k\), \(\tilde{n}\), and \(\tilde{k}\) be as in the statement. Let also Alg denote the impartial and \(\tilde{\alpha}\)-optimal \((\tilde{n},m,\tilde{k})\)-assignment mechanism. In order to see that \(\textsc{Gen}^{\textsc{AS}}_{\textsc{Alg},k}\) is impartial, let \(\mathbf{A},\mathbf{A}^{\prime}\in\mathcal{A}_{n}^{m}\) and let \(i\in[n]\) be such that \(\left(A_{\ell}\right)_{-i}=\left(A_{\ell}^{\prime}\right)_{-i}\) for every \(\ell\in[m]\). This implies \((\tilde{A}_{\ell})_{-i}=\left(\tilde{A}_{\ell}^{\prime}\right)_{-i}\), thus the impartiality of Alg yields that for every \(\ell\in[m]\),

\[\left(\textsc{Gen}^{\textsc{AS}}_{\textsc{Alg},k}(\mathbf{A})\right)_{\ell}\cap\{i\}=\left(\textsc{Alg}(\tilde{\mathbf{A}})\right)_{\ell}\cap\{i\}=\left(\textsc{Alg}(\tilde{\mathbf{A}}^{\prime})\right)_{\ell}\cap\{i\}=\left(\textsc{Gen}^{\textsc{AS}}_{\textsc{Alg},k}(\mathbf{A}^{\prime})\right)_{\ell}\cap\{i\}.\]

To prove the approximation guarantee, we let \(\mathbf{A}\in\mathcal{A}_{n}^{m}\) be an arbitrary tuple of weight matrices and observe that

\[\frac{\sigma(\textsc{Gen}^{\textsc{AS}}_{\textsc{Alg},k}(\mathbf{A}))}{\sigma(\textsc{Opt}_{\tilde{k}}(\tilde{\mathbf{A}}))}=\frac{\sigma(\textsc{Alg}(\tilde{\mathbf{A}}))}{\sigma(\textsc{Opt}_{\tilde{k}}(\tilde{\mathbf{A}}))}\geq\tilde{\alpha}, \tag{15}\]

where the equality follows from the definition of \(\textsc{Gen}^{\textsc{AS}}_{\textsc{Alg},k}\) and the inequality follows from the \(\tilde{\alpha}\)-optimality of Alg. On the other hand, as \(\tilde{k}\leq k\) and \(\sigma(j;\tilde{A}_{\ell})=0\) for every \(j\not\in[n]\) and every \(\ell\in[m]\), we know that

\[\frac{\sigma(\textsc{Opt}_{k}(\mathbf{A}))}{k}=\frac{1}{k}\max_{\mathbf{X}\in\mathcal{X}_{k}}\sum_{\ell\in[m]}\sigma(X_{\ell};A_{\ell})\leq\frac{1}{\tilde{k}}\max_{\mathbf{X}\in\mathcal{X}_{\tilde{k}}}\sum_{\ell\in[m]}\sigma(X_{\ell};\tilde{A}_{\ell})=\frac{\sigma(\textsc{Opt}_{\tilde{k}}(\tilde{\mathbf{A}}))}{\tilde{k}},\]

i.e., the average score of the \(k\) assigned agents of input \(\mathbf{A}\), under the best assignment, can be no larger than the average score of the \(\tilde{k}\) assigned agents of input \(\tilde{\mathbf{A}}\), under the best assignment. To see this, note that we can obtain an assignment where we restrict to at most \(\tilde{k}\) agents for each job from \(\textsc{Opt}_{k}\) by deleting the \(k-\tilde{k}\) agents per set \((\textsc{Opt}_{k}(\mathbf{A}))_{\ell}\) for each \(\ell\in[m]\) that have the lowest score for this job. This can only increase the average score of the assignment. Plugging this inequality into (15) concludes the proof as

\[\frac{\sigma(\textsc{Gen}^{\textsc{AS}}_{\textsc{Alg},k}(\mathbf{A}))}{\sigma(\textsc{Opt}_{k}(\mathbf{A}))}\geq\frac{\tilde{k}}{k}\frac{\sigma(\textsc{Gen}^{\textsc{AS}}_{\textsc{Alg},k}(\mathbf{A}))}{\sigma(\textsc{Opt}_{\tilde{k}}(\tilde{\mathbf{A}}))}\geq\frac{\tilde{k}}{k}\tilde{\alpha}.\qed\]

Theorem 5.1 now follows from the last two lemmas.

Proof of Theorem 5.1.: Let \(n,m,k\in\mathbb{N}\) with \(1<k<n\), \(mk\leq n\), and \(k-k\bmod 2\geq 2\sqrt{n}\). 
We define

\[\tilde{k}\coloneqq k-k\bmod 2,\quad\tilde{n}\coloneqq\frac{k-k\bmod 2}{2}\left\lceil\frac{2n}{k-k\bmod 2}\right\rceil.\]

It is clear that \(\tilde{n},\tilde{k}\) are natural numbers with \(\tilde{k}\leq k<n\leq\tilde{n}\), that \(m\tilde{k}\leq\tilde{n}\), and that

\[b\coloneqq\frac{2\tilde{n}}{\tilde{k}}=\left\lceil\frac{2n}{k-k\bmod 2}\right\rceil\in\mathbb{N}.\]

Moreover, we have that

\[\tilde{n}=\frac{k-k\bmod 2}{2}\left\lceil\frac{2n}{k-k\bmod 2}\right\rceil\leq\frac{k-k\bmod 2}{2}\left\lceil\frac{2\frac{(k-k\bmod 2)^{2}}{4}}{k-k\bmod 2}\right\rceil=\frac{\tilde{k}^{2}}{4},\]

where the inequality follows from the condition \(k-k\bmod 2\geq 2\sqrt{n}\) in the statement. This yields \(b=2\tilde{n}/\tilde{k}\leq\tilde{k}/2\in\mathbb{N}\). By Lemma 5.2, this implies that \(\textsc{Assign}_{\tilde{k}}\) is an impartial and \(\tilde{\alpha}\)-optimal \((\tilde{n},m,\tilde{k})\)-assignment mechanism with

\[\tilde{\alpha}=\frac{1}{2b}=\frac{1}{2\left\lceil\frac{2n}{k-k\bmod 2}\right\rceil}.\]

Since \(\tilde{n},\tilde{k}\in\mathbb{N}\) are such that \(\tilde{k}\leq k\) and \(\tilde{n}\geq n\), Lemma 5.3 implies that \(\textsc{Gen}^{\textsc{AS}}_{\textsc{Assign}_{\tilde{k}},k}\) is an impartial and \(\alpha\)-optimal \((n,m,k)\)-assignment mechanism with

\[\alpha=\frac{\tilde{k}}{k}\tilde{\alpha}=\frac{k-k\bmod 2}{2k\left\lceil\frac{2n}{k-k\bmod 2}\right\rceil}.\qed\]
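As a sanity check on the two guarantees, the following snippet evaluates the bounds of Theorems 4.1 and 5.1 for a few admissible pairs \((n,k)\); by the factor of \(1/2\) lost in Lemma 5.2, the assignment bound is exactly half the selection bound.

```python
# Evaluating the guarantees of Theorems 4.1 and 5.1; names are ours.
for n, k in [(100, 21), (120, 40), (90, 60)]:
    kt = k - k % 2
    if kt * kt >= 4 * n:                       # k - k mod 2 >= 2 sqrt(n)
        b = -(-2 * n // kt)                    # ceil(2n / (k - k mod 2))
        print(n, k, round(kt / (k * b), 4), round(kt / (2 * k * b), 4))
```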
2310.05644
Diagnosing Catastrophe: Large parts of accuracy loss in continual learning can be accounted for by readout misalignment
Daniel Anthes, Sushrut Thorat, Peter König, Tim C. Kietzmann
2023-10-09T11:57:46Z
http://arxiv.org/abs/2310.05644v1
# Diagnosing Catastrophe: Large Parts of Accuracy Loss in Continual Learning Can Be Accounted for by Readout Misalignment

###### Abstract

**Unlike primates, training artificial neural networks (ANNs) on changing data distributions leads to a rapid decrease in performance on old tasks. This phenomenon is commonly referred to as catastrophic forgetting. In this paper, we investigate the representational changes that underlie this performance decrease and identify three distinct processes that together account for the phenomenon. The largest component is a misalignment between hidden representations and readout layers. Misalignment occurs due to learning on additional tasks and causes internal representations to shift. Representational geometry is partially conserved under this misalignment and only a small part of the information is irrecoverably lost. All types of representational changes scale with the dimensionality of hidden representations. These insights have implications for deep learning applications that need to be continuously updated, but may also aid aligning ANN models to the rather robust biological vision.**

**Keywords:** continual learning; catastrophic forgetting; artificial neural networks; representations; lifelong learning

## Introduction

Our world is inherently sequential. Adapted to this, humans are successful in continuously learning new skills over their lifetime. However, most state-of-the-art training procedures for artificial neural networks (ANNs) rely on data being independent and identically distributed. In settings where the data distribution changes, networks have been reported to rapidly forget previous knowledge (Parisi, Kemker, Part, Kanan, & Wermter, 2019; Hadsell, Rao, Rusu, & Pascanu, 2020). This phenomenon is commonly termed _catastrophic forgetting_ (French, 1999; McCloskey & Cohen, 1989). A number of factors influence the degree to which performance decreases in sequential learning scenarios: the dimensionality of representations (Mirzadeh et al., 2022), pre-training (Ramasesh, Lewkowycz, & Dyer, 2022), objective function (S. Li, Du, van de Ven, & Mordatch, 2022; Davari, Asadi, Mudur, Aljundi, & Belilovsky, 2022), and task similarity (Ramasesh, Dyer, & Raghu, 2020). However, the changes to the task-relevant representations during continual learning remain to be fully characterized (see Davari et al. (2022) for first steps). In this work, we characterize changes in representational geometry and their contribution to the observed decrease in performance. We find that rather than forgetting, much of the degraded performance can be explained by a misalignment of representations and the readouts of the network.

Our model system is a standard four-layer convolutional network. The training procedure, task, and network architecture are identical to Zenke, Poole, and Ganguli (2017). We study catastrophic forgetting in the task-incremental scenario (Van de Ven & Tolias, 2019), initializing a new classification head every time a novel task is encountered. After pretraining the network on CIFAR10 (Krizhevsky & Hinton, 2009), we sequentially train on ten equal task splits from CIFAR-100. We repeat this procedure 5 times, controlling for the effects of task similarity by randomly assigning each class to a task (Ramasesh et al., 2020). We characterise the information present throughout learning by training diagnostic readouts for all tasks after every phase of training. A drop in performance, despite adjusted readout, constitutes a loss of task-relevant information. This scenario reflects true _forgetting_. 
Contrary to this, performance loss attributed to _misalignment_ is computed by the difference in performance between the original readout (\(t=0\)) and the newly trained diagnostic readouts at every phase of training. Third, to estimate the extent to which misalignment is due to rotation, translation, and uniform scaling of an otherwise static geometry, we align representations for each task after each training phase to the representations immediately after learning the task (\(t=0\)) with a geometry-preserving Procrustes transformation (Gower, 1975). Finally, as increasing layer width has been shown to alleviate catastrophic forgetting (Mirzadeh et al., 2022), we vary the width of the final hidden layer to investigate how the different components of representational change are modulated by network capacity. ## Results As expected, we observe effects of catastrophic forgetting, i.e. a rapid drop in performance of the original readouts as the network is trained on additional tasks (Fig. 1A, 'continual' at T\(>\)0). Notably, however, performance of diagnostic readouts decreases much less, indicating that the discriminability of the old classes is indeed preserved, i.e. there is little "actual" forgetting. The primary cause of decreased performance is readout misalignment, the extent of which is shown by the large difference between 'continual' performance and performance measured at the diagnostic readouts (Fig. 1A, 'diagnostic'), in line with similar previous analyses (Davari et al., 2022). Does misalignment preserve the original representational geometry? If so, we'd expect that Procrustes alignment should yield performance as good as the linear diagnostic readouts. We observe that aligning representations accounts for approximately half of the performance difference between continual and diagnostic readouts (Fig. 1 A, 'procrustes'). Therefore, misalignment can be characterized as a combination of geometry preserving and deforming changes of representations. An open question that remains from our and previous work is whether the comparably good performance of the diagnostic readout is explained by transfer learning based on features learned for earlier tasks in the sequence. Indeed, we observe that the features learned for previously encountered tasks transfer to unseen tasks ('Feature Transfer' in Fig. 1). Yet, transfer cannot fully explain the performance observed with diagnostic readouts, as a clear discontinuity in the diagnostic readout performance trajectory from before to after training a new task (\(t=0\)) can be seen. This suggests that newly learned features better support the new task. This additional information stays preserved in the network over learning of multiple additional tasks, as evidenced by the fact that diagnostic readout performance stays above the performance measured at \(t=-1\) for the subsequent phases (\(t>0\)). Finally, characterizing the influence of network size in continual learning with our new analysis techniques, we find that varying the width of the final hidden layer attenuates all three measures of representational change. Yet, we still observe small amounts of changes to the representational geometry and misalignment with the readouts of the respective networks (Fig. 1 C & D). 
## Discussion In characterizing representational changes in a neural network during continual learning, we observed that misalignment of the pre-readout representations with the task readouts explains large parts of the performance degradation that is commonly referred to as 'catastrophic forgetting'. Interestingly, only a small amount of task information cannot be linearly read out and is irrecoverably 'forgotten'. Many algorithms addressing catastrophic forgetting rely on restricting learning at synapses that encode information for previous tasks (Zenke et al., 2017; Kirkpatrick et al., 2017) or regularize learning of representations for new tasks (Z. Li & Hoiem, 2017) in order to not lose information relevant for the previous tasks. We argue that information in hidden layers is largely preserved, even without restricting learning trajectories or placing constraints on the representations the network is allowed to learn. This is especially prominent in larger networks. We hypothesize that catastrophic forgetting may instead be efficiently addressed by solving the problem of readout misalignment without influencing the learning of new tasks (see also Lesort, George, & Rish, 2021). Indeed, there may be benefits to not restricting the learning of representations more than necessary, as restrictions to the learning dynamics of the network may lead to decreased plasticity or sub-optimal solutions over long sequences of tasks. Lastly, the primate visual system is successfully able to learn new tasks without exhibiting forgetting of old tasks. If we are to use ANNs as models of biological vision, then the discrepancies in the learning dynamics of the two systems remain to be addressed. Future work will test the currently described analysis framework for characterizing representational changes in continual learning on biological data to further understand where, how, and when the visual system copes with newly arriving information. Figure 1: **A:** Classification accuracy averaged over the ten tasks sampled from CIFAR-100. Prior to averaging, task performance trajectories are temporally aligned to task onset such that the x-axis reflects performance after \(t\) additional tasks have been learned. The shaded area around each line indicates the standard error computed over five repetitions of the procedure. **B:** Mean class representations for a task split. Task mean representations from all phases are projected to a shared two-dimensional space using multidimensional scaling (Torgerson, 1952). Shown are representation vectors directly after learning the task, after learning 5, and after learning 9 additional tasks (left to right). **C:** Mean performance over all tasks at task onset (\(t=0\)). Network size does not have an effect on how well tasks are learned initially. **D:** Performance loss measured as the difference between performance at \(t=0\) and the mean over performances measured at all \(t>0\). Standard error for additional networks is computed over three simulations. ## Acknowledgments The project was financed by the Deutsche Forschungsgemeinschaft (DFG, research training group "Computational Cognition", GRK2340), as well as the European Research Council (ERC, TIME, Project 101039524). Compute resources used for this project are funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation), project number 456666331.
2303.02013
Dembowski's Theorem on Finite Inversive Planes of Even Order
A remarkable theorem due to Peter Dembowski states that if $I$ is an inversive plane of even order $q$ then $q$ must be a power of two and $I$ must be the incidence system of points versus plane ovals in an ovoid in the projective $3$-space over the field of order $q$. In this paper we present a short and self-contained proof of this result. Our proof depends on the classification due to Benson of the symmetric and regular finite generalized quadrangles. Included here is a deduction of Benson's Theorem from the Dembowski-Wagner combinatorial characterization of finite projective geometries.
Bhaskar Bagchi
2023-03-03T15:28:41Z
http://arxiv.org/abs/2303.02013v1
# Dembowski's Theorem on Finite Inversive Planes of Even Order ###### Abstract A remarkable theorem due to Peter Dembowski states that if \(I\) is an inversive plane of even order \(q\) then \(q\) must be a power of two and \(I\) must be the incidence system of points versus plane ovals in an ovoid in the projective 3-space over the field of order \(q\). In this paper we present a short and self-contained proof of this result. Our proof depends on the classification due to Benson of the symmetric and regular finite generalized quadrangles. Included here is a deduction of Benson's Theorem from the Dembowski-Wagner combinatorial characterization of finite projective geometries. ## 1 Introduction A \(t-(v,k,\lambda)\) design \(D\) is an incidence system satisfying (i) \(D\) has \(v\) points, (ii) each block of \(D\) is incident with exactly \(k\) points, and (iii) any \(t\) distinct points of \(D\) are together incident with exactly \(\lambda\) blocks of \(D\). An easy counting argument shows that, for \(0\leq s\leq t\), any \(t-(v,k,\lambda)\) design is also an \(s-(v,k,\lambda_{s})\) design where \(\lambda_{s}=\lambda{v-s\choose t-s}/{k-s\choose t-s}\). (In the following, we shall use this formula several times, without further mention.) In particular, the number of blocks of the \(t\)-design is \(b:=\lambda_{0}=\lambda{v\choose t}/{k\choose t}\), and, when \(t>0\), the number of blocks incident with each point is \(r:=\lambda_{1}=bk/v\). A famous result of Fisher states that the parameters of any non-trivial 2-design satisfy \(b\geq v\) (equivalently \(r\geq k\)). A 2-design with \(b=v\) (equivalently \(r=k\)) is called a symmetric 2-design. A 2-design is symmetric iff any two distinct blocks of the design are together incident with \(\lambda\) points. This happens iff the dual (obtained by interchanging the notions of points and blocks) of the 2-design is again a 2-design (necessarily with the same parameters). Note that a \(2-(v,k,\lambda)\) design is symmetric iff its parameters satisfy \(k(k-1)=\lambda(v-1)\). The parameter \(\lambda\) of a \(t\)-design is called its **balance**. A \(t\)-design with balance \(\lambda=1\) is said to be a **Steiner t-design**. A **partial linear space** is an incidence system with at most one block incident with each pair of distinct points. In particular, when each point-pair is incident with a unique block, the incidence system is said to be a **linear space**. The blocks of a partial linear space are usually called its **lines**. Note that the Steiner 2-designs are linear spaces. The symmetric Steiner 2-designs are the **finite projective planes**. These are precisely the 2-designs with parameters \(v=n^{2}+n+1,k=n+1,\lambda=1\). This number \(n\) is called the **order of the finite projective plane**. An **affine plane of order n** is a \(2-(n^{2},n,1)\) design. Given a projective plane of order \(n\) and a line \(\ell\) in it, the incidence system obtained by deleting the line \(\ell\) and the points on \(\ell\) is an affine plane of order \(n\). This process may be reversed as follows. It can be shown that the lines of an affine plane naturally break up into \(n+1\) "parallel classes" such that the lines in each parallel class partition the point set and any two lines from different parallel classes intersect (here we have adopted the usual convention of identifying any line of a partial linear space with the set of points incident with it).
Given an affine plane of order \(n\), one obtains a projective plane of order \(n\) by adjoining \(n+1\) points ("at infinity") corresponding to the parallel classes of the affine plane, and adjoining a single new line ("at infinity") incident with these points at infinity. This is called the **projective closure** of the given affine plane. Given an incidence system \(D\), and a point \(x\) of \(D\), the **contraction** \(D_{x}\) of \(D\) at \(x\) is the incidence system whose points are the points of \(D\) other than \(x\), blocks are the blocks of \(D\) incident with \(x\), and whose incidence is the restriction of the incidence relation of \(D\) to these points and blocks. Clearly, when \(t>0\), each point contraction of a \(t-(v,k,\lambda)\) design is a \((t-1)-(v-1,k-1,\lambda)\) design. A **one-point extension** of a \(t-(v,k,\lambda)\) design \(D\) is a \((t+1)-(v+1,k+1,\lambda)\) design \(E\) (when it exists) such that \(D\) is the contraction of \(E\) at some point. A one-point extension of an affine plane of order \(n\) is called an **inversive plane of order n**. These are just the \(3-(n^{2}+1,n+1,1)\) designs. Clearly, the order of an inversive plane is the common order of all its point-contractions. The blocks of an inversive plane are usually called the **circles of the inversive plane**. Given a prime power \(q\) and \(n\geq 1\), the \(n\)-dimensional projective space over the field of order \(q\) is denoted by \(PG(n,q)\). We note that, when \(n\geq 2\), the incidence system of points versus hyper-planes of \(PG(n,q)\) is an example of a (symmetric) \(2-((q^{n+1}-1)/(q-1),(q^{n}-1)/(q-1),(q^{n-1}-1)/(q-1))\) design. The projective space may be uniquely recovered from this design: the flats of the projective space are just the finite intersections of the blocks of the design. In particular, for prime powers \(q\), \(PG(2,q)\) is an example of a projective plane of order \(q\). An **oval** in a projective plane of order n is a set of \(n+1\) points no three of which are collinear. Thus each line of the plane meets an oval in \(0,1\) or \(2\) points. Accordingly, the line is said to be a **passant, tangent or secant line to the oval**. It is easy to see that each point of an oval is on a unique tangent line, so that there is a total of \(n+1\) tangent lines. When \(C\) is an oval in a projective plane of **even order** n, there is a unique point \(x\) of the plane such that the tangents to \(C\) are precisely the lines through \(x\). This point is called the **nucleus** of the oval. Clearly, the nucleus of an oval does not belong to the oval. For a prime power \(q\), an **ovoid** in \(PG(3,q)\) is a set of points such that every plane in the projective space meets this set in an oval or in a single point. A plane is said to be **a tangent plane or a secant plane to the ovoid** according as it meets the ovoid in a point or in an oval. It is easy to see that an ovoid in \(PG(3,q)\) has exactly \(q^{2}+1\) points in it, and every point \(x\) in the ovoid is on a unique tangent plane to the ovoid. Indeed, the tangent planes to an ovoid constitute an ovoid in the dual projective 3-space. It is easy to see that no three points in an ovoid are collinear. Since any three non-collinear points of \(PG(3,q)\) are together in a unique plane, it follows that any three points of an ovoid are together in a unique oval contained in the ovoid.
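Since the counting identities for these designs are used repeatedly below, a mechanical check may be helpful. The following Python sketch (illustrative only) evaluates the formula \(\lambda_{s}=\lambda\binom{v-s}{t-s}/\binom{k-s}{t-s}\) from the introduction for an inversive plane of order \(q\), i.e. a \(3-(q^{2}+1,q+1,1)\) design, confirming that it has \(b=q(q^{2}+1)\) circles, \(r=q(q+1)\) circles through each point, and \(q+1\) circles through each pair of points.

```python
from math import comb

def lambda_s(t, v, k, lam, s):
    """Number of blocks through any s points of a t-(v,k,lam) design,
    via lambda_s = lam * C(v-s, t-s) / C(k-s, t-s)."""
    return lam * comb(v - s, t - s) // comb(k - s, t - s)

for q in [2, 3, 4, 5, 7, 8]:  # any prime power order
    v, k = q**2 + 1, q + 1    # an inversive plane is a 3-(q^2+1, q+1, 1) design
    assert lambda_s(3, v, k, 1, 0) == q * (q**2 + 1)  # number of circles b
    assert lambda_s(3, v, k, 1, 1) == q * (q + 1)     # circles through a point r
    assert lambda_s(3, v, k, 1, 2) == q + 1           # circles through two points
print("parameter identities confirmed")
```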
Given an ovoid \(\mathcal{O}\) in \(PG(3,q)\), let \(I(\mathcal{O})\) denote the incidence system whose points are the points in \(\mathcal{O}\), and blocks are the ovals contained in \(\mathcal{O}\). It is immediate from the discussion in the previous paragraph that \(I(\mathcal{O})\) is an example of an inversive plane (of prime power order \(q\)). All the known finite inversive planes arise from this construction. The following is a famous theorem of Dembowski (see [4, 5]). Its proof occupies most of an entire chapter in Dembowski's book [5], even though this book has an extremely cryptic style. **Theorem 1.1** (Dembowski): Let \(I\) be an inversive plane of even order \(q\). Then \(q\) is a power of two and \(I\) is isomorphic to \(I(\mathcal{O})\) for some ovoid \(\mathcal{O}\) of \(PG(3,q)\). It is widely believed that the order of any finite projective plane (equivalently, of any finite affine plane) must be a prime power. Theorem 1.1 shows that for an affine plane of even order to have a one-point extension, the order must be a power of two. While it required powerful computers to establish that there is no affine plane of order \(10\) (see [7]), it is immediate from this theorem that there is no inversive plane of order \(10\). Recall that a **linear complex of lines** in \(PG(3,q)\) is the set of all totally isotropic lines with respect to a non-degenerate symplectic form on (the underlying vector space of) \(PG(3,q)\). In the next section, we discuss regularity of generalized quadrangles and present a proof of Benson's theorem characterizing the linear complexes of lines in \(PG(3,q)\) (\(q\) prime power) as the only regular symmetric finite generalized quadrangles. Our proof depends on a beautiful combinatorial characterization of finite projective spaces due to Dembowski and Wagner. In the third and final section, we present a short proof of Dembowski's theorem (Theorem 1.1), deducing it from Benson's theorem. We believe that the proof given here is much more transparent than Dembowski's original proof. For unproved assertions on Design Theory/Finite Projective Spaces made in this paper, the reader may consult [2, 5] and [6]. ## 2 Benson's Theorem Recall ([8]) that, for integers \(s,t\), a **generalized quadrangle** (in short, a GQ) of order \((s,t)\) is a partial linear space \(X\) with \(s+1\) points on each line and \(t+1\) lines through each point satisfying the following property: given any point \(x\) and line \(\ell\) of \(X\) such that \(x\not\in\ell\), there is a unique point \(y\) such that \(y\in\ell\) and \(y\) is collinear with \(x\). An easy counting argument shows that a GQ \(X\) of order \((s,t)\) has \((s+1)(st+1)\) points and \((t+1)(st+1)\) lines. The **star of a point \(x\)** (denoted \(\operatorname{star}(x)\)) in a GQ is the set of all points \(y\) (including \(x\) itself) collinear with \(x\). Clearly the star of any point contains \(s(t+1)+1\) points. It also readily follows from the definition of a GQ that, for any two non-collinear points \(x,y\) of \(X\), exactly \(t+1\) points are collinear with both \(x\) and \(y\). The set of these \(t+1\) points is called **the trace** of the pair \(x,y\). Since any two of the points in the trace are non-collinear, it follows that at most \(t+1\) points are collinear with all the points in the trace of \(x,y\). This set of points is called **the span** of the pair \(x,y\). An unordered pair \(\{x,y\}\) of non-collinear points in a GQ \(X\) is said to be regular if the span of \(\{x,y\}\) has (the maximum possible) size \(t+1\).
\(X\) is said to be a **regular** GQ if all such pairs in \(X\) are regular. Observe that if \(\sigma\) is a span in a regular GQ, then the \(t+1\) points in \(\sigma\) are mutually non-collinear, and \(\sigma\) is the span of any two of its points. The **collinearity graph** of a partial linear space \(X\) is the graph having the points of \(X\) as vertices, where two points are adjacent iff they are distinct and collinear in \(X\). Recall that, for positive integers \(m,n\), the **complete bipartite graph** \(K_{m,n}\) has \(m+n\) vertices, split into two parts of size \(m\) and \(n\), such that two vertices of the graph are adjacent iff they belong to different parts. We have the following straightforward graphical reformulation of the notion of regularity of generalized quadrangles: **Lemma 2.1**: The collinearity graph of any GQ \(X\) of order \((s,t)\) has \(\leq\frac{1}{2}s^{2}(s+1)(st+1)(t+1)^{-1}\) induced subgraphs isomorphic to \(K_{t+1,t+1}\). Equality holds here iff \(X\) is regular. **Proof:** Let \(N\) be the total number of induced \(K_{t+1,t+1}\) in the collinearity graph of \(X\). Note that, for any unordered pair \(e\) of non-collinear points of \(X\), and any induced subgraph \(K_{t+1,t+1}\) containing \(e\) in the collinearity graph of \(X\), the two parts of this graph must be the trace and the span of \(e\). Thus \(e\) is contained in at most one \(K_{t+1,t+1}\), and such a \(K_{t+1,t+1}\) exists iff \(e\) is regular. Since \(X\) has \((s+1)(st+1)\) points, and each point is non-collinear with \((s+1)(st+1)-(s(t+1)+1)=s^{2}t\) other points, it follows that there are \(\frac{1}{2}s^{2}t(s+1)(st+1)\) such pairs \(e\). Also each of the \(N\) \(K_{t+1,t+1}\) in \(X\) contains exactly \(2\binom{t+1}{2}\) such pairs. Therefore we may count in two ways the total number of pairs \((e,G)\) where \(e\) is as above and \(G\) is a copy of \(K_{t+1,t+1}\) whose vertex set contains \(e\). This yields \(2\binom{t+1}{2}N\leq\frac{1}{2}s^{2}t(s+1)(st+1)\), with equality iff \(X\) is regular. \(\Box\) Let \(q\) be a prime power. Recall that a **polarity** of \(PG(3,q)\) is an incidence preserving permutation interchanging points and planes (and hence mapping lines to lines). A point or plane is said to be absolute (with respect to a given polarity) if it is incident with its image under the polarity. A polarity is said to be a **null polarity** if all points (equivalently planes) are absolute. Let \(W(q)\) denote the partial linear space whose points are the points of \(PG(3,q)\), and whose lines are the lines of \(PG(3,q)\) fixed by a given null polarity. Under its action by conjugation, the collineation group of \(PG(3,q)\) is transitive on the null polarities on \(PG(3,q)\), so that this defines the incidence system \(W(q)\) uniquely, up to isomorphism. A null polarity is given by ortho-complementation with respect to a non-degenerate symplectic form on the underlying vector space. Using this fact, it is easy to verify that, for any prime power \(q\), \(W(q)\) is a regular GQ of order \((q,q)\). In [1], Benson proved: **Theorem 2.2** (Benson): If \(W\) is a regular GQ of order \((q,q)\), then \(q\) is a prime power and \(W\) is (isomorphic to) \(W(q)\). We recall that, if \(x,y\) are two distinct points of a 2-design \(D\), then **the line of the 2-design** joining \(x\) and \(y\) is the intersection of all the blocks of \(D\) containing \(\{x,y\}\).
If \(\lambda\) is the balance of \(D\), a line of \(D\) may be defined as a set of \(\geq 2\) points of \(D\) which can be expressed as the intersection of \(\lambda\) distinct blocks of \(D\). A famous theorem of Dembowski and Wagner ([3]) states: **Theorem 2.3** (Dembowski and Wagner): Let \(D\) be a symmetric 2-design of balance \(>1\). Suppose every line of \(D\) intersects every block of \(D\). Then there is a prime power \(q\) and an integer \(n\geq 3\) such that \(D\) is isomorphic to the design of points versus hyper-planes in \(PG(n,q)\). **Proof of Theorem 2.2**: We begin with two claims: (i) every line of \(W\) intersects every star in \(W\), and (ii) every span in \(W\) intersects every star in \(W\). (i) is obvious since \(W\) is a GQ: if \(\ell\) is a line and \(x\) is a point of \(W\), \(\ell\subseteq\operatorname{star}(x)\) when \(x\in\ell\); and, when \(x\not\in\ell\), \(\ell\cap\operatorname{star}(x)=\{y\}\) where \(y\) is the unique point in \(\ell\) collinear with \(x\). To prove (ii), note that, since \(W\) is regular, all point pairs in a span \(\sigma\) have a common trace, say \(\tau\). If \(x\in\tau\), then \(\sigma\subseteq\operatorname{star}(x)\). In the contrary case, \(\operatorname{star}(x)\) meets \(\sigma\) in at most one point. Therefore, the sets \(A(y):=\operatorname{star}(y)\setminus\tau\), \(y\in\sigma\), are \(q+1\) pairwise disjoint sets of size \(q^{2}\) each (if, for \(y_{1}\neq y_{2}\) in \(\sigma\), \(z\in A(y_{1})\cap A(y_{2})\), then \(z\not\in\tau\) and \(z\in\operatorname{trace}(\{y_{1},y_{2}\})=\tau\), a contradiction). They are contained in the complement of \(\tau\), which is a set of size \((q+1)q^{2}\). So these sets partition the complement of \(\tau\). Therefore, for any point \(x\not\in\tau\), \(x\in\operatorname{star}(y)\) (i.e., \(y\in\operatorname{star}(x)\)) for a unique point \(y\in\sigma\). Hence \(\sigma\cap\operatorname{star}(x)=\{y\}\). This proves (ii). Let \(D\) be the incidence system whose points are the points of \(W\) and whose blocks are the stars of points of \(W\). Clearly \(D\) has \((q+1)(q^{2}+1)\) points and equally many blocks. Also, as each point \(x\) of \(W\) is collinear with \(1+q(q+1)=q^{2}+q+1\) points, it follows that each block of \(D\) has size \(q^{2}+q+1\). Also, for points \(x_{1}\neq x_{2}\), the blocks of \(D\) containing both these points are \(\operatorname{star}(x)\), \(x\in\operatorname{trace}(\{x_{1},x_{2}\})\), when \(x_{1},x_{2}\) are non-collinear in \(W\); when \(x_{1},x_{2}\) are collinear (say, lying on the line \(\ell\) of \(W\)), these blocks are \(\operatorname{star}(x)\), \(x\in\ell\). Thus \(D\) is a (symmetric) \(2-((q+1)(q^{2}+1),q^{2}+q+1,q+1)\) design. Also, the line of \(D\) joining any two collinear points is just a line of \(W\), while the line of \(D\) joining any two non-collinear points is the span of this pair. Therefore, in view of the preceding paragraph, every line of \(D\) meets every block of \(D\). Therefore, as \(D\) is a symmetric design of balance \(>1\), Theorem 2.3 applies and shows that \(q\) is a prime power and \(D\) is the points versus planes design of \(PG(3,q)\). By this construction of \(PG(3,q)\) from \(W\), one sees that each plane of \(PG(3,q)\) is of the form \(\operatorname{star}(x)\) for a unique point \(x\). But, for any two points \(x,y\), we have \(y\in\operatorname{star}(x)\) iff \(x,y\) are collinear in \(W\).
Since collinearity is a reflexive and symmetric relation on the point set, it follows that the map \(x\mapsto\operatorname{star}(x)\) is a null polarity of \(PG(3,q)\). From the description of the lines of \(PG(3,q)\) given in terms of \(W\) (as the lines and spans of \(W\)), it is now immediate that the lines of \(W\) are the fixed lines of this polarity. Hence \(W\) is \(W(q)\). \(\square\) ## 3 Dembowski's Theorem An **ovoid of a generalized quadrangle** is a set of points of the GQ meeting every line of the GQ in a unique point. Clearly an ovoid in a GQ of order \((s,t)\) has \(st+1\) points, no two of which are collinear, and the star of a point \(x\) meets it in \(1\) or \(t+1\) points, according as \(x\) does or does not belong to the ovoid. **Lemma 3.1**: Any ovoid of \(W(q)\) is an ovoid of the ambient \(PG(3,q)\). **Proof:** Since the stars of points of \(W(q)\) are just the planes in the ambient \(PG(3,q)\), it follows that any plane of \(PG(3,q)\) meets an ovoid \({\cal O}\) of \(W(q)\) in one or \(q+1\) points. Let \(\ell\) be a line of \(PG(3,q)\) meeting \({\cal O}\) in \(m\geq 2\) points. The planes through \(\ell\) induce a partition of the \((q^{2}+1-m)\)-set \({\cal O}\setminus\ell\) into \(q+1\) parts of size \(q+1-m\) each. So we have \((q+1)(q+1-m)=q^{2}+1-m\), i.e., \(m=2\). Thus, no three points of \({\cal O}\) are collinear in \(PG(3,q)\). It follows that \({\cal O}\) is an ovoid of \(PG(3,q)\). \(\square\) Let \({\cal O}\) be an ovoid of \(W(q)\). Since the secant planes to \({\cal O}\) are just the stars of the points in the complement of \({\cal O}\), we may identify the secant planes (and hence the ovals in \({\cal O}\)) with the points of \(W\) outside \({\cal O}\). Thus, the inversive plane \(I({\cal O})\) (defined in the introduction) has the following intrinsic description in terms of \(W(q)\). The points and blocks of \(I({\cal O})\) are the points in \({\cal O}\) and the points in the complement of \({\cal O}\), respectively. For points \(x,y\) of \(W(q)\) such that \(x\in{\cal O}\) and \(y\not\in{\cal O}\), the point \(x\) is incident with the block \(y\) iff \(x\in\operatorname{star}(y)\), i.e., iff \(x\) and \(y\) are collinear in \(W(q)\). In view of Lemma 3.1, Theorem 1.1 is an immediate consequence of the following reformulation, which we prove in this section. **Theorem 3.2**: Let \(I\) be an inversive plane of even order \(q\). Then \(q\) is a power of two and \(I\) is isomorphic to \(I({\cal O})\) for some ovoid \({\cal O}\) of \(W(q)\). Note that if \(C\) is a circle of an inversive plane \(I\) of even order \(q\) and \(x\) is a point of \(I\) with \(x\not\in C\), then \(C\) is an oval in the projective closure of the contraction \(\pi\) of \(I\) at \(x\); hence there is a unique point (the nucleus of the oval \(C\)) through which all the tangent lines to \(C\) in \(\pi\) pass. (Since the line at infinity is a passant to \(C\), the nucleus of \(C\) must be an affine point.) We say that two circles of an inversive plane are tangent if they have exactly one point in common. Recall ([5], p. 253) that a **pencil** in an inversive plane is a set of mutually tangent circles through some point \(x\) which induces a partition of the set of all points other than \(x\). The point \(x\) is said to be the **carrier** of the pencil. **Lemma 3.3**: Let \(I\) be an inversive plane of order \(q\). (a) If \(x\) is a point and \(C\) is a circle of \(I\) such that \(x\in C\), then there is a unique pencil \(p\) of \(I\) such that \(C\in p\) and \(x\) is the carrier of \(p\).
(b) Suppose \(q\) is even. If \(p\) is a pencil of \(I\) with carrier \(x\) and \(C\) is a circle of \(I\) such that \(x\not\in C\) and \(C\not\in p\), then there is a unique circle in \(p\) which is tangent to \(C\). **Proof**: (a) Note that a set \(p\) of circles is a pencil of \(I\) iff \(\{C\setminus\{x\}:C\in p\}\) is a parallel class of lines in the affine plane \(\pi\) obtained by contracting \(I\) at \(x\). Therefore, this result follows as every line of the affine plane is in a unique parallel class. (b) Consider the contraction \(\pi\) of \(I\) at \(x\). Let \(\infty\) denote the point at infinity (in the projective closure of the affine plane \(\pi\)) corresponding to the parallel class \(\{D\setminus\{x\}:D\in p\}\). Note that \(C\) is an oval in this plane of even order \(q\). Since \(\infty\not\in C\), and the line at infinity (which passes through \(\infty\)) is a passant, not a tangent, to \(C\), it follows that there is a unique line \(l\) (in this projective closure) through \(\infty\) which is a tangent to \(C\). Then \((l\setminus\{\infty\})\cup\{x\}\) is the unique circle in the pencil \(p\) which is tangent to \(C\). \(\Box\) **Lemma 3.4**: Let \(e\) be a set of two points in an inversive plane \(I\) of even order \(q\). Then there are exactly \(q-1\) circles \(C\) of \(I\) such that \(C\) is tangent to all the circles of \(I\) containing \(e\). These \(q-1\) circles partition the complement of \(e\) in the point set of \(I\). **Proof**: Let \(e=\{x,y\}\). Let \({\cal C}_{e}\) be the set of all circles \(C\) of \(I\) such that all the circles containing \(e\) are tangent to \(C\). We first note that, for \(C\in{\cal C}_{e}\), we have \(C\cap e=\emptyset\). Clearly, at most one of the points \(x,y\) can be in \(C\). Say \(y\not\in C\). Let \(\pi\) be the affine plane obtained by contracting \(I\) at \(y\). Let \(\overline{\pi}\) be its projective closure. \(C\) is an oval in \(\overline{\pi}\). Since \(C\in{\cal C}_{e}\), all the lines of \(\overline{\pi}\) through \(x\) are tangent to \(C\). Therefore, \(x\) is the nucleus of \(C\). Hence \(x\not\in C\). Now, let \(C_{1},C_{2}\) be two distinct circles in \({\cal C}_{e}\). We claim that \(C_{1}\cap C_{2}=\emptyset\). Suppose not. Say, \(z\in C_{1}\cap C_{2}\). Let \(\pi\) be the contraction of \(I\) at \(z\). Let \(\overline{\pi}\) be the projective closure of \(\pi\), obtained by adjoining the line at infinity \(\ell\), say. Also, let \(m\) be the line of \(\overline{\pi}\) joining the points \(x,y\). For \(i=1,2\), let \(\ell_{i}\) be the line of \(\overline{\pi}\) extending the affine line \(C_{i}\setminus\{z\}\). Let \(w\) denote the point of \(\overline{\pi}\) at the intersection of \(\ell_{1},\ell_{2}\). Note that any of the \(q\) circles \(D\) containing \(e\) but not containing \(z\) is an oval of \(\overline{\pi}\). The point \(w\) is on at least two tangents to \(D\), namely \(\ell_{1},\ell_{2}\). Since \(\overline{\pi}\) is a projective plane of even order \(q\), it follows that \(w\) is the nucleus of \(D\). So, \(w\not\in D\) for any of the circles \(D\) of \(I\) such that \(z\not\in D,e\subseteq D\). Also, as \(\ell\) is a passant to \(D\) and \(m\) is a secant to \(D\), it follows that \(w\not\in\ell\cup m\). This is a contradiction since the lines \(\ell,m\) together with these \(q\) ovals \(D\) clearly cover the entire point set of \(\overline{\pi}\). This proves the claim.
Thus, the circles in \({\cal C}_{e}\) are pairwise disjoint sets of size \(q+1\) each, contained in the complement (of size \(q^{2}-1\)) of \(e\). Therefore, \(\#({\cal C}_{e})\leq(q^{2}-1)/(q+1)=q-1\), with equality iff the circles in \({\cal C}_{e}\) partition the complement of \(e\). So, to complete the proof, it suffices to show that equality holds here for every 2-subset \(e\) of the point set of \(I\). For any circle \(C\) and point \(x\) of \(I\) such that \(x\not\in C\), \(C\) is an oval in the projective closure \(\overline{\pi}\) of the contraction \(\pi\) of \(I\) at \(x\). Since \(\overline{\pi}\) is of even order \(q\), there is a unique point \(y\) of \(\overline{\pi}\) such that all the lines of \(\overline{\pi}\) through \(y\) are tangents to \(C\). Since the line at infinity is a passant to \(C\), \(y\) must be an affine point. Thus, each of the \((q^{2}+1)-(q+1)=q^{2}-q\) points \(x\) of \(I\) in the complement of \(C\) is in a unique 2-set \(e=\{x,y\}\) such that \(C\in{\cal C}_{e}\). Therefore, for each of the \(q(q^{2}+1)\) circles \(C\) of \(I\), there are exactly \(\frac{q^{2}-q}{2}\) 2-sets \(e\) such that \(C\in{\cal C}_{e}\). Hence, as \(e\) varies over the \(\binom{q^{2}+1}{2}\) 2-subsets \(e\) of the point set of \(I\), the average size of \({\cal C}_{e}\) is \(q(q^{2}+1)\frac{q^{2}-q}{2}/\binom{q^{2}+1}{2}=q-1\). But we have seen that \(q-1\) is also an upper bound on the size of each \({\cal C}_{e}\). Hence \(\#({\cal C}_{e})=q-1\) for all \(e\). \(\square\) **Proof of Theorem 3.2:** Let \(I\) be an inversive plane of even order \(q\). Consider the incidence system \(W\) whose points are the points and circles of \(I\), and such that, corresponding to each pencil \(p\) of \(I\), \(W\) has a block \(\hat{p}\) consisting of the carrier of \(p\) and the circles in \(p\). From Lemma 3.3 (a), it is immediate that \(W\) is a partial linear space with \(q+1\) points on each line and \(q+1\) lines through each point. Notice that, by the construction of \(W\), we have (i) no two points of \(I\) are collinear in \(W\), (ii) if \(x\) is a point and \(C\) is a circle of \(I\), then \(x\) and \(C\) are collinear in \(W\) iff \(x\in C\), and (iii) two circles \(C_{1},C_{2}\) of \(I\) are collinear in \(W\) iff \(C_{1}\) and \(C_{2}\) are (equal or) tangent circles. We claim that \(W\) is isomorphic to \(W(q)\). Once this claim is established, the proof will be complete since, by the construction of \(W\), the point set \({\cal O}\) of \(I\) is an ovoid of \(W\) and \(I=I({\cal O})\). Let \(p\) be a pencil of \(I\) with carrier \(x\), and let \(l=\hat{p}\) be the corresponding line of \(W\). To prove that \(W\) is a GQ (of order (q,q)), we need to show that any given point of \(W\) not in the line \(l\) is collinear with a unique point of \(W\) in \(l\). Since the circles in the pencil \(p\) induce a partition of the points other than \(x\), this is obvious if the given point of \(W\) is a point \(y\neq x\) of \(I\) : in this case the unique circle in \(p\) containing \(y\) is the only point of \(l\) collinear with \(y\). Next let this point of \(W\) be a circle \(C\not\in p\) of \(I\). If \(x\in C\), then \(x\) is the only point on \(l\) collinear with \(C\) in \(W\). If \(x\not\in C\), then the circle of \(I\) guaranteed by Lemma 3.3 (b) is the unique point in \(l\) collinear with this point of \(W\). Thus \(W\) is indeed a GQ of order \((q,q)\). With each 2-subset \(e\) of the point set of \(I\), we associate a subset of the point set of \(W\) as follows. 
This set consists of the two points in \(e\), the \(q+1\) circles of \(I\) containing \(e\), and the \(q-1\) circles of \(I\) guaranteed by Lemma 3.4. It is immediate from Lemma 3.4 and the above description of collinearity in \(W\) that the collinearity graph of \(W\) induces a \(K_{q+1,q+1}\) on this set. Thus, corresponding to the \({q^{2}+1\choose 2}\) 2-subsets of the point set of \(I\), we have found \({q^{2}+1\choose 2}\)\(K_{q+1,q+1}\) in the collinearity graph of \(W\). But \({q^{2}+1\choose 2}\) is the upper bound in Lemma 2.1 in the case \(s=t=q\). Therefore, by Lemma 2.1, \(W\) is a regular GQ of order \((q,q)\). Hence Theorem 2.2 implies that \(q\) is a prime power and \(W\) is isomorphic to \(W(q)\). Since \(q\) is even, it follows that \(q\) is a power of two. \(\square\)
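The proof just given rests on two counting identities: the average size of \({\cal C}_{e}\) computed in Lemma 3.4, and the fact that \({q^{2}+1\choose 2}\) attains the bound of Lemma 2.1 when \(s=t=q\). Both are elementary arithmetic, and the short illustrative sketch below double-checks them for several even orders.

```python
from fractions import Fraction
from math import comb

def avg_C_e(q):
    """Average of #C_e over all 2-subsets e (Lemma 3.4): q(q^2+1) circles,
    each lying in C_e for (q^2 - q)/2 two-element sets e."""
    return q * (q**2 + 1) * Fraction(q**2 - q, 2) / comb(q**2 + 1, 2)

def lemma_2_1_bound(s, t):
    """Upper bound (1/2) s^2 (s+1)(st+1)/(t+1) on induced K_{t+1,t+1}."""
    return Fraction(s**2 * (s + 1) * (s * t + 1), 2 * (t + 1))

for q in [2, 4, 8, 16, 32]:
    assert avg_C_e(q) == q - 1                         # forces #C_e = q-1 for every e
    assert lemma_2_1_bound(q, q) == comb(q**2 + 1, 2)  # the bound is attained
print("both counting identities verified")
```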
2310.15457
An Unconditionally Stable Iterative Decoupled Algorithm for Multiple-Network Poroelasticity
In this work, we introduce an iterative decoupled algorithm designed for addressing the quasi-static multiple-network poroelasticity problem. This problem pertains to the simultaneous modeling of fluid flow and deformations within an elastic porous medium permeated by multiple fluid networks, each with distinct characteristics. Our approach focuses on the total-pressure-based formulation, which treats the solid displacement, total pressure, and network pressures as primary unknowns. This formulation transforms the original problem into a combination of the generalized Stokes problem and the parabolic problem, offering certain advantages such as mitigating elastic locking effects and streamlining the discretization process. Notably, the algorithm ensures unconditional convergence to the solution of the total-pressure-based coupled algorithm. To validate the accuracy and efficiency of our method, we present numerical experiments. The robustness of the algorithm with respect to the physical parameters and the discretization parameters is carefully investigated.
Meng Lei, Mingchao Cai, Feng Wang
2023-10-24T02:11:15Z
http://arxiv.org/abs/2310.15457v2
# An Unconditionally Stable Iterative Decoupled Algorithm for Multiple-Network Poroelasticity ###### Abstract In this work, we introduce an iterative decoupled algorithm designed for addressing the quasi-static multiple-network poroelasticity problem. This problem pertains to the simultaneous modeling of fluid flow and deformations within an elastic porous medium permeated by multiple fluid networks, each with distinct characteristics. Our approach focuses on the total-pressure-based formulation, which treats the solid displacement, total pressure, and network pressures as primary unknowns. This formulation transforms the original problem into a combination of the generalized Stokes problem and the parabolic problem, offering certain advantages such as mitigating elastic locking effects and streamlining the discretization process. Notably, the algorithm ensures unconditional convergence to the solution of the total-pressure-based coupled algorithm. To validate the accuracy and efficiency of our method, we present numerical experiments. The robustness of the algorithm with respect to the physical parameters and the discretization parameters is carefully investigated. **Keywords: multiple-network poroelasticity, iterative decoupled algorithm, total pressure** ## 1 Introduction Let \(\Omega\subset\mathbb{R}^{2}\) be a bounded polygonal domain. The quasi-static multiple-network poroelasticity problem [1] is to find the displacement \(\boldsymbol{u}\) and the network pressures \(\vec{p}\) such that \[-\operatorname{div}\sigma(\boldsymbol{u})+\nabla(\vec{\alpha}^{\mathsf{T}}\vec{p})=\boldsymbol{f}\quad\text{ in }\Omega\times[0,T], \tag{1a}\] \[\vec{\alpha}\operatorname{div}\dot{\boldsymbol{u}}+S\dot{\vec{p}}+B\vec{p}-\operatorname{div}(K\nabla\vec{p})=\vec{g}\quad\text{ in }\Omega\times[0,T]. \tag{1b}\] Here, \(\boldsymbol{u}=(u_{1}(\boldsymbol{x},t),u_{2}(\boldsymbol{x},t))^{\mathsf{T}}\) and \(\vec{p}=(p_{1}(\boldsymbol{x},t),p_{2}(\boldsymbol{x},t),\cdots,p_{N}(\boldsymbol{x},t))^{\mathsf{T}}\), with a given number of networks \(N\). The operators and parameters are defined as follows. The effective stress and the strain tensor are denoted by \(\sigma(\boldsymbol{u})=2\mu\epsilon(\boldsymbol{u})+\lambda\operatorname{div}(\boldsymbol{u})\boldsymbol{I}\) and \(\epsilon(\boldsymbol{u})=\frac{1}{2}\left(\nabla\boldsymbol{u}+(\nabla\boldsymbol{u})^{\mathsf{T}}\right)\), respectively. The Lamé parameters \(\lambda\) and \(\mu\) are expressed in terms of the Young modulus \(E\) and Poisson ratio \(\nu\in[0,\frac{1}{2})\) by \(\lambda=\frac{\nu E}{(1+\nu)(1-2\nu)}\) and \(\mu=\frac{E}{2(1+\nu)}\), respectively. The column vector \(\vec{\alpha}=(\alpha_{1},\alpha_{2},\cdots,\alpha_{N})^{\mathsf{T}}\), where \(\alpha_{i}\in(0,1]\) is the Biot-Willis coefficient. The matrix \(S=\operatorname{diag}(c_{1},c_{2},\cdots,c_{N})\), where \(c_{i}\geq 0\) is the storage coefficient. The matrix \(K=\operatorname{diag}(K_{1},K_{2},\cdots,K_{N})\), where \(K_{i}\) is the hydraulic conductivity coefficient. The coefficient matrix \(B\) satisfies \((B\vec{p})_{i}=\sum_{j=1,j\neq i}^{N}\beta_{ij}\left(p_{i}-p_{j}\right)\), where the non-negative network transfer coefficients \(\beta_{ij}\) couple the network pressures and \(\beta_{ij}=\beta_{ji},1\leq i,j\leq N,j\neq i\). Further, \(\boldsymbol{f}=(f_{1},f_{2})^{\mathsf{T}}\), where \(f_{i}\) represents the body force, and \(\vec{g}=(g_{1},g_{2},\cdots,g_{N})^{\mathsf{T}}\), where \(g_{i}\) represents the source in the \(i\)th network.
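A quick numerical aside on the parameters just introduced: \(\lambda\) grows without bound as the Poisson ratio \(\nu\) approaches \(\frac{1}{2}\) while \(\mu\) stays bounded, which is precisely the nearly incompressible regime in which the locking phenomenon discussed below arises. The material values in this illustrative sketch are hypothetical.

```python
def lame_parameters(E, nu):
    """Lame parameters from Young's modulus E and Poisson ratio nu in [0, 1/2)."""
    lam = nu * E / ((1 + nu) * (1 - 2 * nu))
    mu = E / (2 * (1 + nu))
    return lam, mu

E = 1.0e4  # hypothetical Young's modulus
for nu in [0.3, 0.45, 0.49, 0.499]:
    lam, mu = lame_parameters(E, nu)
    print(f"nu = {nu:6.3f}:  lambda = {lam:12.2f},  mu = {mu:10.2f}")
# lambda blows up as nu -> 1/2, while mu remains O(E): the locking-prone regime.
```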
The system (1) is well-posed with proper boundary and initial conditions. In this paper, we consider the mixed boundary conditions, assuming \(\partial\Omega=\Gamma_{\boldsymbol{u},D}\cup\Gamma_{\boldsymbol{u},N}=\Gamma_{\vec{p},D}\cup\Gamma_{\vec{p},N},|\Gamma_{\boldsymbol{u},D}\cap\Gamma_{\boldsymbol{u},N}|=0,|\Gamma_{\vec{p},D}\cap\Gamma_{\vec{p},N}|=0,|\Gamma_{\boldsymbol{u},D}|>0,|\Gamma_{\vec{p},D}|>0.\) More clearly, the following boundary and initial conditions [2] are imposed. \[\boldsymbol{u} =\boldsymbol{0} \text{ on }\Gamma_{\boldsymbol{u},D},\] \[\left(\sigma(\boldsymbol{u})-(\vec{\alpha}^{\mathsf{T}}\vec{p})\boldsymbol{I}\right)\boldsymbol{n} =\boldsymbol{h} \text{ on }\Gamma_{\boldsymbol{u},N},\] \[\vec{p} =\vec{0} \text{ on }\Gamma_{\vec{p},D},\] \[(K\nabla\vec{p})\,\boldsymbol{n} =\vec{l} \text{ on }\Gamma_{\vec{p},N},\] \[\boldsymbol{u}(0) =\boldsymbol{u}_{0} \text{ in }\Omega,\] \[\vec{p}(0) =\vec{p}_{0} \text{ in }\Omega.\] The concept of multiple-network poroelasticity has been incorporated into the field of geomechanics [3] as a means to reflect mechanical deformation and fluid flow within porous materials. The multiple-network poroelasticity problem serves as an extension of Biot's model, which has been applied to geomechanics, geophysics, and biology. For instance, when considering the case with \(N=2\), one can obtain the Biot-Barenblatt model [4, 5], which describes consolidation processes within a fluid-saturated double-diffusion model of fractured rock. In another context, Tully and Ventikos investigated a scenario involving four distinct networks (\(N=4\)) as a macroscopic model to describe the dynamics of fluid flows within brain tissue [6]. More recently, the multiple-network poroelasticity model has also been employed to gain a deeper understanding of the impact of biomechanical risk factors associated with the early stages of Alzheimer's disease [7]. The discretization of Biot's equations is widely recognized as a challenging task, due to the poroelastic locking phenomenon. Poroelastic locking is characterized by two main features [2]: (1) underestimation of the solid deformation if the material is close to being incompressible, and (2) nonphysical pressure oscillations. Many existing studies, such as [8; 9], adopt solid displacement and fluid pressure as the primary variables within Biot's consolidation model. However, it has been noted that the elastic locking phenomenon occurs in a model based on the two-field formulation [10; 11]. To address these challenges, researchers have employed various approaches, including the DG method [12], stabilization techniques [13; 14], and various three-field or four-field reformulations [15; 16; 17; 18; 19; 20; 21; 22]. In spite of the extensive research dedicated to exploring Biot's consolidation model, the multiple-network poroelasticity problem has received much less attention. Hong et al. introduced parameter-robust preconditioners [23], proposed a fixed-stress split method [1] and presented an augmented Lagrangian Uzawa algorithm [24] for multiple-network flux-based poroelasticity equations. Lee et al. analyzed a mixed finite element method [2] and proposed two partitioned numerical algorithms [25] for the multiple-network poroelasticity problem. By introducing an intermediate variable \(\xi=\vec{\alpha}^{\mathsf{T}}\vec{p}-\lambda\operatorname{div}\boldsymbol{u}\), we study a formulation which has the solid displacement, the total pressure, and the network pressures as the primary unknowns.
This formulation interprets the original problem as a combination of the generalized Stokes problem and the parabolic problem. The solid displacement and the total pressure are approximated by the classical stable Stokes finite elements, while the network pressures are discretized by the Lagrange finite elements. Importantly, this approach eliminates the need for any assumptions regarding additional physical parameters, such as storage coefficients \(c_{i}>0,i=1,2,\cdots,N\). The article presents an iterative decoupled algorithm for the multiple-network poroelasticity problem (1). Unlike the fixed-stress split method for multiple-network flux-based poroelasticity problem proposed in [1], the iterative decoupled algorithm does not require any stabilization parameters and unconditionally converges to the solution of the total-pressure-based coupled algorithm. In order to assess both the precision and effectiveness of our approach, we provide numerical experiments. Furthermore, we meticulously examine the algorithm's resilience concerning variations in the physical parameters and discretization parameters. The structure of this article is as follows. In Section 2, we present the total-pressure-based formulation which has the solid displacement, the total pressure, and the network pressures as the primary unknowns and the corresponding variational problems for the multiple-network poroelasticity equations. The iterative decoupled algorithm is given in Section 3. In Section 4, we analyze the convergence of the iterative decoupled algorithm. The error analysis of the coupled algorithm is studied in Appendix A. Numerical experiments are presented in Section 5, and conclusions are drawn in Section 6. ## 2 The total-pressure-based formulation and the weak form ### Notation and preliminaries We use \(L^{2}(\Omega)\) and \(L^{2}(\partial\Omega)\) to denote square-integrable real-valued functions defined on \(\Omega\) and \(\partial\Omega\) respectively. The inner product of \(L^{2}(\Omega)\) and the induced norm are denoted by \((\cdot,\cdot)\) and \(\|\cdot\|_{L^{2}(\Omega)}\) respectively. We use \(\langle\cdot,\cdot\rangle\) to denote the \(L^{2}(\partial\Omega)\) inner product. For vector-valued or matrix-valued functions where each component belongs to \(L^{2}(\Omega)\), we use the same notation \((\cdot,\cdot)\) and \(\|\cdot\|_{L^{2}(\Omega)}\) to denote the inner product and norm. For vector-valued functions where each component belongs to \(L^{2}(\partial\Omega)\), we also use the notation \(\langle\cdot,\cdot\rangle\) to denote the inner product. For a non-negative integer \(m\) and \(1<p<+\infty\), we introduce the following Sobolev spaces, \(W^{m,p}(\Omega)=\{u|D^{\alpha}u\in L^{p}(\Omega),0\leq|\alpha|\leq m,\|u\|_{W^ {m,p}(\Omega)}<+\infty\}\); When \(p=2\), we use \(H^{m}(\Omega)\) to denote \(W^{m,2}(\Omega)\). In cases involving vector-valued functions where each component belongs to \(H^{m}(\Omega)\), we consistently employ the notation \(\|\cdot\|_{H^{m}(\Omega)}\) to represent the norm. Furthermore, when \(m\geq 1\), we use \(H^{m}_{0,\Gamma}(\Omega)\) to denote the subspace of \(H^{m}(\Omega)\) comprising functions with a vanishing trace on \(\Gamma\subset\partial\Omega\). For ease of presentation, we assume that \(\Omega\) is a two-dimensional domain in this work. 
### The total-pressure-based formulation and the variational problem In the following, we introduce an intermediate variable to rewrite the problem (1) into the total-pressure-based formulation and present the corresponding variational problem [2]. More clearly, we introduce the so-called "total pressure", \(\xi=\vec{\alpha}^{\mathrm{T}}\vec{p}-\lambda\operatorname{div}\boldsymbol{u}\). Then (1) can be rewritten as \[-2\mu\operatorname{div}\left(\varepsilon(\boldsymbol{u})\right)+\nabla\xi=\boldsymbol{f}, \tag{2a}\] \[-\operatorname{div}\boldsymbol{u}-\frac{1}{\lambda}\xi+\frac{1}{\lambda}\vec{\alpha}^{\mathrm{T}}\vec{p}=0, \tag{2b}\] \[\left(S+\frac{1}{\lambda}\vec{\alpha}\vec{\alpha}^{\mathrm{T}}\right)\dot{\vec{p}}-\frac{1}{\lambda}\vec{\alpha}\dot{\xi}-\operatorname{div}(K\nabla\vec{p})+B\vec{p}=\vec{g}. \tag{2c}\] The initial and boundary conditions we provided earlier can still be applied to the problem (2). In order to describe the variational problem for the total-pressure-based formulation (2), we will utilize the following functional spaces. \[\boldsymbol{V} :=\{\boldsymbol{v}\in[H^{1}(\Omega)]^{2};\boldsymbol{v}|_{\Gamma_{\boldsymbol{u},D}}=\boldsymbol{0}\},\] \[W :=L^{2}(\Omega),\] \[M :=\{\vec{q}\in[H^{1}(\Omega)]^{N};\vec{q}|_{\Gamma_{\vec{p},D}}=\vec{0}\}.\] The properties of these spaces are as follows. The first Korn's inequality [26] holds on \(\boldsymbol{V}\), that is, there exists a constant \(C_{K}=C_{K}(\Omega,\Gamma_{\boldsymbol{u},D})>0\) such that \[\|\boldsymbol{u}\|_{H^{1}(\Omega)}\leq C_{K}\|\epsilon(\boldsymbol{u})\|_{L^{2}(\Omega)}\quad\forall\boldsymbol{u}\in\boldsymbol{V}.\] Furthermore, the following inf-sup condition [26] holds. There exists a constant \(\beta_{0}>0\) depending only on \(\Omega\) and \(\Gamma_{\boldsymbol{u},D}\) such that \[\sup_{\boldsymbol{v}\in\boldsymbol{V}}\frac{(\operatorname{div}\boldsymbol{v},\eta)}{\|\boldsymbol{v}\|_{H^{1}(\Omega)}}\geq\beta_{0}\|\eta\|_{L^{2}(\Omega)}\quad\forall\eta\in W. \tag{3}\] _Assumption 1_: We assume that \(\boldsymbol{u}_{0}\in\boldsymbol{H}^{1}(\Omega),\boldsymbol{f}\in\boldsymbol{L}^{2}(\Omega),\boldsymbol{h}\in\boldsymbol{L}^{2}(\Gamma_{\boldsymbol{u},N}),\,\vec{p}_{0}\in[L^{2}(\Omega)]^{N},\) \(\vec{g}\in[L^{2}(\Omega)]^{N}\) and \(\vec{l}\in[L^{2}(\Gamma_{\vec{p},N})]^{N}\). We also assume that \(\mu>0\), \(\lambda>0\), \(K_{i}>0,i=1,\cdots,N\), \(\beta_{i,j}>0,1\leq i,j\leq N,j\neq i\), \(T>0\). For simplicity, we will assume **Assumption 1** holds in the rest of our paper. For ease of presentation, we also assume that \(\boldsymbol{f}\), \(\boldsymbol{h}\), \(\vec{g}\), \(\vec{l}\) are independent of \(t\).
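Note that equation (2b) is simply the definition of the total pressure rearranged; the following illustrative sympy check confirms this, abbreviating \(\operatorname{div}\boldsymbol{u}\) and \(\vec{\alpha}^{\mathrm{T}}\vec{p}\) to the scalar symbols `d` and `ap` purely for the purpose of the check.

```python
import sympy as sp

d, ap, lam = sp.symbols("d ap lam", positive=True)  # d = div(u), ap = alpha^T p
xi = ap - lam * d                                   # total pressure definition
# Equation (2b): -div(u) - xi/lam + (alpha^T p)/lam should vanish identically.
print(sp.simplify(-d - xi / lam + ap / lam))        # prints 0
```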
Given \(T>0\), \((\boldsymbol{u},\xi,\vec{p})\in\boldsymbol{V}\times W\times M\) with \[\boldsymbol{u}\in L^{\infty}(0,T;\boldsymbol{V}),\,\xi\in L^{\infty}(0,T;W),\,\vec{p}\in[L^{\infty}(0,T;M)]^{N},\] \[\dot{\vec{p}}\in[L^{2}(0,T;M^{\prime})]^{N},\,\dot{\xi}\in L^{2}(0,T;M^{\prime}),\] is called a weak solution of problem (2a)-(2c), if there holds \[2\mu(\epsilon(\boldsymbol{u}),\epsilon(\boldsymbol{v}))-(\xi,\operatorname{div}\boldsymbol{v})=(\boldsymbol{f},\boldsymbol{v})+\langle\boldsymbol{h},\boldsymbol{v}\rangle_{\Gamma_{\boldsymbol{u},N}}, \tag{4a}\] \[(\operatorname{div}\boldsymbol{u},\eta)+\frac{1}{\lambda}(\xi,\eta)-\frac{1}{\lambda}(\vec{\alpha}^{\mathsf{T}}\vec{p},\eta)=0, \tag{4b}\] \[\left(\left(S+\frac{1}{\lambda}\vec{\alpha}\vec{\alpha}^{\mathsf{T}}\right)\dot{\vec{p}},\vec{q}\right)-\frac{1}{\lambda}\left(\vec{\alpha}\dot{\xi},\vec{q}\right)+(K\nabla\vec{p},\nabla\vec{q})+(B\vec{p},\vec{q})=(\vec{g},\vec{q})+\left\langle\vec{l},\vec{q}\right\rangle_{\Gamma_{\vec{p},N}}, \tag{4c}\] for almost every \(t\in[0,T]\). ## 3 The algorithms In this section, based on the above total-pressure reformulation, we will present the coupled algorithm and the iterative decoupled algorithm. We will start with the introduction of the mixed finite element spaces for discretizing the variational problem (4). We will also provide the approximation properties of the corresponding projection operators. ### Finite element spaces Let \(\mathcal{T}_{h}\) be a shape-regular and conforming triangulation of the domain \(\Omega\). It is common in the literature to set \(h:=\max_{K\in\mathcal{T}_{h}}h_{K}\). The solid displacement and total pressure are approximated by the lowest-order Taylor-Hood elements, i.e., \((\boldsymbol{P}_{2},P_{1})\) Lagrange finite elements, while the network pressures are discretized by using the \(P_{1}\) Lagrange finite elements [27]. The finite element spaces on \(\mathcal{T}_{h}\) are as follows. \[\boldsymbol{V}_{h} :=\{\boldsymbol{v}_{h}\in\boldsymbol{V}\cap[C^{0}(\overline{\Omega})]^{2};\boldsymbol{v}_{h}|_{\Gamma_{\boldsymbol{u},D}}=\boldsymbol{0},\boldsymbol{v}_{h}|_{K}\in[P_{2}(K)]^{2},\forall K\in\mathcal{T}_{h}\},\] \[W_{h} :=\{\eta_{h}\in W\cap C^{0}(\overline{\Omega});\eta_{h}|_{K}\in P_{1}(K),\forall K\in\mathcal{T}_{h}\},\] \[M_{h} :=\{\vec{q}_{h}\in M\cap[C^{0}(\overline{\Omega})]^{N};\vec{q}_{h}|_{\Gamma_{\vec{p},D}}=\vec{0},\vec{q}_{h}|_{K}\in[P_{1}(K)]^{N},\forall K\in\mathcal{T}_{h}\}.\] We note that \(\boldsymbol{V}_{h}\times W_{h}\) is a stable Stokes pair, i.e., the following discrete inf-sup condition holds. There exists a constant \(\beta_{0}^{*}>0\), independent of \(h\), such that \[\sup_{\boldsymbol{v}_{h}\in\boldsymbol{V}_{h}}\frac{(\operatorname{div}\boldsymbol{v}_{h},\eta_{h})}{\|\boldsymbol{v}_{h}\|_{H^{1}(\Omega)}}\geq\beta_{0}^{*}\|\eta_{h}\|_{L^{2}(\Omega)}\quad\forall\eta_{h}\in W_{h}. \tag{5}\] ### The coupled algorithm The time discretization is taken as an equidistant partition \(0=t^{0}<t^{1}<\cdots<t^{L}=T\). For simplicity, we define \(\boldsymbol{u}^{n}=\boldsymbol{u}(t^{n}),\xi^{n}=\xi(t^{n}),\vec{p}^{n}=\vec{p}(t^{n})\). Supposing that initial values \((\boldsymbol{u}^{0}_{h},\xi^{0}_{h},\vec{p}^{0}_{h})\in\boldsymbol{V}_{h}\times W_{h}\times M_{h}\) are provided, we apply a backward Euler scheme for the time discretization. The following algorithm is considered.
For all \(n=1,2,\cdots,L\), given \((\boldsymbol{u}^{n-1}_{h},\xi^{n-1}_{h},\vec{p}^{n-1}_{h})\in\boldsymbol{V}_{ h}\times W_{h}\times M_{h}\), find \((\boldsymbol{u}^{n}_{h},\xi^{n}_{h},\vec{p}^{n}_{h})\in\boldsymbol{V}_{h} \times W_{h}\times M_{h}\), for all \((\boldsymbol{v}_{h},\eta_{h},\vec{q}_{h})\in\boldsymbol{V}_{h}\times W_{h} \times M_{h}\) such that \[2\mu(\epsilon(\boldsymbol{u}^{n}_{h}),\epsilon(\boldsymbol{v}_{ h}))-(\xi^{n}_{h},\operatorname{div}\boldsymbol{v}_{h})=(\boldsymbol{f}, \boldsymbol{v}_{h})+\langle\boldsymbol{h},\boldsymbol{v}_{h}\rangle_{\Gamma_{ \boldsymbol{u},N}}, \tag{6a}\] \[(\operatorname{div}\boldsymbol{u}^{n}_{h},\eta_{h})+\frac{1}{ \lambda}(\xi^{n}_{h},\eta_{h})-\frac{1}{\lambda}(\vec{\alpha}^{\mathrm{T}} \vec{p}^{n}_{h},\eta_{h})=0,\] (6b) \[\left(\left(S+\frac{1}{\lambda}\vec{\alpha}\vec{\alpha}^{ \mathrm{T}}\right)\frac{\vec{p}^{n}_{h}-\vec{p}^{n-1}_{h}}{\Delta t},\vec{q}_{ h}\right)-\frac{1}{\lambda}\left(\vec{\alpha}\frac{\xi^{n}_{h}-\xi^{n-1}_{h}}{ \Delta t},\vec{q}_{h}\right)+(K\nabla\vec{p}^{n}_{h},\nabla\vec{q}_{h})+(B \vec{p}^{n}_{h},\vec{q}_{h})\] (6c) \[=(\vec{g},\vec{q}_{h})+\left\langle\vec{l},\vec{q}_{h}\right\rangle_{\Gamma_{ \vec{p},N}}.\] ### The iterative decoupled algorithm Now, we introduce the iterative decoupled algorithm. In each time step of the algorithm, the previous iteration is used as the initial values, and then solve the reaction-diffusion equation for \(\vec{p}\) and the generalized Stokes equations for \(\boldsymbol{u}\) and \(\xi\) alternately. Let us define a sequence \(\{(\boldsymbol{u}^{n,k}_{h},\xi^{n,k}_{h},\vec{p}^{n,k}_{h})\}\) with \(k\in\mathbb{N}^{*}\) being the iteration index. After initialization, i.e., \(\boldsymbol{u}^{n,0}_{h}=\boldsymbol{u}^{n-1}_{h},\xi^{n,0}_{h}=\xi^{n-1}_{h}, \vec{p}^{n,0}_{h}=\vec{p}^{n-1}_{h}\), each iteration consists of the following two steps. For a fixed time-step index \(n\), the \(k\)-th iteration can be expressed as follows. **Step 1**. Given \(\xi^{n,k-1}_{h}\in W_{h}\), find \(\vec{p}^{n,k}_{h}\in M_{h}\), for all \(\vec{q}_{h}\in M_{h}\) such that \[\left(\left(S+\frac{1}{\lambda}\vec{\alpha}\vec{\alpha}^{\mathrm{T}}\right)\vec{ p}^{n,k}_{h},\vec{q}_{h}\right)+\Delta t\left(K\nabla\vec{p}^{n,k}_{h},\nabla \vec{q}_{h}\right)+\Delta t\left(B\vec{p}^{n,k}_{h},\vec{q}_{h}\right) \tag{7a}\] \[=\left(\left(S+\frac{1}{\lambda}\vec{\alpha}\vec{\alpha}^{\mathsf{T}} \right)\vec{p}_{h}^{n-1},\vec{q}_{h}\right)+\frac{1}{\lambda}\left(\vec{\alpha} \left(\xi_{h}^{n,k-1}-\xi_{h}^{n-1}\right),\vec{q}_{h}\right)+\Delta t\left( \vec{g},\vec{q}_{h}\right)+\Delta t\left\langle\vec{l},\vec{q}_{h}\right\rangle_ {\Gamma_{\vec{p},N}}.\] **Step 2**. Given \(\vec{p}_{h}^{n,k}\in M_{h}\), find \((\mathbf{u}_{h}^{n,k},\xi_{h}^{n,k})\in\mathbf{V}_{h}\times W_{h}\), for all \((\mathbf{v}_{h},\eta_{h})\in\mathbf{V}_{h}\times W_{h}\) such that \[2\mu(\epsilon(\mathbf{u}_{h}^{n,k}),\epsilon(\mathbf{v}_{h}))-(\xi_{h}^{n,k},\operatorname{div}\mathbf{v}_{h})=(\mathbf{f},\mathbf{v}_{h})+\langle\mathbf{h},\mathbf{v}_{h }\rangle_{\Gamma_{\mathbf{u},N}}, \tag{7b}\] \[(\operatorname{div}\mathbf{u}_{h}^{n,k},\eta_{h})+\frac{1}{\lambda} (\xi_{h}^{n,k},\eta_{h})-\frac{1}{\lambda}(\vec{\alpha}^{\mathsf{T}}\vec{p}_{ h}^{n,k},\eta_{h})=0. \tag{7c}\] ## 4 Convergence analysis For a detailed error analysis of the coupled algorithm, we direct the readers to **Appendix A**. 
In this section, our objective is to establish that as \(k\) approaches positive infinity, the sequence \((\mathbf{u}_{h}^{n,k},\xi_{h}^{n,k},\vec{p}_{h}^{n,k})\) generated through the iterative decoupled algorithm converges towards the solution \((\mathbf{u}_{h}^{n},\xi_{h}^{n},\vec{p}_{h}^{n})\) obtained via the coupled algorithm. To substantiate these convergence results, we present a crucial lemma [26]. **Lemma 1**.: _For all \(\mathbf{u}_{h}\in\mathbf{V}_{h}\), the following inequality holds_ \[\|\operatorname{div}\mathbf{u}_{h}\|_{L^{2}(\Omega)}\leq\sqrt{2}\|\epsilon(\mathbf{u} _{h})\|_{L^{2}(\Omega)}. \tag{8}\] For a complete theoretical analysis of the decoupled algorithm, we firstly assume that \(\min_{1\leq i\leq N}c_{i}>0\), and we have the following theorem. **Theorem 2**.: _Let \((\mathbf{u}_{h}^{n},\xi_{h}^{n},\vec{p}_{h}^{n})\) and \((\mathbf{u}_{h}^{n,k},\xi_{h}^{n,k},\vec{p}_{h}^{n,k})\) be the solutions of problem (6a)-(6c) and problem (7a)-(7c), respectively. Let \(e_{\mathbf{u}}^{k}=\mathbf{u}_{h}^{n,k}-\mathbf{u}_{h}^{n},e_{\xi}^{k}=\xi_{h}^{n,k}-\xi_{ h}^{n},e_{\vec{p}}^{k}=\vec{p}_{h}^{n,k}-\vec{p}_{h}^{n}\) denote the errors between the iterative solution in the \(k\)-th step and the solution of the coupled algorithm. Then, for all \(k\geq 1\), it holds that_ \[\|e_{\xi}^{k}\|_{L^{2}(\Omega)}\leq C^{*}\|e_{\xi}^{k-1}\|_{L^{2}(\Omega)},\] _where \(C^{*}=\frac{\frac{\|\vec{\alpha}\|^{2}}{\lambda}}{\delta+\frac{\|\vec{\alpha} \|^{2}}{\lambda}}\) is a positive constant less than 1, \(\|\vec{\alpha}\|=\left(\sum_{i=1}^{N}\alpha_{i}^{2}\right)^{\frac{1}{2}}\), \(\delta=\min_{1\leq i\leq N}c_{i}\). Moreover, it holds that_ \[\lim_{k\to+\infty}\|\nabla e_{\vec{p}}^{k}\|_{L^{2}(\Omega)}=\lim_{k\to+\infty }\|e_{\vec{p}}^{k}\|_{L^{2}(\Omega)}=\lim_{k\to+\infty}\|e_{\mathbf{u}}^{k}\|_{H^ {1}(\Omega)}=\lim_{k\to+\infty}\|e_{\mathbf{u}}^{k}\|_{L^{2}(\Omega)}=0.\] **Proof**. Subtracting (6c), (6a) and (6b) from (7a), (7b) and (7c), and taking \(\mathbf{v}_{h}=e_{\mathbf{u}}^{k},\eta_{h}=e_{\xi}^{k},\vec{q}_{h}=e_{\vec{p}}^{k}\), respectively, we have \[2\mu(\epsilon(e_{\mathbf{u}}^{k}),\epsilon(e_{\mathbf{u}}^{k}))-(e_{\xi}^ {k},\operatorname{div}e_{\mathbf{u}}^{k})=0, \tag{9}\] \[(\operatorname{div}e_{\mathbf{u}}^{k},e_{\xi}^{k})+\frac{1}{\lambda}( e_{\xi}^{k},e_{\xi}^{k})=\frac{1}{\lambda}(\vec{\alpha}^{\mathsf{T}}e_{\vec{p}}^{k},e_{ \xi}^{k}), \tag{10}\] \[\left(\left(S+\frac{1}{\lambda}\vec{\alpha}\vec{\alpha}^{\mathsf{T}} \right)e_{\vec{p}}^{k},e_{\vec{p}}^{k}\right)+\Delta t\left(K\nabla e_{\vec{p}}^ {k},\nabla e_{\vec{p}}^{k}\right)+\Delta t\left(Be_{\vec{p}}^{k},e_{\vec{p}}^{k }\right)=\frac{1}{\lambda}\left(\vec{\alpha}e_{\xi}^{k-1},e_{\vec{p}}^{k} \right). \tag{11}\] Using (11), there holds \[\left(Se_{\vec{p}}^{k},e_{\vec{p}}^{k}\right)+\frac{1}{\lambda}\|\vec{\alpha} ^{\mathsf{T}}e_{\vec{p}}^{k}\|_{L^{2}(\Omega)}^{2}\leq\frac{1}{\lambda}\|e_{ \xi}^{k-1}\|_{L^{2}(\Omega)}\|\vec{\alpha}^{\mathsf{T}}e_{\vec{p}}^{k}\|_{L^{2 }(\Omega)}, \tag{12}\] where we have used the fact \(\Delta t\left(K\nabla e_{\vec{p}}^{k},\nabla e_{\vec{p}}^{k}\right)\geq 0, \Delta t\left(Be_{\vec{p}}^{k},e_{\vec{p}}^{k}\right)\geq 0\). Note that \[\frac{\|\vec{\alpha}^{\mathsf{T}}e_{\vec{p}}^{k}\|_{L^{2}(\Omega)}^{2}}{\|\vec {\alpha}\|^{2}}\leq\|e_{\vec{p}}^{k}\|_{L^{2}(\Omega)}^{2}. 
\tag{13}\] Plugging (13) into (12), we have \[\delta\frac{\|\vec{\alpha}^{\mathsf{T}}e_{\vec{p}}^{k}\|_{L^{2}(\Omega)}^{2}}{\|\vec{\alpha}\|^{2}}+\frac{1}{\lambda}\|\vec{\alpha}^{\mathsf{T}}e_{\vec{p}}^{k}\|_{L^{2}(\Omega)}^{2} \leq\delta\|e_{\vec{p}}^{k}\|_{L^{2}(\Omega)}^{2}+\frac{1}{\lambda}\|\vec{\alpha}^{\mathsf{T}}e_{\vec{p}}^{k}\|_{L^{2}(\Omega)}^{2}\] \[\leq\left(Se_{\vec{p}}^{k},e_{\vec{p}}^{k}\right)+\frac{1}{\lambda}\|\vec{\alpha}^{\mathsf{T}}e_{\vec{p}}^{k}\|_{L^{2}(\Omega)}^{2}\] \[\leq\frac{1}{\lambda}\|e_{\xi}^{k-1}\|_{L^{2}(\Omega)}\|\vec{\alpha}^{\mathsf{T}}e_{\vec{p}}^{k}\|_{L^{2}(\Omega)},\] where \(\delta=\min_{1\leq i\leq N}c_{i}\). In the above inequality, dividing both sides by \(\|\vec{\alpha}^{\mathsf{T}}e_{\vec{p}}^{k}\|_{L^{2}(\Omega)}\) leads to \[\|\vec{\alpha}^{\mathsf{T}}e_{\vec{p}}^{k}\|_{L^{2}(\Omega)}\leq\frac{\frac{\|\vec{\alpha}\|^{2}}{\lambda}}{\delta+\frac{\|\vec{\alpha}\|^{2}}{\lambda}}\|e_{\xi}^{k-1}\|_{L^{2}(\Omega)}. \tag{14}\] Ignoring the nonnegative term \(\frac{1}{\lambda}\|\vec{\alpha}^{\mathsf{T}}e_{\vec{p}}^{k}\|_{L^{2}(\Omega)}^{2}\) in (12) and applying (14), we obtain \[\delta\|e_{\vec{p}}^{k}\|_{L^{2}(\Omega)}^{2}\leq\left(Se_{\vec{p}}^{k},e_{\vec{p}}^{k}\right) \leq\frac{1}{\lambda}\|e_{\xi}^{k-1}\|_{L^{2}(\Omega)}\|\vec{\alpha}^{\mathsf{T}}e_{\vec{p}}^{k}\|_{L^{2}(\Omega)}\] \[\leq\frac{1}{\lambda}\|e_{\xi}^{k-1}\|_{L^{2}(\Omega)}\frac{\|\vec{\alpha}\|^{2}}{\delta+\frac{\|\vec{\alpha}\|^{2}}{\lambda}}\frac{1}{\lambda}\|e_{\xi}^{k-1}\|_{L^{2}(\Omega)}.\] By taking square roots of both sides, we have \[\|e_{\vec{p}}^{k}\|_{L^{2}(\Omega)}\leq\left(\frac{\|\vec{\alpha}\|^{2}}{\lambda\delta(\lambda\delta+\|\vec{\alpha}\|^{2})}\right)^{\frac{1}{2}}\|e_{\xi}^{k-1}\|_{L^{2}(\Omega)}. \tag{15}\] Similarly, ignoring the nonnegative terms \(\left(\left(S+\frac{1}{\lambda}\vec{\alpha}\vec{\alpha}^{\mathsf{T}}\right)e_{\vec{p}}^{k},e_{\vec{p}}^{k}\right)\) and \(\Delta t\left(Be_{\vec{p}}^{k},e_{\vec{p}}^{k}\right)\) in (11), and using (14), we derive that \[\Delta t\zeta\|\nabla e_{\vec{p}}^{k}\|_{L^{2}(\Omega)}^{2}\leq\Delta t\left(K\nabla e_{\vec{p}}^{k},\nabla e_{\vec{p}}^{k}\right) \leq\frac{1}{\lambda}\left(\vec{\alpha}e_{\xi}^{k-1},e_{\vec{p}}^{k}\right)\] \[\leq\frac{1}{\lambda}\|e_{\xi}^{k-1}\|_{L^{2}(\Omega)}\frac{\frac{\|\vec{\alpha}\|^{2}}{\lambda}}{\delta+\frac{\|\vec{\alpha}\|^{2}}{\lambda}}\|e_{\xi}^{k-1}\|_{L^{2}(\Omega)},\] where \(\zeta=\min_{1\leq i\leq N}K_{i}\). By taking the square roots of both sides, we obtain \[\|\nabla e_{\vec{p}}^{k}\|_{L^{2}(\Omega)}\leq\left(\frac{\frac{\|\vec{\alpha}\|^{2}}{\lambda}}{(\lambda\delta+\|\vec{\alpha}\|^{2})\Delta t\zeta}\right)^{\frac{1}{2}}\|e_{\xi}^{k-1}\|_{L^{2}(\Omega)}. \tag{16}\] Summing up (9) and (10), there holds \[2\mu\|\epsilon(e_{\boldsymbol{u}}^{k})\|_{L^{2}(\Omega)}^{2}+\frac{1}{\lambda}\|e_{\xi}^{k}\|_{L^{2}(\Omega)}^{2}=\frac{1}{\lambda}(\vec{\alpha}^{\mathsf{T}}e_{\vec{p}}^{k},e_{\xi}^{k})\leq\frac{1}{\lambda}\|\vec{\alpha}^{\mathsf{T}}e_{\vec{p}}^{k}\|_{L^{2}(\Omega)}\|e_{\xi}^{k}\|_{L^{2}(\Omega)}. \tag{17}\] Ignoring the first positive term \(2\mu\|\epsilon(e_{\boldsymbol{u}}^{k})\|_{L^{2}(\Omega)}^{2}\) and using (14), we deduce that \[\|e_{\xi}^{k}\|_{L^{2}(\Omega)}\leq\|\vec{\alpha}^{\mathsf{T}}e_{\vec{p}}^{k}\|_{L^{2}(\Omega)}\leq\frac{\frac{\|\vec{\alpha}\|^{2}}{\lambda}}{\delta+\frac{\|\vec{\alpha}\|^{2}}{\lambda}}\|e_{\xi}^{k-1}\|_{L^{2}(\Omega)}.
\tag{18}\] Applying (8) to (9), it holds that \[2\mu\|\epsilon(e_{\boldsymbol{u}}^{k})\|_{L^{2}(\Omega)}^{2}=(e_{\xi}^{k},\operatorname{div}e_{\boldsymbol{u}}^{k})\leq\|e_{\xi}^{k}\|_{L^{2}(\Omega)}\|\operatorname{div}e_{\boldsymbol{u}}^{k}\|_{L^{2}(\Omega)}\leq\sqrt{2}\|e_{\xi}^{k}\|_{L^{2}(\Omega)}\|\epsilon(e_{\boldsymbol{u}}^{k})\|_{L^{2}(\Omega)}.\] Using the first Korn's inequality [26], it can be seen that \[\|e_{\boldsymbol{u}}^{k}\|_{L^{2}(\Omega)}\leq\|e_{\boldsymbol{u}}^{k}\|_{H^{1}(\Omega)}\leq C_{K}\|\epsilon(e_{\boldsymbol{u}}^{k})\|_{L^{2}(\Omega)}\leq\frac{C_{K}}{\sqrt{2}\mu}\|e_{\xi}^{k}\|_{L^{2}(\Omega)}. \tag{19}\] According to (18) and the inequality \(0<\frac{\frac{\|\vec{\alpha}\|^{2}}{\lambda}}{\delta+\frac{\|\vec{\alpha}\|^{2}}{\lambda}}<1\), we obtain \[\lim_{k\to+\infty}\|e_{\xi}^{k}\|_{L^{2}(\Omega)}=0.\] Following from (15), (16) and (19), we have \[\lim_{k\to+\infty}\|\nabla e_{\vec{p}}^{k}\|_{L^{2}(\Omega)}=\lim_{k\to+\infty}\|e_{\vec{p}}^{k}\|_{L^{2}(\Omega)}=\lim_{k\to+\infty}\|e_{\boldsymbol{u}}^{k}\|_{H^{1}(\Omega)}=\lim_{k\to+\infty}\|e_{\boldsymbol{u}}^{k}\|_{L^{2}(\Omega)}=0. \tag{20}\] For \(\min_{1\leq i\leq N}c_{i}=0\), the main conclusions are presented in the following remark. _Remark 1_.: If \(\min_{1\leq i\leq N}c_{i}=0\), we will show that the conclusions in (20) still hold true. This means that the iterative decoupled algorithm also converges when \(\min_{1\leq i\leq N}c_{i}\) degenerates to \(0\). According to (12), we obtain \[\|\vec{\alpha}^{\mathsf{T}}e_{\vec{p}}^{k}\|_{L^{2}(\Omega)}\leq\|e_{\xi}^{k-1}\|_{L^{2}(\Omega)}. \tag{21}\] Thanks to (11), (17), (21), and the Poincaré inequality, we can derive the following: \[\|e_{\xi}^{k}\|_{L^{2}(\Omega)} \leq\|e_{\xi}^{k-1}\|_{L^{2}(\Omega)}, \tag{22}\] \[2\mu\|\epsilon(e_{\boldsymbol{u}}^{k})\|_{L^{2}(\Omega)}^{2}+\frac{1}{\lambda}\|e_{\xi}^{k}\|_{L^{2}(\Omega)}^{2} \leq\frac{1}{\lambda}\|e_{\xi}^{k-1}\|_{L^{2}(\Omega)}\|e_{\xi}^{k}\|_{L^{2}(\Omega)},\] (23) \[\|e_{\vec{p}}^{k}\|_{L^{2}(\Omega)} \leq C_{P}\|\nabla e_{\vec{p}}^{k}\|_{L^{2}(\Omega)} \leq C_{P}\left(\frac{1}{\lambda\Delta t\zeta}\right)^{\frac{1}{2}}\|e_{\xi}^{k-1}\|_{L^{2}(\Omega)}, \tag{24}\] where \(C_{P}\) is the constant in the Poincaré inequality. We are going to use the method of contradiction to show that the limit of \(\{\|e_{\xi}^{k}\|_{L^{2}(\Omega)}\}\) is \(0.\) Note that, by (22), the sequence \(\{\|e_{\xi}^{k}\|_{L^{2}(\Omega)}\}\) is non-increasing and hence convergent. If its limit is not \(0\), let us assume \[\lim_{k\to+\infty}\|e_{\xi}^{k}\|_{L^{2}(\Omega)}>0.\] From (23), we know that \[\lim_{k\to+\infty}\|\epsilon(e_{\boldsymbol{u}}^{k})\|_{L^{2}(\Omega)}=0.\] Applying the discrete \(\inf\)-sup condition, we see that \[\beta_{0}^{*}\|e_{\xi}^{k}\|_{L^{2}(\Omega)}\leq\sup_{\boldsymbol{v}_{h}\in\boldsymbol{V}_{h}}\frac{|(e_{\xi}^{k},\operatorname{div}\boldsymbol{v}_{h})|}{\|\boldsymbol{v}_{h}\|_{H^{1}(\Omega)}}=\sup_{\boldsymbol{v}_{h}\in\boldsymbol{V}_{h}}\frac{2\mu|(\epsilon(e_{\boldsymbol{u}}^{k}),\epsilon(\boldsymbol{v}_{h}))|}{\|\boldsymbol{v}_{h}\|_{H^{1}(\Omega)}}\leq 2\mu\|\epsilon(e_{\boldsymbol{u}}^{k})\|_{L^{2}(\Omega)}. \tag{25}\] Letting \(k\to\infty\) in (25), we derive that \[\lim_{k\to+\infty}\|e_{\xi}^{k}\|_{L^{2}(\Omega)}\leq 0,\] which is a contradiction. Therefore \[\lim_{k\to+\infty}\|e_{\xi}^{k}\|_{L^{2}(\Omega)}=0.
\tag{26}\] Combined with (19), (24) and (26), it follows that \[\lim_{k\to+\infty}\|\nabla e_{\vec{p}}^{k}\|_{L^{2}(\Omega)}=\lim_{k\to+\infty}\|e_{\vec{p}}^{k}\|_{L^{2}(\Omega)}=\lim_{k\to+\infty}\|e_{\boldsymbol{u}}^{k}\|_{H^{1}(\Omega)}=\lim_{k\to+\infty}\|e_{\boldsymbol{u}}^{k}\|_{L^{2}(\Omega)}=0.\] ## 5 Numerical experiments Let \(\Omega=[0,1]\times[0,1]\), \(\Gamma_{1}=\{(1,y);0\leq y\leq 1\}\), \(\Gamma_{2}=\{(x,0);0\leq x\leq 1\}\), \(\Gamma_{3}=\{(0,y);0\leq y\leq 1\}\), and \(\Gamma_{4}=\{(x,1);0\leq x\leq 1\}\). The terminal time is \(T=0.01\) seconds. In our numerical experiments, we impose a pure Dirichlet boundary condition on this domain. We initialize the mesh with a size of \(h=\frac{1}{8}\) and subsequently carry out four rounds of mesh refinement, which involves connecting the midpoints of each triangle. We use the variable \(iters\) to represent the number of iterations utilized within the iterative decoupled algorithm. In these tests, we intentionally employ relatively large time step sizes to effectively showcase the efficiency of the iterative decoupled algorithm. Specifically, we choose the values of \(\Delta t\) and \(iters\) such that the overall computational expense is approximately equal to that of the coupled algorithm. More precisely, we set the values of \(\Delta t\) and \(iters\) such that \(T/\Delta t\times iters=50\). At the conclusion of the simulations, we calculate the \(H^{1}\) and \(L^{2}\) norms at the terminal time \(T\). All the tests discussed in this paper are implemented using the open-source software FreeFEM++ [28]. The exact solutions of (2a)-(2c) are given as follows. \[\boldsymbol{u} =\left[\begin{array}{c}\sin(2\pi y)(-1+\cos(2\pi x))+\frac{1}{\mu+\lambda}\sin(\pi x)\sin(\pi y)\\ \sin(2\pi x)(1-\cos(2\pi y))+\frac{1}{\mu+\lambda}\sin(\pi x)\sin(\pi y)\end{array}\right]\sin(t),\] \[p_{1} =-\sin(\pi x)\sin(\pi y)\cos(t),\] \[p_{2} =-2\sin(\pi x)\sin(\pi y)\cos(t).\] ### Tests for the parameter \(\boldsymbol{\nu}\) In this subsection, we test the performance of the algorithms under different settings of the Poisson ratio \(\nu\). For other physical parameters, we set \[E=1,\alpha_{1}=\alpha_{2}=1,c_{1}=c_{2}=1,K_{1}=K_{2}=1,\beta_{12}=\beta_{21}=1.\] In Table 1-Table 3, we present the numerical results of the coupled algorithm and the iterative decoupled algorithm for the case of Poisson's ratio \(\nu=0.3\). Table 2 displays the results obtained from the iterative decoupled algorithm with 10 iterations, while Table 3 shows the results obtained with 20 iterations. By observing the numerical results of Table 1-Table 3, it can be found that the energy-norm errors of all variables converge to the optimal orders. Furthermore, the results in Table 3 indicate that increasing the number of iterations leads to enhanced accuracy in the iterative decoupled algorithm. Table 1-Table 3 are for the case that the poroelastic material is compressible. In Table 4 through Table 6, we keep the physical parameters constant while setting the Poisson ratio to \(\nu=0.49999\). With such a Poisson ratio, the poroelastic material exhibits nearly incompressible behavior.
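Throughout Tables 1-12, the "Orders" columns are the standard estimated convergence rates between consecutive mesh refinements. A minimal sketch of this computation is given below; as sample input it uses the \(L^{2}\) errors of \(\boldsymbol{u}\) from Table 1.

```python
import math

def convergence_orders(errors):
    # For errors e_h on meshes h, h/2, h/4, ..., the estimated order
    # between two consecutive refinements is log2(e_h / e_{h/2}).
    return [math.log2(errors[i] / errors[i + 1]) for i in range(len(errors) - 1)]

# L^2 errors of u from Table 1 (coupled algorithm, nu = 0.3).
errors_u_L2 = [1.229e-03, 3.011e-04, 7.536e-05, 1.890e-05, 4.766e-06]
print([round(o, 2) for o in convergence_orders(errors_u_L2)])
# Output close to the tabulated orders: [2.03, 2.0, 2.0, 1.99]
```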
Table 4 and Tables 5 and 6 present results obtained from the coupled algorithm and the iterative decoupled algorithm, respectively, the latter with different iteration counts. \begin{table} \begin{tabular}{c c c c c} \hline \(1/h\) & \(L^{2}\&H^{1}\) errors of \(\mathbf{u}\) & Orders & \(L^{2}\&H^{1}\) errors of \(\xi\) & Orders \\ \hline 8 & 1.229e-03 \& 1.765e-02 & & 3.665e-02 \& 1.083e+00 & \\ 16 & 3.011e-04 \& 4.012e-03 & 2.03 \& 2.13 & 9.105e-03 \& 5.506e-01 & 2.00 \& 0.976 \\ 32 & 7.536e-05 \& 9.421e-04 & 2.00 \& 2.10 & 2.269e-03 \& 2.760e-01 & 2.00 \& 0.996 \\ 64 & 1.890e-05 \& 2.257e-04 & 2.00 \& 2.06 & 5.670e-04 \& 1.381e-01 & 2.00 \& 0.999 \\ 128 & 4.766e-06 \& 5.523e-05 & 1.99 \& 2.03 & 1.423e-04 \& 6.908e-02 & 1.99 \& 1.000 \\ \hline \(1/h\) & \(L^{2}\&H^{1}\) errors of \(p_{1}\) & Orders & \(L^{2}\&H^{1}\) errors of \(p_{2}\) & Orders \\ \hline 8 & 1.432e-02 \& 3.581e-01 & & 2.851e-02 \& 7.161e-01 & \\ 16 & 3.681e-03 \& 1.816e-01 & 1.96 \& 0.98 & 7.342e-03 \& 3.633e-01 & 1.96 \& 0.98 \\ 32 & 9.354e-04 \& 9.134e-02 & 1.98 \& 0.99 & 1.868e-03 \& 1.827e-01 & 1.97 \& 0.99 \\ 64 & 2.403e-04 \& 4.576e-02 & 1.96 \& 1.00 & 4.809e-04 \& 9.153e-02 & 1.96 \& 1.00 \\ 128 & 6.586e-05 \& 2.290e-02 & 1.87 \& 1.00 & 1.327e-04 \& 4.579e-02 & 1.86 \& 1.00 \\ \hline \end{tabular} \end{table} Table 1: Convergence rate of the coupled algorithm. \(\nu=0.3\) and \(\Delta t=2\times 10^{-4}s\). \begin{table} \begin{tabular}{c c c c c} \hline \(1/h\) & \(L^{2}\&H^{1}\) errors of \(\mathbf{u}\) & Orders & \(L^{2}\&H^{1}\) errors of \(\xi\) & Orders \\ \hline 8 & 1.229e-03 \& 1.765e-02 & & 3.667e-02 \& 1.084e+00 & \\ 16 & 3.011e-04 \& 4.012e-03 & 2.03 \& 2.14 & 9.146e-03 \& 5.507e-01 & 2.00 \& 0.977 \\ 32 & 7.577e-05 \& 9.367e-04 & 1.99 \& 2.10 & 2.283e-03 \& 2.760e-01 & 2.00 \& 0.996 \\ 64 & 1.961e-05 \& 2.260e-04 & 1.95 \& 2.05 & 5.738e-04 \& 1.381e-01 & 1.99 \& 0.999 \\ 128 & 5.921e-06 \& 5.813e-05 & 1.73 \& 1.96 & 1.478e-04 \& 6.908e-02 & 1.96 \& 1.000 \\ \hline \(1/h\) & \(L^{2}\&H^{1}\) errors of \(p_{1}\) & Orders & \(L^{2}\&H^{1}\) errors of \(p_{2}\) & Orders \\ \hline 8 & 1.200e-02 \& 3.609e-01 & & 2.625e-02 \& 7.203e-01 & \\ 16 & 3.036e-03 \& 1.827e-01 & 1.98 \& 0.98 & 6.662e-03 \& 3.652e-01 & 1.98 \& 0.98 \\ 32 & 7.626e-04 \& 9.182e-02 & 1.99 \& 0.99 & 1.678e-03 \& 1.836e-01 & 1.99 \& 0.99 \\ 64 & 1.920e-04 \& 4.599e-02 & 1.99 \& 1.00 & 4.239e-04 \& 9.198e-02 & 1.98 \& 1.00 \\ 128 & 4.950e-05 \& 2.301e-02 & 1.96 \& 1.00 & 1.100e-04 \& 4.602e-02 & 1.95 \& 1.00 \\ \hline \end{tabular} \end{table} Table 2: Convergence rate of the iterative decoupled algorithm. \(\nu=0.3\), \(\Delta t=2\times 10^{-3}s\), and \(iters=10\).
\begin{table} \begin{tabular}{c c c c c} \hline \(1/h\) & \(L^{2}\&H^{1}\) errors of \(\mathbf{u}\) & Orders & \(L^{2}\&H^{1}\) errors of \(\xi\) & Orders \\ \hline 8 & 1.184e-03 \& 1.734e-02 & & 3.656e-02 \& 1.084e+00 & \\ 16 & 2.879e-04 \& 3.877e-03 & 2.04 \& 2.16 & 9.114e-03 \& 5.508e-01 & 2.00 \& 0.977 \\ 32 & 7.230e-05 \& 8.955e-04 & 1.99 \& 2.11 & 2.279e-03 \& 2.760e-01 & 2.00 \& 0.997 \\ 64 & 1.858e-05 \& 2.150e-04 & 1.96 \& 2.06 & 5.782e-04 \& 1.381e-01 & 1.98 \& 0.999 \\ 128 & 5.148e-06 \& 5.469e-05 & 1.85 \& 1.97 & 1.542e-04 \& 6.908e-02 & 1.91 \& 1.000 \\ \hline \(1/h\) & \(L^{2}\&H^{1}\) errors of \(p_{1}\) & Orders & \(L^{2}\&H^{1}\) errors of \(p_{2}\) & Orders \\ \hline 8 & 1.192e-02 \& 3.613e-01 & & 2.613e-02 \& 7.206e-01 & \\ 16 & 3.004e-03 \& 1.828e-01 & 1.99 \& 0.98 & 6.616e-03 \& 3.652e-01 & 1.98 \& 0.98 \\ 32 & 7.554e-04 \& 9.183e-02 & 1.99 \& 0.99 & 1.668e-03 \& 1.836e-01 & 1.99 \& 0.99 \\ \hline \end{tabular} \end{table} Table 3: Convergence rate of the iterative decoupled algorithm. \(\nu=0.3\), \(\Delta t=4\times 10^{-3}s\), and \(iters=20\). It is noteworthy that as the Poisson ratio \(\nu\) approaches 0.5, which indicates a state of near-incompressibility, both algorithms demonstrate improved numerical error rates compared to when \(\nu=0.3\). Upon examining Table 4 through Table 6, it becomes evident that in both algorithms, the \(L^{2}\) norm errors and energy norm errors of each variable exhibit optimal convergence rates. A comparative analysis of the numerical outcomes presented in Table 4 and Table 6 underscores the superior error accuracy achieved by our proposed iterative decoupled algorithm. ### Tests for the parameter \(K_{i}\) In this subsection, we conduct experiments to assess the performance of the iterative decoupled algorithm by varying the values of the parameter \(K_{i}\). It's worth noting that we previously examined the case where \(K_{1}=K_{2}=1\) in our earlier tests, but now we set \(K_{1}=K_{2}=10^{-6}\). For the other physical parameters, we maintain the following values: \[E=1,\nu=0.3,\alpha_{1}=\alpha_{2}=1,c_{1}=c_{2}=1,\beta_{12}=\beta_{21}=1.\] The results presented in Tables 8 and 9 illustrate that increasing the number of iterations results in improved accuracy when using the iterative decoupled algorithm. Upon analyzing the numerical results in Table 3 and Table 9, it becomes apparent that the energy-norm errors for all variables converge to their optimal orders. This suggests that the accuracy of the iterative decoupled algorithm remains robust and is minimally affected by variations in hydraulic conductivity.
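This robustness is consistent with Theorem 2: the contraction factor \(C^{*}\) depends on \(\vec{\alpha}\), \(\lambda\), and \(\delta=\min_{1\leq i\leq N}c_{i}\), but not on the hydraulic conductivities \(K_{i}\). The short sketch below illustrates how \(C^{*}\) behaves for the parameter sets tested here; it assumes the standard Lamé relations \(\mu=E/(2(1+\nu))\) and \(\lambda=E\nu/((1+\nu)(1-2\nu))\), which are not stated explicitly in this paper.

```python
def lame_parameters(E, nu):
    # Standard conversion from Young's modulus E and Poisson ratio nu
    # (an assumption here; the paper specifies only E and nu).
    mu = E / (2.0 * (1.0 + nu))
    lam = E * nu / ((1.0 + nu) * (1.0 - 2.0 * nu))
    return mu, lam

def contraction_factor(alpha, lam, delta):
    # C* from Theorem 2; note that the conductivities K_i do not appear.
    a2 = sum(a * a for a in alpha)  # ||alpha||^2
    return (a2 / lam) / (delta + a2 / lam)

for nu in (0.3, 0.49999):
    _, lam = lame_parameters(E=1.0, nu=nu)
    print(nu, lam, contraction_factor(alpha=(1.0, 1.0), lam=lam, delta=1.0))
# For nu = 0.3, lambda ~ 0.577 and C* ~ 0.776; as nu -> 0.5, lambda grows
# without bound and C* -> 0, so the inner iteration contracts faster in the
# nearly incompressible regime.
```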
\begin{table} \begin{tabular}{c c c c c} \hline \(1/h\) & \(L^{2}\&H^{1}\) errors of \(\mathbf{u}\) & Orders & \(L^{2}\&H^{1}\) errors of \(\xi\) & Orders \\ \hline 8 & 4.096e-04 \& 1.657e-02 & & 3.927e-02 \& 1.090e+00 & \\ 16 & 3.552e-05 \& 3.046e-03 & 3.53 \& 2.44 & 9.738e-03 \& 5.606e-01 & 2.01 \& 0.959 \\ 32 & 3.224e-06 \& 5.575e-04 & 3.46 \& 2.45 & 2.429e-03 \& 2.779e-01 & 2.00 \& 1.012 \\ 64 & 2.911e-07 \& 1.027e-04 & 3.47 \& 2.44 & 6.168e-04 \& 1.386e-01 & 1.98 \& 1.004 \\ 128 & 2.669e-08 \& 1.944e-05 & 3.45 \& 2.40 & 1.657e-04 \& 6.918e-02 & 1.90 \& 1.002 \\ \hline \(1/h\) & \(L^{2}\&H^{1}\) errors of \(p_{1}\) & Orders & \(L^{2}\&H^{1}\) errors of \(p_{2}\) & Orders \\ \hline 8 & 1.462e-02 \& 3.600e-01 & & 2.903e-02 \& 7.199e-01 & \\ 16 & 3.723e-03 \& 1.826e-01 & 1.97 \& 0.98 & 7.388e-03 \& 3.651e-01 & 1.97 \& 0.98 \\ 32 & 9.410e-04 \& 9.180e-02 & 1.98 \& 0.99 & 1.867e-03 \& 1.836e-01 & 1.98 \& 0.99 \\ 64 & 2.403e-04 \& 4.599e-02 & 1.97 \& 1.00 & 4.768e-04 \& 9.198e-02 & 1.97 \& 1.00 \\ 128 & 6.476e-05 \& 2.301e-02 & 1.89 \& 1.00 & 1.285e-04 \& 4.602e-02 & 1.89 \& 1.00 \\ \hline \end{tabular} \end{table} Table 6: Convergence rate of the iterative decoupled algorithm. \(\nu=0.49999\), \(\Delta t=4\times 10^{-3}s\), and \(iters=20\). \begin{table} \begin{tabular}{c c c c c} \hline \(1/h\) & \(L^{2}\&H^{1}\) errors of \(\mathbf{u}\) & Orders & \(L^{2}\&H^{1}\) errors of \(\xi\) & Orders \\ \hline 8 & 1.021e-03 \& 1.674e-02 & & 3.674e-02 \& 1.092e+00 & \\ 16 & 2.283e-04 \& 3.321e-03 & 2.16 \& 2.33 & 9.210e-03 \& 5.561e-01 & 2.00 \& 0.973 \\ 32 & 5.444e-05 \& 6.833e-04 & 2.07 \& 2.28 & 2.326e-03 \& 2.772e-01 & 1.99 \& 1.004 \\ 64 & 1.334e-05 \& 1.477e-04 & 2.03 \& 2.21 & 5.852e-04 \& 1.384e-01 & 1.99 \& 1.002 \\ 128 & 3.325e-06 \& 3.369e-05 & 2.00 \& 2.13 & 1.475e-04 \& 6.914e-02 & 1.99 \& 1.001 \\ \hline \(1/h\) & \(L^{2}\&H^{1}\) errors of \(p_{1}\) & Orders & \(L^{2}\&H^{1}\) errors of \(p_{2}\) & Orders \\ \hline 8 & 1.462e-02 \& 3.600e-01 & & 2.903e-02 \& 7.199e-01 & \\ 16 & 3.723e-03 \& 1.826e-01 & 1.97 \& 0.98 & 7.388e-03 \& 3.651e-01 & 1.97 \& 0.98 \\ 32 & 9.410e-04 \& 9.180e-02 & 1.98 \& 0.99 & 1.867e-03 \& 1.836e-01 & 1.98 \& 0.99 \\ 64 & 2.403e-04 \& 4.599e-02 & 1.97 \& 1.00 & 4.768e-04 \& 9.198e-02 & 1.97 \& 1.00 \\ 128 & 6.476e-05 \& 2.301e-02 & 1.89 \& 1.00 & 1.285e-04 \& 4.602e-02 & 1.89 \& 1.00 \\ \hline \end{tabular} \end{table} Table 7: Convergence rate of the coupled algorithm. \(K_{1}=K_{2}=10^{-6}\) and \(\Delta t=2\times 10^{-4}s\). ### Tests for the parameter \(\boldsymbol{c_{i}}\) In this subsection, we will evaluate the influence of the specific storage coefficient \(c_{i}\) on the iterative decoupled algorithm. We now let \(c_{1}=c_{2}=0\). For other physical parameters, we set \[E=1,\nu=0.3,\alpha_{1}=\alpha_{2}=1,K_{1}=K_{2}=1,\beta_{12}=\beta_{21}=1.\] Comparing the results in Table 2 with those in Table 11, it is evident that in cases where \(c_{1}=c_{2}=0\), the error rates for all variables exhibit a slight degradation when utilizing the iterative decoupled algorithm. However, a closer examination of Table 12 reveals that as the number of iterations increases to 20, the error rates for all variables in the iterative decoupled algorithm tend to decrease, and the energy-norm errors attain their optimal convergence rates. These experimental findings corroborate the assertion made in Remark 1. 
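Remark 1 explains why convergence is still observed when \(c_{1}=c_{2}=0\), even though Theorem 2 then gives only the non-expansive bound (22) rather than a contraction. A toy illustration of the two regimes, iterating the scalar error bound \(\|e_{\xi}^{k}\|\leq C^{*}\|e_{\xi}^{k-1}\|\), is sketched below; the tolerance and initial error are arbitrary test choices.

```python
def iterations_to_tolerance(c_star, e0=1.0, tol=1e-8, max_iter=10_000):
    # Iterate the scalar bound e_k <= C* e_{k-1} until the bound drops below
    # tol.  With C* = 1 (the delta = 0 case) the bound alone never certifies
    # convergence, which is why Remark 1 needs a separate argument.
    e, k = e0, 0
    while e > tol and k < max_iter:
        e *= c_star
        k += 1
    return k

print(iterations_to_tolerance(0.776))  # delta = 1, nu = 0.3: 73 iterations
print(iterations_to_tolerance(1.0))    # delta = 0: hits max_iter; bound alone is useless
```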
\begin{table} \begin{tabular}{c c c c c} \hline \(1/h\) & \(L^{2}\&H^{1}\) errors of \(\boldsymbol{u}\) & Orders & \(L^{2}\&H^{1}\) errors of \(\xi\) & Orders \\ \hline 8 & 1.025e-03 \& 1.676e-02 & & 3.688e-02 \& 1.092e+00 & \\ 16 & 2.301e-04 \& 3.326e-03 & 2.16 \& 2.33 & 9.248e-03 \& 5.561e-01 & 2.00 \& 0.973 \\ 32 & 5.618e-05 \& 6.874e-04 & 2.03 \& 2.27 & 2.337e-03 \& 2.772e-01 & 1.98 \& 1.004 \\ 64 & 1.588e-05 \& 1.543e-04 & 1.82 \& 2.16 & 5.926e-04 \& 1.384e-01 & 1.98 \& 1.002 \\ 128 & 7.569e-06 \& 4.856e-05 & 1.07 \& 1.67 & 1.557e-04 \& 6.915e-02 & 1.93 \& 1.001 \\ \hline \(1/h\) & \(L^{2}\&H^{1}\) errors of \(p_{1}\) & Orders & \(L^{2}\&H^{1}\) errors of \(p_{2}\) & Orders \\ \hline 8 & 1.217e-02 \& 3.663e-01 & & 2.592e-02 \& 7.288e-01 & \\ 16 & 3.054e-03 \& 1.863e-01 & 1.99 \& 0.98 & 6.474e-03 \& 3.690e-01 & 2.00 \& 0.98 \\ 32 & 7.753e-04 \& 9.269e-02 & 1.98 \& 1.01 & 1.630e-03 \& 1.844e-01 & 1.99 \& 1.00 \\ 64 & 1.971e-04 \& 4.621e-02 & 1.98 \& 1.00 & 4.130e-04 \& 9.215e-02 & 1.98 \& 1.00 \\ 128 & 5.370e-05 \& 2.308e-02 & 1.88 \& 1.00 & 1.093e-04 \& 4.606e-02 & 1.92 \& 1.00 \\ \hline \end{tabular} \end{table} Table 8: Convergence rate of the iterative decoupled algorithm. \(K_{1}=K_{2}=10^{-6}\), \(\Delta t=2\times 10^{-3}s\), and \(iters=10\). \begin{table} \begin{tabular}{c c c c c} \hline \(1/h\) & \(L^{2}\&H^{1}\) errors of \(\boldsymbol{u}\) & Orders & \(L^{2}\&H^{1}\) errors of \(\xi\) & Orders \\ \hline 8 & 1.021e-03 \& 1.664e-02 & & 3.675e-02 \& 1.091e+00 & \\ 16 & 2.289e-04 \& 3.291e-03 & 2.16 \& 2.34 & 9.226e-03 \& 5.560e-01 & 1.99 \& 0.973 \\ 32 & 5.500e-05 \& 6.757e-04 & 2.06 \& 2.28 & 2.341e-03 \& 2.772e-01 & 1.98 \& 1.004 \\ 64 & 1.389e-05 \& 1.474e-04 & 1.99 \& 2.20 & 6.010e-04 \& 1.384e-01 & 1.96 \& 1.002 \\ 128 & 3.871e-06 \& 3.579e-05 & 1.84 \& 2.04 & 1.636e-04 \& 6.914e-02 & 1.88 \& 1.001 \\ \hline \(1/h\) & \(L^{2}\&H^{1}\) errors of \(p_{1}\) & Orders & \(L^{2}\&H^{1}\) errors of \(p_{2}\) & Orders \\ \hline 8 & 1.209e-02 \& 3.663e-01 & & 2.588e-02 \& 7.288e-01 & \\ 16 & 3.038e-03 \& 1.863e-01 & 1.99 \& 0.98 & 6.470e-03 \& 3.690e-01 & 2.00 \& 0.98 \\ 32 & 7.750e-04 \& 9.269e-02 & 1.97 \& 1.01 & 1.634e-03 \& 1.844e-01 & 1.99 \& 1.00 \\ 64 & 1.994e-04 \& 4.621e-02 & 1.96 \& 1.00 & 4.184e-04 \& 9.215e-02 & 1.97 \& 1.00 \\ 128 & 5.432e-05 \& 2.307e-02 & 1.88 \& 1.00 & 1.139e-04 \& 4.606e-02 & 1.88 \& 1.00 \\ \hline \end{tabular} \end{table} Table 9: Convergence rate of the iterative decoupled algorithm. \(K_{1}=K_{2}=10^{-6}\), \(\Delta t=4\times 10^{-3}s\), and \(iters=20\). ## 6 Conclusion In this paper, we extend the iterative decoupled algorithm for Biot's consolidation model proposed in [26] to the multiple-network poroelasticity problem. Our analysis affirms that the sequence \((\boldsymbol{u}_{h}^{n,k},\xi_{h}^{n,k},\vec{p}_{h}^{n,k})\) produced by the iterative decoupled algorithm converges to the solution \((\boldsymbol{u}_{h}^{n},\xi_{h}^{n},\vec{p}_{h}^{n})\) acquired through the coupled algorithm.
\begin{table} \begin{tabular}{c c c c c} \hline \(1/h\) & \(L^{2}\&H^{1}\) errors of \(\boldsymbol{u}\) & Orders & \(L^{2}\&H^{1}\) errors of \(\xi\) & Orders \\ \hline 8 & 8.027e-04 \& 1.657e-02 & & 3.108e-02 \& 1.084e+00 & \\ 16 & 1.860e-04 \& 3.729e-03 & 2.11 \& 2.15 & 7.778e-03 \& 5.506e-01 & 2.00 \& 0.977 \\ 32 & 4.606e-05 \& 8.620e-04 & 2.01 \& 2.11 & 1.941e-03 \& 2.760e-01 & 2.00 \& 0.996 \\ 64 & 1.152e-05 \& 2.047e-04 & 2.00 \& 2.07 & 4.845e-04 \& 1.381e-01 & 2.00 \& 0.999 \\ 128 & 2.883e-06 \& 4.967e-05 & 2.00 \& 2.04 & 1.210e-04 \& 6.908e-02 & 2.00 \& 1.000 \\ \hline \(1/h\) & \(L^{2}\&H^{1}\) errors of \(p_{1}\) & Orders & \(L^{2}\&H^{1}\) errors of \(p_{2}\) & Orders \\ \hline 8 & 9.448e-03 \& 3.622e-01 & & 2.296e-02 \& 7.210e-01 & \\ 16 & 2.417e-03 \& 1.829e-01 & 1.97 \& 0.99 & 5.890e-03 \& 3.652e-01 & 1.96 \& 0.98 \\ 32 & 6.097e-04 \& 9.184e-02 & 1.99 \& 0.99 & 1.488e-03 \& 1.836e-01 & 1.98 \& 0.99 \\ 64 & 1.529e-04 \& 4.600e-02 & 2.00 \& 1.00 & 3.734e-04 \& 9.198e-02 & 1.99 \& 1.00 \\ 128 & 3.827e-05 \& 2.301e-02 & 2.00 \& 1.00 & 9.345e-05 \& 4.602e-02 & 2.00 \& 1.00 \\ \hline \end{tabular} \end{table} Table 10: Convergence rate of the coupled algorithm. \(c_{1}=c_{2}=0\) and \(\Delta t=2\times 10^{-4}s\). \begin{table} \begin{tabular}{c c c c c} \hline \(1/h\) & \(L^{2}\&H^{1}\) errors of \(\boldsymbol{u}\) & Orders & \(L^{2}\&H^{1}\) errors of \(\xi\) & Orders \\ \hline 8 & 8.829e-04 \& 1.673e-02 & & 3.264e-02 \& 1.084e+00 & \\ 16 & 2.080e-04 \& 3.760e-03 & 2.09 \& 2.15 & 8.127e-03 \& 5.507e-01 & 2.01 \& 0.977 \\ 32 & 5.719e-05 \& 8.770e-04 & 1.86 \& 2.10 & 2.011e-03 \& 2.760e-01 & 2.01 \& 0.996 \\ 64 & 2.944e-05 \& 2.442e-04 & 0.96 \& 1.84 & 4.972e-04 \& 1.381e-01 & 2.02 \& 0.999 \\ 128 & 2.702e-05 \& 1.421e-04 & 0.12 \& 0.78 & 1.529e-04 \& 6.909e-02 & 1.70 \& 1.000 \\ \hline \(1/h\) & \(L^{2}\&H^{1}\) errors of \(p_{1}\) & Orders & \(L^{2}\&H^{1}\) errors of \(p_{2}\) & Orders \\ \hline 8 & 1.011e-02 \& 3.618e-01 & & 2.383e-02 \& 7.208e-01 & \\ 16 & 2.554e-03 \& 1.828e-01 & 1.99 \& 0.98 & 6.081e-03 \& 3.652e-01 & 1.97 \& 0.98 \\ 32 & 6.388e-04 \& 9.184e-02 & 2.00 \& 0.99 & 1.527e-03 \& 1.836e-01 & 1.99 \& 0.99 \\ 64 & 1.722e-04 \& 4.600e-02 & 1.89 \& 1.00 & 3.834e-04 \& 9.199e-02 & 1.99 \& 1.00 \\ 128 & 8.820e-05 \& 2.302e-02 & 0.97 \& 1.00 & 1.191e-04 \& 4.603e-02 & 1.69 \& 1.00 \\ \hline \end{tabular} \end{table} Table 11: Convergence rate of the iterative decoupled algorithm. \(c_{1}=c_{2}=0\), \(\Delta t=2\times 10^{-3}s\), and \(iters=10\).
\begin{table} \begin{tabular}{c c c c c} \hline \(1/h\) & \(L^{2}\&H^{1}\) errors of \(\boldsymbol{u}\) & Orders & \(L^{2}\&H^{1}\) errors of \(\xi\) & Orders \\ \hline 8 & 8.829e-04 \& 1.673e-02 & & 3.264e-02 \& 1.084e+00 & \\ 16 & 2.080e-04 \& 3.760e-03 & 2.09 \& 2.15 & 8.127e-03 \& 5.507e-01 & 2.01 \& 0.977 \\ 32 & 5.719e-05 \& 8.770e-04 & 1.86 \& 2.10 & 2.011e-03 \& 2.760e-01 & 2.01 \& 0.996 \\ 64 & 2.944e-05 \& 2.442e-04 & 0.96 \& 1.84 & 4.972e-04 \& 1.381e-01 & 2.02 \& 0.999 \\ 128 & 2.702e-05 \& 1.421e-04 & 0.12 \& 0.78 & 1.529e-04 \& 6.909e-02 & 1.70 \& 1.000 \\ \hline \(1/h\) & \(L^{2}\&H^{1}\) errors of \(p_{1}\) & Orders & \(L^{2}\&H^{1}\) errors of \(p_{2}\) & Orders \\ \hline 8 & 1.011e-02 \& 3.618e-01 & & 2.383e-02 \& 7.208e-01 & \\ 16 & 2.554e-03 \& 1.828e-01 & 1.99 \& 0.98 & 6.081e-03 \& 3.652e-01 & 1.97 \& 0.98 \\ 32 & 6.388e-04 \& 9.184e-02 & 2.00 \& 0.99 & 1.527e-03 \& 1.836e-01 & 1.99 \& 0.99 \\ 64 & 1.722e-04 \& 4.600e-02 & 1.89 \& 1.00 & 3.834e-04 \& 9.199e-02 & 1.99 \& 1.00 \\ 128 & 8.820e-05 \& 2.302e-02 & 0.97 \& 1.00 & 1.191e-04 \& 4.603e-02 & 1.69 \& 1.00 \\ \hline \end{tabular} \end{table} Table 12: Convergence rate of the iterative decoupled algorithm. \(c_{1}=c_{2}=0\), \(\Delta t=4\times 10^{-3}s\), and \(iters=20\). Furthermore, the iterative decoupled algorithm demonstrates unconditional stability and convergence characteristics. We validate the soundness of our analysis through numerical experiments. ## Appendix A Error analysis of the coupled algorithm This appendix provides the optimal order error estimate for the coupled algorithm. Specifically, we demonstrate that the temporal error is of order \(O(\Delta t)\); the energy-norm errors for \(\mathbf{u}\) and \(\vec{p}\) are of order \(O(h^{2})\) and \(O(h)\), respectively; and the \(L^{2}\) norm errors for \(\xi\) and \(\vec{p}\) are of order \(O(h^{2})\). On the continuous level, we refer readers to [2] for the energy estimate of the variational problem (4). In the following lemma, we provide the energy estimate for the coupled algorithm. **Lemma A.1**.: _Let \(\{(\mathbf{u}_{h}^{n},\xi_{h}^{n},\vec{p}_{h}^{n})\}\) be the solutions of problem (6a)-(6c), then the following identity holds_ \[J_{h}^{l}+S_{h}^{l}=J_{h}^{0}\quad\text{for }l\in\{1,2,\cdots,L\}.\] (A.1) _where_ \[J_{h}^{l} :=\mu\|\epsilon(\mathbf{u}_{h}^{l})\|_{L^{2}(\Omega)}^{2}+\frac{1}{2\lambda}\|\vec{\alpha}^{\mathrm{T}}\vec{p}_{h}^{l}-\xi_{h}^{l}\|_{L^{2}(\Omega)}^{2}+\frac{1}{2}\|\hat{S}\vec{p}_{h}^{l}\|_{L^{2}(\Omega)}^{2}-(\mathbf{f},\mathbf{u}_{h}^{l})-\langle\mathbf{h},\mathbf{u}_{h}^{l}\rangle_{\Gamma_{\mathbf{u},N}}.\] \[S_{h}^{l} :=\] \[\Delta t\sum_{n=1}^{l}\bigg{[}\Delta t\left(\mu\|d_{t}\epsilon(\mathbf{u}_{h}^{n})\|_{L^{2}(\Omega)}^{2}+\frac{1}{2\lambda}\|d_{t}(\vec{\alpha}^{\mathrm{T}}\vec{p}_{h}^{n}-\xi_{h}^{n})\|_{L^{2}(\Omega)}^{2}+\frac{1}{2}\|d_{t}(\hat{S}\vec{p}_{h}^{n})\|_{L^{2}(\Omega)}^{2}\right)\] \[+\|\hat{K}\nabla\vec{p}_{h}^{n}\|_{L^{2}(\Omega)}^{2}+\sum_{1\leq i<j\leq N}\beta_{i,j}\|p_{i,h}^{n}-p_{j,h}^{n}\|_{L^{2}(\Omega)}^{2}-(\vec{g},\vec{p}_{h}^{n})-\left\langle\vec{l},\vec{p}_{h}^{n}\right\rangle_{\Gamma_{\vec{p},N}}\bigg{]}.\] _Here, we denote \(d_{t}\eta^{n}:=\frac{\eta^{n}-\eta^{n-1}}{\Delta t}\), where \(\eta\) can be a vector or a scalar, and \(\hat{S}=\text{diag}(\sqrt{c_{1}},\sqrt{c_{2}},\cdots,\sqrt{c_{N}}),\hat{K}=\text{diag}(\sqrt{K_{1}},\sqrt{K_{2}},\cdots,\sqrt{K_{N}})\).
Moreover, there holds_ \[\|\xi_{h}^{l}\|_{L^{2}(\Omega)}\leq C(\|\epsilon(\mathbf{u}_{h}^{l})\|_{L^{2}(\Omega)}+\|\mathbf{f}\|_{L^{2}(\Omega)}+\|\mathbf{h}\|_{L^{2}(\Gamma_{\mathbf{u},N})}),\] (A.2) _where \(C\) is a positive constant._ **Proof**. Setting \(\mathbf{v}_{h}=d_{t}\mathbf{u}_{h}^{n}\) in (6a), \(\eta_{h}=\xi_{h}^{n}\) in (6b), \(\vec{q}_{h}=\vec{p}_{h}^{n}\) in (6c), we have \[2\mu(\epsilon(\mathbf{u}_{h}^{n}),d_{t}\epsilon(\mathbf{u}_{h}^{n}))-(\xi_{h}^{n},\text{div}\,d_{t}\mathbf{u}_{h}^{n}) =d_{t}(\mathbf{f},\mathbf{u}_{h}^{n})+d_{t}\langle\mathbf{h},\mathbf{u}_{h}^{n}\rangle_{\Gamma_{\mathbf{u},N}},\] (A.3) \[(\text{div}\,d_{t}\mathbf{u}_{h}^{n},\xi_{h}^{n})+\frac{1}{\lambda}(d_{t}\xi_{h}^{n},\xi_{h}^{n}) =\frac{1}{\lambda}(\vec{\alpha}^{\mathrm{T}}d_{t}\vec{p}_{h}^{n},\xi_{h}^{n}),\] (A.4) \[\left(\left(S+\frac{1}{\lambda}\vec{\alpha}\vec{\alpha}^{\mathrm{T}}\right)d_{t}\vec{p}_{h}^{n},\vec{p}_{h}^{n}\right)-\frac{1}{\lambda}\left(\vec{\alpha}d_{t}\xi_{h}^{n},\vec{p}_{h}^{n}\right) +(K\nabla\vec{p}_{h}^{n},\nabla\vec{p}_{h}^{n})+(B\vec{p}_{h}^{n},\vec{p}_{h}^{n})\] (A.5) \[=(\vec{g},\vec{p}_{h}^{n})+\left\langle\vec{l},\vec{p}_{h}^{n}\right\rangle_{\Gamma_{\vec{p},N}}.\] Summing up (A.3)-(A.5), we get \[2\mu(\epsilon(\mathbf{u}_{h}^{n}),d_{t}\epsilon(\mathbf{u}_{h}^{n}))-d_{t}(\mathbf{f},\mathbf{u}_{h}^{n})-d_{t}\langle\mathbf{h},\mathbf{u}_{h}^{n}\rangle_{\Gamma_{\mathbf{u},N}}+\frac{1}{\lambda}(d_{t}\xi_{h}^{n},\xi_{h}^{n})-\frac{1}{\lambda}(\vec{\alpha}^{\mathsf{T}}d_{t}\vec{p}_{h}^{n},\xi_{h}^{n})\] (A.6) \[+\left(\left(S+\frac{1}{\lambda}\vec{\alpha}\vec{\alpha}^{\mathsf{T}}\right)d_{t}\vec{p}_{h}^{n},\vec{p}_{h}^{n}\right)-\frac{1}{\lambda}\left(\vec{\alpha}d_{t}\xi_{h}^{n},\vec{p}_{h}^{n}\right)+(K\nabla\vec{p}_{h}^{n},\nabla\vec{p}_{h}^{n})+(B\vec{p}_{h}^{n},\vec{p}_{h}^{n})-(\vec{g},\vec{p}_{h}^{n})\] \[-\left\langle\vec{l},\vec{p}_{h}^{n}\right\rangle_{\Gamma_{\vec{p},N}}=0.\] Note that the following identities hold.
\[2(\eta_{h}^{n},d_{t}\eta_{h}^{n}) =d_{t}\|\eta_{h}^{n}\|_{L^{2}(\Omega)}^{2}+\Delta t\|d_{t}\eta_{h}^{n}\|_{L^{2}(\Omega)}^{2},\] \[\frac{1}{\lambda}(d_{t}\xi_{h}^{n},\xi_{h}^{n})-\frac{1}{\lambda}(\vec{\alpha}^{\mathsf{T}}d_{t}\vec{p}_{h}^{n},\xi_{h}^{n})+\frac{1}{\lambda}\left(\vec{\alpha}\vec{\alpha}^{\mathsf{T}}d_{t}\vec{p}_{h}^{n},\vec{p}_{h}^{n}\right)-\frac{1}{\lambda}\left(\vec{\alpha}d_{t}\xi_{h}^{n},\vec{p}_{h}^{n}\right)\] \[=\frac{1}{\lambda}(\vec{\alpha}^{\mathsf{T}}\vec{p}_{h}^{n}-\xi_{h}^{n},d_{t}(\vec{\alpha}^{\mathsf{T}}\vec{p}_{h}^{n}-\xi_{h}^{n})),\] \[(B\vec{p}_{h}^{n},\vec{p}_{h}^{n}) =\sum_{1\leq i<j\leq N}\beta_{i,j}\|p_{i,h}^{n}-p_{j,h}^{n}\|_{L^{2}(\Omega)}^{2}.\] According to the above identities, (A.6) can be rewritten as \[d_{t}\bigg{(}\mu\|\epsilon(\mathbf{u}_{h}^{n})\|_{L^{2}(\Omega)}^{2}+\frac{1}{2\lambda}\|\vec{\alpha}^{\mathsf{T}}\vec{p}_{h}^{n}-\xi_{h}^{n}\|_{L^{2}(\Omega)}^{2}+\frac{1}{2}\|\hat{S}\vec{p}_{h}^{n}\|_{L^{2}(\Omega)}^{2}-(\mathbf{f},\mathbf{u}_{h}^{n})\] (A.7) \[-\langle\mathbf{h},\mathbf{u}_{h}^{n}\rangle_{\Gamma_{\mathbf{u},N}}\bigg{)}+\Delta t\left(\mu\|d_{t}\epsilon(\mathbf{u}_{h}^{n})\|_{L^{2}(\Omega)}^{2}+\frac{1}{2\lambda}\|d_{t}(\vec{\alpha}^{\mathsf{T}}\vec{p}_{h}^{n}-\xi_{h}^{n})\|_{L^{2}(\Omega)}^{2}+\frac{1}{2}\|d_{t}(\hat{S}\vec{p}_{h}^{n})\|_{L^{2}(\Omega)}^{2}\right)\] \[+\|\hat{K}\nabla\vec{p}_{h}^{n}\|_{L^{2}(\Omega)}^{2}+\sum_{1\leq i<j\leq N}\beta_{i,j}\|p_{i,h}^{n}-p_{j,h}^{n}\|_{L^{2}(\Omega)}^{2}-(\vec{g},\vec{p}_{h}^{n})-\left\langle\vec{l},\vec{p}_{h}^{n}\right\rangle_{\Gamma_{\vec{p},N}}=0.\] Applying the summation operator \(\Delta t\sum_{n=1}^{l}\) to both sides of (A.7), we get (A.1). In addition, using (5) and (6a), we have (A.2). As a preliminary step for the a priori error estimates of the discrete formulation (6), we introduce a set of auxiliary projection operators. In particular, we define projection operators \[\Pi_{h}^{\mathbf{V}}:\mathbf{V}\to\mathbf{V}_{h},\quad\Pi_{h}^{W}:W\to W_{h},\] as follows. For any \((\mathbf{u},\xi)\in\mathbf{V}\times W\), we define its projection \((\Pi_{h}^{\mathbf{V}}\mathbf{u},\Pi_{h}^{W}\xi)\in\mathbf{V}_{h}\times W_{h}\) as the unique discrete solution to the Stokes problem: \[2\mu(\epsilon(\Pi_{h}^{\mathbf{V}}\mathbf{u}),\epsilon(\mathbf{v}_{h}))-(\Pi_{h}^{W}\xi,\operatorname{div}\mathbf{v}_{h}) =2\mu(\epsilon(\mathbf{u}),\epsilon(\mathbf{v}_{h}))-(\xi,\operatorname{div}\mathbf{v}_{h})\quad\forall\mathbf{v}_{h}\in\mathbf{V}_{h},\] (A.8a) \[-(\operatorname{div}\Pi_{h}^{\mathbf{V}}\mathbf{u},\eta_{h}) =-(\operatorname{div}\mathbf{u},\eta_{h})\quad\forall\eta_{h}\in W_{h}.\] (A.8b) We also define the projection operator \[\Pi_{h}^{M}:M\to M_{h},\] as follows. For any \(\vec{p}\in M\), we define its projection \(\Pi_{h}^{M}\vec{p}\in M_{h}\) as the unique discrete solution to the weighted elliptic problem \[(K\nabla\Pi_{h}^{M}\vec{p},\nabla\vec{q}_{h})=(K\nabla\vec{p},\nabla\vec{q}_{h})\quad\forall\vec{q}_{h}\in M_{h}.\] (A.9) Here, we list the properties of the operators \((\Pi_{h}^{\boldsymbol{V}},\Pi_{h}^{W},\Pi_{h}^{M})\).
For all \((\boldsymbol{u},\xi,\vec{p})\in[H^{3}(\Omega)]^{2}\times H^{2}(\Omega)\times[H^{2}(\Omega)]^{N}\), there holds [2] \[\|\boldsymbol{u}-\Pi_{h}^{\boldsymbol{V}}\boldsymbol{u}\|_{H^{1}(\Omega)}+\|\xi-\Pi_{h}^{W}\xi\|_{L^{2}(\Omega)} \leq Ch^{2}\left(\|\boldsymbol{u}\|_{H^{3}(\Omega)}+\|\xi\|_{H^{2}(\Omega)}\right),\] (A.10) \[\|\vec{p}-\Pi_{h}^{M}\vec{p}\|_{H^{1}(\Omega)} \leq Ch\|\vec{p}\|_{H^{2}(\Omega)},\] (A.11) \[\|\vec{p}-\Pi_{h}^{M}\vec{p}\|_{L^{2}(\Omega)} \leq Ch^{2}\|\vec{p}\|_{H^{2}(\Omega)}.\] (A.12) For convenience, we introduce the standard decomposition of the errors into projection and discretization errors in the subsequent analysis, \[e_{\boldsymbol{u}}^{n}=\boldsymbol{u}^{n}-\boldsymbol{u}_{h}^{n} =(\boldsymbol{u}^{n}-\Pi_{h}^{\boldsymbol{V}}\boldsymbol{u}^{n})+(\Pi_{h}^{\boldsymbol{V}}\boldsymbol{u}^{n}-\boldsymbol{u}_{h}^{n}):=e_{\boldsymbol{u}}^{I,n}+e_{\boldsymbol{u}}^{h,n},\] \[e_{\xi}^{n}=\xi^{n}-\xi_{h}^{n} =(\xi^{n}-\Pi_{h}^{W}\xi^{n})+(\Pi_{h}^{W}\xi^{n}-\xi_{h}^{n}):=e_{\xi}^{I,n}+e_{\xi}^{h,n},\] \[e_{\vec{p}}^{n}=\vec{p}^{n}-\vec{p}_{h}^{n} =(\vec{p}^{n}-\Pi_{h}^{M}\vec{p}^{n})+(\Pi_{h}^{M}\vec{p}^{n}-\vec{p}_{h}^{n}):=e_{\vec{p}}^{I,n}+e_{\vec{p}}^{h,n}.\] **Lemma A.2**.: _Let \(\{(\boldsymbol{u}_{h}^{n},\xi_{h}^{n},\vec{p}_{h}^{n})\}\) be the solutions of problem (6a)-(6c), then the following identity holds_ \[E_{h}^{l}+\Delta t\sum_{n=1}^{l}\left(\|\hat{K}\nabla e_{\vec{p}}^{h,n}\|_{L^{2}(\Omega)}^{2}+\left(Be_{\vec{p}}^{h,n},e_{\vec{p}}^{h,n}\right)\right)\] (A.13) \[+(\Delta t)^{2}\sum_{n=1}^{l}\left(\mu\|d_{t}\epsilon(e_{\boldsymbol{u}}^{h,n})\|_{L^{2}(\Omega)}^{2}+\frac{1}{2\lambda}\|d_{t}(\vec{\alpha}^{\mathsf{T}}e_{\vec{p}}^{h,n}-e_{\xi}^{h,n})\|_{L^{2}(\Omega)}^{2}+\frac{1}{2}\|d_{t}\hat{S}e_{\vec{p}}^{h,n}\|_{L^{2}(\Omega)}^{2}\right)\] \[=E_{h}^{0}+\Delta t\sum_{n=1}^{l}\Bigg{[}\left(\operatorname{div}(d_{t}\boldsymbol{u}^{n}-\dot{\boldsymbol{u}}^{n}),e_{\xi}^{h,n}\right)+\frac{1}{\lambda}\left(d_{t}\Pi_{h}^{W}\xi^{n}-\dot{\xi}^{n},e_{\xi}^{h,n}\right)\] \[-\frac{1}{\lambda}\left(\vec{\alpha}^{\mathsf{T}}(d_{t}\Pi_{h}^{M}\vec{p}^{n}-\dot{\vec{p}}^{n}),e_{\xi}^{h,n}\right)+\left(B\left(\Pi_{h}^{M}\vec{p}^{n}-\vec{p}^{n}\right),e_{\vec{p}}^{h,n}\right)\] \[-\frac{1}{\lambda}\left(\vec{\alpha}\left(d_{t}\Pi_{h}^{W}\xi^{n}-\dot{\xi}^{n}\right),e_{\vec{p}}^{h,n}\right)+\left(\left(S+\frac{1}{\lambda}\vec{\alpha}\vec{\alpha}^{\mathsf{T}}\right)\left(d_{t}\Pi_{h}^{M}\vec{p}^{n}-\dot{\vec{p}}^{n}\right),e_{\vec{p}}^{h,n}\right)\Bigg{]},\] _where_ \[E_{h}^{l}:=\mu\|\epsilon(e_{\boldsymbol{u}}^{h,l})\|_{L^{2}(\Omega)}^{2}+\frac{1}{2\lambda}\|\vec{\alpha}^{\mathsf{T}}e_{\vec{p}}^{h,l}-e_{\xi}^{h,l}\|_{L^{2}(\Omega)}^{2}+\frac{1}{2}\|\hat{S}e_{\vec{p}}^{h,l}\|_{L^{2}(\Omega)}^{2}.\] **Proof**.
Letting \(t=t_{n},\mathbf{v}=\mathbf{v}_{h}\) in (4a), subtracting (6a) from equation (4a) and combining with (A.8), there holds \[2\mu(\epsilon(e^{h,n}_{\mathbf{u}}),\epsilon(\mathbf{v}_{h}))-(e^{h,n}_{\xi},\operatorname{div}\mathbf{v}_{h})=0.\] (A.14) Taking the derivative of (4b) with respect to time \(t\), and letting \(t=t_{n},\eta=\eta_{h}\), we have \[(\operatorname{div}\dot{\mathbf{u}}^{n},\eta_{h})+\frac{1}{\lambda}(\dot{\xi}^{n},\eta_{h})-\frac{1}{\lambda}(\vec{\alpha}^{\mathsf{T}}\dot{\vec{p}}^{n},\eta_{h})=0.\] (A.15) Using (6b), it holds that \[(\operatorname{div}d_{t}\mathbf{u}^{n},\eta_{h})+\frac{1}{\lambda}(d_{t}\xi^{n},\eta_{h})-\frac{1}{\lambda}(\vec{\alpha}^{\mathsf{T}}d_{t}\vec{p}^{n},\eta_{h})=0.\] (A.16) By subtracting (A.15) from (A.16), we obtain \[(\operatorname{div}(d_{t}\mathbf{u}^{n}-\dot{\mathbf{u}}^{n}),\eta_{h})+\frac{1}{\lambda}(d_{t}\xi^{n}-\dot{\xi}^{n},\eta_{h})-\frac{1}{\lambda}(\vec{\alpha}^{\mathsf{T}}(d_{t}\vec{p}^{n}-\dot{\vec{p}}^{n}),\eta_{h})=0.\] (A.17) The combination of (4b), (6b) and (A.8) implies that \[(\operatorname{div}e^{h,n}_{\mathbf{u}},\eta_{h})+\frac{1}{\lambda}(e^{I,n}_{\xi}+e^{h,n}_{\xi},\eta_{h})-\frac{1}{\lambda}(\vec{\alpha}^{\mathsf{T}}(e^{I,n}_{\vec{p}}+e^{h,n}_{\vec{p}}),\eta_{h}) =0,\] (A.18) \[(\operatorname{div}e^{h,n-1}_{\mathbf{u}},\eta_{h})+\frac{1}{\lambda}(e^{I,n-1}_{\xi}+e^{h,n-1}_{\xi},\eta_{h})-\frac{1}{\lambda}(\vec{\alpha}^{\mathsf{T}}(e^{I,n-1}_{\vec{p}}+e^{h,n-1}_{\vec{p}}),\eta_{h}) =0.\] (A.19) Using (A.19) and (A.18), we get \[(\operatorname{div}(d_{t}e^{h,n}_{\mathbf{u}}),\eta_{h})+\frac{1}{\lambda}(d_{t}e^{I,n}_{\xi}+d_{t}e^{h,n}_{\xi},\eta_{h})-\frac{1}{\lambda}(\vec{\alpha}^{\mathsf{T}}(d_{t}e^{I,n}_{\vec{p}}+d_{t}e^{h,n}_{\vec{p}}),\eta_{h})=0.\] (A.20) Combining (A.17) with (A.20), we obtain \[(\operatorname{div}(d_{t}e^{h,n}_{\mathbf{u}}),\eta_{h})+\frac{1}{\lambda}(d_{t}e^{h,n}_{\xi},\eta_{h})-\frac{1}{\lambda}(\vec{\alpha}^{\mathsf{T}}d_{t}e^{h,n}_{\vec{p}},\eta_{h})=(\operatorname{div}(d_{t}\mathbf{u}^{n}-\dot{\mathbf{u}}^{n}),\eta_{h})\] (A.21) \[+\frac{1}{\lambda}(d_{t}\xi^{n}-\dot{\xi}^{n},\eta_{h})-\frac{1}{\lambda}(d_{t}e^{I,n}_{\xi},\eta_{h})-\frac{1}{\lambda}(\vec{\alpha}^{\mathsf{T}}(d_{t}\vec{p}^{n}-\dot{\vec{p}}^{n}),\eta_{h})+\frac{1}{\lambda}(\vec{\alpha}^{\mathsf{T}}d_{t}e^{I,n}_{\vec{p}},\eta_{h}).\] The above equation (A.21) can also be written as \[(\operatorname{div}(d_{t}e^{h,n}_{\mathbf{u}}),\eta_{h})+\frac{1}{\lambda}(d_{t}e^{h,n}_{\xi},\eta_{h})-\frac{1}{\lambda}(\vec{\alpha}^{\mathsf{T}}d_{t}e^{h,n}_{\vec{p}},\eta_{h})=(\operatorname{div}(d_{t}\mathbf{u}^{n}-\dot{\mathbf{u}}^{n}),\eta_{h})\] (A.22) \[+\frac{1}{\lambda}(d_{t}\Pi^{W}_{h}\xi^{n}-\dot{\xi}^{n},\eta_{h})-\frac{1}{\lambda}(\vec{\alpha}^{\mathsf{T}}(d_{t}\Pi^{M}_{h}\vec{p}^{n}-\dot{\vec{p}}^{n}),\eta_{h}).\] In (4c), we let \(t=t_{n}\) and \(\vec{q}=\vec{q}_{h}\), then subtract equation (6c) from it.
Combining with (A.8) and (A.9), we have \[\left(\left(S+\frac{1}{\lambda}\vec{\alpha}\vec{\alpha}^{\mathsf{T}}\right)d_{t}e_{\vec{p}}^{h,n},\vec{q}_{h}\right)-\frac{1}{\lambda}\left(\vec{\alpha}d_{t}e_{\xi}^{h,n},\vec{q}_{h}\right)+\left(K\nabla e_{\vec{p}}^{h,n},\nabla\vec{q}_{h}\right)\] (A.23) \[+\left(Be_{\vec{p}}^{h,n},\vec{q}_{h}\right)=\left(\left(S+\frac{1}{\lambda}\vec{\alpha}\vec{\alpha}^{\mathsf{T}}\right)\left(d_{t}\Pi_{h}^{M}\vec{p}^{n}-\dot{\vec{p}}^{n}\right),\vec{q}_{h}\right)-\frac{1}{\lambda}\left(\vec{\alpha}\left(d_{t}\Pi_{h}^{W}\xi^{n}-\dot{\xi}^{n}\right),\vec{q}_{h}\right)\] \[+\left(B\left(\Pi_{h}^{M}\vec{p}^{n}-\vec{p}^{n}\right),\vec{q}_{h}\right).\] In (A.14), (A.22) and (A.23), letting \(\mathbf{v}_{h}=d_{t}e_{\mathbf{u}}^{h,n}\), \(\eta_{h}=e_{\xi}^{h,n}\), \(\vec{q}_{h}=e_{\vec{p}}^{h,n}\), and then adding them up, one can get \[2\mu(\epsilon(e_{\mathbf{u}}^{h,n}),\epsilon(d_{t}e_{\mathbf{u}}^{h,n}))+\frac{1}{\lambda}(d_{t}e_{\xi}^{h,n},e_{\xi}^{h,n})-\frac{1}{\lambda}(\vec{\alpha}^{\mathsf{T}}d_{t}e_{\vec{p}}^{h,n},e_{\xi}^{h,n})+\left(\left(S+\frac{1}{\lambda}\vec{\alpha}\vec{\alpha}^{\mathsf{T}}\right)d_{t}e_{\vec{p}}^{h,n},e_{\vec{p}}^{h,n}\right)\] \[-\frac{1}{\lambda}\left(\vec{\alpha}d_{t}e_{\xi}^{h,n},e_{\vec{p}}^{h,n}\right)+\left(K\nabla e_{\vec{p}}^{h,n},\nabla e_{\vec{p}}^{h,n}\right)+\left(Be_{\vec{p}}^{h,n},e_{\vec{p}}^{h,n}\right)=\left(\operatorname{div}(d_{t}\mathbf{u}^{n}-\dot{\mathbf{u}}^{n}),e_{\xi}^{h,n}\right)\] \[+\frac{1}{\lambda}(d_{t}\Pi_{h}^{W}\xi^{n}-\dot{\xi}^{n},e_{\xi}^{h,n})-\frac{1}{\lambda}(\vec{\alpha}^{\mathsf{T}}(d_{t}\Pi_{h}^{M}\vec{p}^{n}-\dot{\vec{p}}^{n}),e_{\xi}^{h,n})-\frac{1}{\lambda}\left(\vec{\alpha}\left(d_{t}\Pi_{h}^{W}\xi^{n}-\dot{\xi}^{n}\right),e_{\vec{p}}^{h,n}\right)\] \[+\left(\left(S+\frac{1}{\lambda}\vec{\alpha}\vec{\alpha}^{\mathsf{T}}\right)\left(d_{t}\Pi_{h}^{M}\vec{p}^{n}-\dot{\vec{p}}^{n}\right),e_{\vec{p}}^{h,n}\right)+\left(B\left(\Pi_{h}^{M}\vec{p}^{n}-\vec{p}^{n}\right),e_{\vec{p}}^{h,n}\right).\] The above formula can also be written as \[d_{t}\left(\mu\|\epsilon(e_{\mathbf{u}}^{h,n})\|_{L^{2}(\Omega)}^{2}+\frac{1}{2\lambda}\|\vec{\alpha}^{\mathsf{T}}e_{\vec{p}}^{h,n}-e_{\xi}^{h,n}\|_{L^{2}(\Omega)}^{2}+\frac{1}{2}\|\hat{S}e_{\vec{p}}^{h,n}\|_{L^{2}(\Omega)}^{2}\right)\] (A.24) \[+\|\hat{K}\nabla e_{\vec{p}}^{h,n}\|_{L^{2}(\Omega)}^{2}+\left(Be_{\vec{p}}^{h,n},e_{\vec{p}}^{h,n}\right)\] \[+\Delta t\left(\mu\|d_{t}\epsilon(e_{\mathbf{u}}^{h,n})\|_{L^{2}(\Omega)}^{2}+\frac{1}{2\lambda}\|d_{t}(\vec{\alpha}^{\mathsf{T}}e_{\vec{p}}^{h,n}-e_{\xi}^{h,n})\|_{L^{2}(\Omega)}^{2}+\frac{1}{2}\|d_{t}\hat{S}e_{\vec{p}}^{h,n}\|_{L^{2}(\Omega)}^{2}\right)\] \[=\left(\operatorname{div}(d_{t}\mathbf{u}^{n}-\dot{\mathbf{u}}^{n}),e_{\xi}^{h,n}\right)+\frac{1}{\lambda}\left(d_{t}\Pi_{h}^{W}\xi^{n}-\dot{\xi}^{n},e_{\xi}^{h,n}\right)-\frac{1}{\lambda}\left(\vec{\alpha}^{\mathsf{T}}(d_{t}\Pi_{h}^{M}\vec{p}^{n}-\dot{\vec{p}}^{n}),e_{\xi}^{h,n}\right)\] \[-\frac{1}{\lambda}\left(\vec{\alpha}\left(d_{t}\Pi_{h}^{W}\xi^{n}-\dot{\xi}^{n}\right),e_{\vec{p}}^{h,n}\right)+\left(\left(S+\frac{1}{\lambda}\vec{\alpha}\vec{\alpha}^{\mathsf{T}}\right)\left(d_{t}\Pi_{h}^{M}\vec{p}^{n}-\dot{\vec{p}}^{n}\right),e_{\vec{p}}^{h,n}\right)\] \[+\left(B\left(\Pi_{h}^{M}\vec{p}^{n}-\vec{p}^{n}\right),e_{\vec{p}}^{h,n}\right).\] Applying the summation operator \(\Delta t\sum_{n=1}^{l}\) to both sides, we obtain (A.13). In the rest of the analysis, we will use the following discrete Gronwall inequality [29].
**Lemma A.3**.: _Let \(\tau\), \(B\), and \(a_{j}\), \(b_{j}\), \(c_{j}\), \(\gamma_{j}\), \(\forall j\geq 1\), be non-negative numbers such that_ \[a_{n}+\sum_{j=1}^{n}b_{j}\leq B+\tau\sum_{j=1}^{n}c_{j}+\tau\sum_{j=1}^{n}\gamma_{j}a_{j}.\] (A.25) _If \(\tau\gamma_{j}<1,j=1,\cdots,n\), there holds_ \[a_{n}+\sum_{j=1}^{n}b_{j}\leq C\left(B+\tau\sum_{j=1}^{n}c_{j}\right),\] (A.26) _where \(C=e^{\tau\sum_{j=1}^{n}\frac{\gamma_{j}}{1-\tau\gamma_{j}}}\)._ **Theorem A.4**.: _Let \(\{(\boldsymbol{u}_{h}^{n},\xi_{h}^{n},\vec{p}_{h}^{n})\}\) be the solution of problem (6a)-(6c), then the following error estimate holds_ \[E_{h}^{l}+\frac{1}{2}\Delta t\sum_{n=1}^{l}\|\hat{K}\nabla e_{\vec{p}}^{h,n}\|_{L^{2}(\Omega)}^{2}\leq C_{1}(\Delta t)^{2}+C_{2}h^{4}+C_{3}\Delta t\,h^{4},\] (A.27) _where_ \[C_{1} =C_{1}\left(\|\ddot{\boldsymbol{u}}\|_{L^{2}(0,t_{l};H^{1}(\Omega))},\|\ddot{\xi}\|_{L^{2}(0,t_{l};L^{2}(\Omega))},\|\ddot{\vec{p}}\|_{L^{2}(0,t_{l};L^{2}(\Omega))}\right),\] \[C_{2} =C_{2}\left(\|\dot{\boldsymbol{u}}\|_{L^{2}(0,t_{l};H^{3}(\Omega))},\|\dot{\xi}\|_{L^{2}(0,t_{l};H^{2}(\Omega))},\|\dot{\vec{p}}\|_{L^{2}(0,t_{l};H^{2}(\Omega))}\right),\] \[C_{3} =C_{3}\left(\|\vec{p}\|_{L^{\infty}(0,t_{l};H^{2}(\Omega))}\right).\] **Proof**. According to the Cauchy-Schwarz inequality, the Young inequality with an \(\epsilon_{1}>0\), and Taylor's formula, we obtain \[\Delta t\sum_{n=1}^{l}\left(\mathrm{div}(\mathrm{d}_{t}\boldsymbol{u}^{n}-\dot{\boldsymbol{u}}^{n}),e_{\xi}^{h,n}\right)\] (A.28) \[=\sum_{n=1}^{l}\left(\mathrm{div}(\boldsymbol{u}^{n}-\boldsymbol{u}^{n-1}-\Delta t\dot{\boldsymbol{u}}^{n}),e_{\xi}^{h,n}\right)\] \[\leq\sum_{n=1}^{l}\|\boldsymbol{u}^{n}-\boldsymbol{u}^{n-1}-\Delta t\dot{\boldsymbol{u}}^{n}\|_{H^{1}(\Omega)}\|e_{\xi}^{h,n}\|_{L^{2}(\Omega)}\] \[\leq\left(\sum_{n=1}^{l}\frac{(\Delta t)^{3}}{3}\int_{t_{n-1}}^{t_{n}}\|\ddot{\boldsymbol{u}}\|_{H^{1}(\Omega)}^{2}\mathrm{d}s\right)^{\frac{1}{2}}\left(\sum_{n=1}^{l}\|e_{\xi}^{h,n}\|_{L^{2}(\Omega)}^{2}\right)^{\frac{1}{2}}\] \[\leq\left(\frac{(\Delta t)^{2}}{3}\int_{t_{0}}^{t_{l}}\|\ddot{\boldsymbol{u}}\|_{H^{1}(\Omega)}^{2}\mathrm{d}s\right)^{\frac{1}{2}}\left(\Delta t\sum_{n=1}^{l}\|e_{\xi}^{h,n}\|_{L^{2}(\Omega)}^{2}\right)^{\frac{1}{2}}\] \[\leq\frac{(\Delta t)^{2}}{6\epsilon_{1}}\|\ddot{\boldsymbol{u}}\|_{L^{2}(0,t_{l};H^{1}(\Omega))}^{2}+\frac{\epsilon_{1}\Delta t}{2}\sum_{n=1}^{l}\|e_{\xi}^{h,n}\|_{L^{2}(\Omega)}^{2}.\] Similarly, the following term can be bounded by \[\Delta t\sum_{n=1}^{l}\frac{1}{\lambda}(\mathrm{d}_{t}\Pi_{h}^{W}\xi^{n}-\dot{\xi}^{n},e_{\xi}^{h,n})\] (A.29) \[\leq\frac{1}{\lambda^{2}\epsilon_{2}}\left(Ch^{4}\|\dot{\xi}\|^{2}_{L^{2}(0,t_{l};H^{2}(\Omega))}+Ch^{4}\|\dot{\mathbf{u}}\|^{2}_{L^{2}(0,t_{l};H^{3}(\Omega))}+\frac{(\Delta t)^{2}}{3}\|\ddot{\xi}\|^{2}_{L^{2}(0,t_{l};L^{2}(\Omega))}\right)\] \[+\frac{\epsilon_{2}\Delta t}{2}\sum_{n=1}^{l}\|e_{\xi}^{h,n}\|^{2}_{L^{2}(\Omega)}.\] In consideration of (A.12), there holds \[\Delta t\sum_{n=1}^{l}\frac{1}{\lambda}\left(\vec{\alpha}^{\mathsf{T}}(\mathrm{d}_{t}\Pi_{h}^{M}\vec{p}^{n}-\dot{\vec{p}}^{n}),e_{\xi}^{h,n}\right)\] (A.30) \[\leq\frac{1}{\lambda^{2}\epsilon_{3}}\|\vec{\alpha}\|^{2}\left(Ch^{4}\|\dot{\vec{p}}\|^{2}_{L^{2}(0,t_{l};H^{2}(\Omega))}+\frac{(\Delta t)^{2}}{3}\|\ddot{\vec{p}}\|^{2}_{L^{2}(0,t_{l};L^{2}(\Omega))}\right)+\frac{\epsilon_{3}\Delta t}{2}\sum_{n=1}^{l}\|e_{\xi}^{h,n}\|^{2}_{L^{2}(\Omega)}.\] By use of estimate (A.12) and the Poincaré inequality, we get
\[\Delta t\sum_{n=1}^{l}\left(S\left(\mathrm{d}_{t}\Pi_{h}^{M}\vec{p}^{n}-\dot{\vec{p}}^{n}\right),e_{\vec{p}}^{h,n}\right)\] (A.31) \[\leq\frac{\|S\|^{2}}{\epsilon_{4}}\left(Ch^{4}\|\dot{\vec{p}}\|^{2}_{L^{2}(0,t_{l};H^{2}(\Omega))}+\frac{(\Delta t)^{2}}{3}\|\ddot{\vec{p}}\|^{2}_{L^{2}(0,t_{l};L^{2}(\Omega))}\right)+\frac{\Delta t\epsilon_{4}}{2}\sum_{n=1}^{l}\|\nabla e_{\vec{p}}^{h,n}\|^{2}_{L^{2}(\Omega)}.\] Likewise, based on the Cauchy-Schwarz inequality, we derive \[\Delta t\sum_{n=1}^{l}\left(\frac{1}{\lambda}\vec{\alpha}\vec{\alpha}^{\mathsf{T}}(\mathrm{d}_{t}\Pi_{h}^{M}\vec{p}^{n}-\dot{\vec{p}}^{n}),e_{\vec{p}}^{h,n}\right)\] (A.32) \[\leq\frac{\|\vec{\alpha}\vec{\alpha}^{\mathsf{T}}\|^{2}}{\lambda^{2}\epsilon_{5}}\left(Ch^{4}\|\dot{\vec{p}}\|^{2}_{L^{2}(0,t_{l};H^{2}(\Omega))}+\frac{(\Delta t)^{2}}{3}\|\ddot{\vec{p}}\|^{2}_{L^{2}(0,t_{l};L^{2}(\Omega))}\right)+\frac{\Delta t\epsilon_{5}}{2}\sum_{n=1}^{l}\|\nabla e_{\vec{p}}^{h,n}\|^{2}_{L^{2}(\Omega)}.\] \[\Delta t\sum_{n=1}^{l}\frac{1}{\lambda}(\vec{\alpha}(\mathrm{d}_{t}\Pi_{h}^{W}\xi^{n}-\dot{\xi}^{n}),e_{\vec{p}}^{h,n})\] (A.33) \[\leq\frac{\|\vec{\alpha}\|^{2}}{\lambda^{2}\epsilon_{6}}\bigg{(}Ch^{4}\|\dot{\xi}\|^{2}_{L^{2}(0,t_{l};H^{2}(\Omega))}+Ch^{4}\|\dot{\mathbf{u}}\|^{2}_{L^{2}(0,t_{l};H^{3}(\Omega))}+\frac{(\Delta t)^{2}}{3}\|\ddot{\xi}\|^{2}_{L^{2}(0,t_{l};L^{2}(\Omega))}\bigg{)}\] \[+\frac{\Delta t\epsilon_{6}}{2}\sum_{n=1}^{l}\|\nabla e_{\vec{p}}^{h,n}\|^{2}_{L^{2}(\Omega)}.\] \[\Delta t\sum_{n=1}^{l}\left(Be_{\vec{p}}^{I,n},e_{\vec{p}}^{h,n}\right) \leq\frac{C\,l\|B\|^{2}h^{4}}{2\epsilon_{7}}\Delta t\|\vec{p}\|_{L^{\infty}(0,t_{l};H^{2}(\Omega))}^{2}+\frac{\Delta t\epsilon_{7}}{2}\sum_{n=1}^{l}\|\nabla e_{\vec{p}}^{h,n}\|_{L^{2}(\Omega)}^{2}.\] (A.34) By using the discrete inf-sup condition, we see that \[\|e_{\xi}^{h,n}\|_{L^{2}(\Omega)}\leq\frac{1}{\beta_{0}^{*}}\sup_{\boldsymbol{v}_{h}\in\boldsymbol{V}_{h}}\frac{(e_{\xi}^{h,n},\operatorname{div}\boldsymbol{v}_{h})}{\|\boldsymbol{v}_{h}\|_{H^{1}(\Omega)}}=\frac{1}{\beta_{0}^{*}}\sup_{\boldsymbol{v}_{h}\in\boldsymbol{V}_{h}}\frac{2\mu(\epsilon(e_{\boldsymbol{u}}^{h,n}),\epsilon(\boldsymbol{v}_{h}))}{\|\boldsymbol{v}_{h}\|_{H^{1}(\Omega)}}\leq\frac{2\mu}{\beta_{0}^{*}}\|\epsilon(e_{\boldsymbol{u}}^{h,n})\|_{L^{2}(\Omega)}.\] So we can choose \(\epsilon_{i},i\in\{1,2,3\}\), small enough such that \[\frac{\epsilon_{i}\Delta t}{2}\sum_{n=1}^{l}\|e_{\xi}^{h,n}\|_{L^{2}(\Omega)}^{2}\leq\frac{\Delta t\mu}{3}\sum_{n=1}^{l}\|\epsilon(e_{\boldsymbol{u}}^{h,n})\|_{L^{2}(\Omega)}^{2}.\] (A.35) Similarly, select \(\epsilon_{i},i\in\{4,5,6,7\}\), small enough to guarantee \[\frac{\Delta t\epsilon_{i}}{2}\sum_{n=1}^{l}\|\nabla e_{\vec{p}}^{h,n}\|_{L^{2}(\Omega)}^{2}\leq\frac{\Delta t}{8}\sum_{n=1}^{l}\|\hat{K}\nabla e_{\vec{p}}^{h,n}\|_{L^{2}(\Omega)}^{2}.\] (A.36) Combining (A.13), (A.28)-(A.36) and the discrete Gronwall inequality, we obtain (A.27).
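As a quick sanity check on Lemma A.3, the following sketch verifies the bound (A.26) numerically for sequences constructed so that the hypothesis (A.25) holds with equality at every index; the sequence sizes and random distributions are arbitrary test choices.

```python
import math
import random

random.seed(1)
tau, B, N = 0.05, 1.0, 40
b = [random.uniform(0.0, 0.01) for _ in range(N)]
c = [random.uniform(0.0, 1.0) for _ in range(N)]
gamma = [random.uniform(0.0, 1.0) for _ in range(N)]  # tau*gamma_j < 1

# Build a_n so that (A.25) holds with equality at every n; note that a_n
# also appears on the right-hand side, hence the factor 1/(1 - tau*gamma_n).
a = []
for n in range(N):
    rhs = (B + tau * sum(c[: n + 1])
           + tau * sum(g * x for g, x in zip(gamma[:n], a))
           - sum(b[: n + 1]))
    a.append(rhs / (1.0 - tau * gamma[n]))

# Verify the conclusion (A.26), with C = exp(tau sum gamma_j/(1 - tau gamma_j)).
for n in range(N):
    C = math.exp(sum(tau * g / (1.0 - tau * g) for g in gamma[: n + 1]))
    assert a[n] + sum(b[: n + 1]) <= C * (B + tau * sum(c[: n + 1])) + 1e-9
print("Lemma A.3's bound (A.26) verified for the generated sequences")
```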
**Theorem A.5**.: _Let \(\{(\mathbf{u}_{h}^{n},\xi_{h}^{n},\vec{p}_{h}^{n})\}\) be the solution of problem (6a)-(6c), then the following error estimate holds_ \[\mu\|\epsilon(e_{\mathbf{u}}^{l})\|_{L^{2}(\Omega)}^{2}+\frac{1}{2\lambda}\|\vec{\alpha}^{\mathsf{T}}e_{\vec{p}}^{l}\|_{L^{2}(\Omega)}^{2}+\frac{1}{2}\|\hat{S}e_{\vec{p}}^{l}\|_{L^{2}(\Omega)}^{2} \leq C_{1}(\Delta t)^{2}+C_{2}h^{4}+C_{3}\Delta t\,h^{4},\] (A.37) \[\Delta t\sum_{n=1}^{l}\|\hat{K}\nabla e_{\vec{p}}^{n}\|_{L^{2}(\Omega)}^{2} \leq C_{1}(\Delta t)^{2}+C_{2}h^{4}+C_{3}\Delta t\,h^{2},\] (A.38) \[\|e_{\mathbf{u}}^{l}\|_{H^{1}(\Omega)}^{2} \leq C_{1}(\Delta t)^{2}+C_{2}h^{4}+C_{3}\Delta t\,h^{4},\] (A.39) \[\|e_{\xi}^{l}\|_{L^{2}(\Omega)}^{2} \leq C_{1}(\Delta t)^{2}+C_{2}h^{4}+C_{3}\Delta t\,h^{4},\] (A.40) _where_ \[C_{1}=C_{1}\big{(}\|\ddot{\mathbf{u}}\|_{L^{2}(0,t_{l};H^{1}(\Omega))},\|\ddot{\xi}\|_{L^{2}(0,t_{l};L^{2}(\Omega))},\|\ddot{\vec{p}}\|_{L^{2}(0,t_{l};L^{2}(\Omega))}\big{)},\] \[C_{2}=C_{2}\big{(}\|\mathbf{u}\|_{L^{\infty}(0,t_{l};H^{3}(\Omega))},\|\xi\|_{L^{\infty}(0,t_{l};H^{2}(\Omega))},\|\vec{p}\|_{L^{\infty}(0,t_{l};H^{2}(\Omega))},\|\dot{\mathbf{u}}\|_{L^{2}(0,t_{l};H^{3}(\Omega))},\] \[\|\dot{\xi}\|_{L^{2}(0,t_{l};H^{2}(\Omega))},\|\dot{\vec{p}}\|_{L^{2}(0,t_{l};H^{2}(\Omega))}\big{)},\] \[C_{3}=C_{3}\big{(}\|\vec{p}\|_{L^{\infty}(0,t_{l};H^{2}(\Omega))}\big{)}.\] **Proof**. By using the triangle inequality and the discrete inf-sup condition, we have \[\|\vec{\alpha}^{\mathsf{T}}e^{h,l}_{\vec{p}}\|_{L^{2}(\Omega)} \leq\|\vec{\alpha}^{\mathsf{T}}e^{h,l}_{\vec{p}}-e^{h,l}_{\xi}\|_{L^{2}(\Omega)}+\|e^{h,l}_{\xi}\|_{L^{2}(\Omega)}\] (A.41) \[\leq\|\vec{\alpha}^{\mathsf{T}}e^{h,l}_{\vec{p}}-e^{h,l}_{\xi}\|_{L^{2}(\Omega)}+\frac{2\mu}{\beta_{0}^{*}}\|\epsilon(e^{h,l}_{\mathbf{u}})\|_{L^{2}(\Omega)}.\] Combining (A.27), (A.41) and (A.10)-(A.12), we obtain (A.37), (A.38) and (A.40). According to the first Korn's inequality and (A.37), we derive (A.39).
2308.01668
Equations of the multi-Rees algebra of fattened coordinate subspaces
In this paper we describe the equations defining the multi-Rees algebra $k[x_1,\dots,x_n][I_1^{a_1}t_1,\dots,I_r^{a_r}t_r]$, where the ideals are generated by subsets of $x_1,\dots,x_n$. We also show that a family of binomials whose leading terms are squarefree forms a Gr\"{o}bner basis for the defining equations with lexicographic order. We show that if we remove the binomials that include $x$'s, then the remaining binomials form a Gr\"{o}bner basis for the toric ideal associated to the multi-fiber ring. However, binomials including $x$'s in the Gr\"{o}bner basis of the defining equations of the multi-Rees algebra are not necessarily defining equations of the corresponding symmetric algebra. Despite this fact, we show that this family of ideals is of multi-fiber type.
Babak Jabbar Nezhad
2023-08-03T10:13:03Z
http://arxiv.org/abs/2308.01668v1
# Equations of the multi-Rees algebra of fattened coordinate subspaces ###### Abstract. In this paper we describe the equations defining the multi-Rees algebra \(k[x_{1},\ldots,x_{n}][I_{1}^{a_{1}}t_{1},\ldots,I_{r}^{a_{r}}t_{r}]\), where the ideals are generated by subsets of \(x_{1},\ldots,x_{n}\). We also show that a family of binomials whose leading terms are squarefree forms a Grobner basis for the defining equations with lexicographic order. We show that if we remove the binomials that include \(x\)'s, then the remaining binomials form a Grobner basis for the toric ideal associated to the multi-fiber ring. However, binomials including \(x\)'s in the Grobner basis of the defining equations of the multi-Rees algebra are not necessarily defining equations of the corresponding symmetric algebra. Despite this fact, we show that this family of ideals is of multi-fiber type. Key words and phrases: Grobner bases, multi-Rees algebra, toric ring, multi-fiber ring 2010 Mathematics Subject Classification: Primary 13A30, 13P10, 05E40 Babak Jabbar Nezhad has also published under the name Babak Jabarnejad [10]. ## 1. Introduction Let \(R\) be a Noetherian ring, and let \(s_{1},\ldots,s_{n}\) be generators of the ideal \(I\). We define the homomorphism \(\phi\) from the polynomial ring \(S=R[T_{1},\ldots,T_{n}]\) to the Rees algebra \(R[It]\) by sending \(T_{i}\) to \(s_{i}t\). Then \(R[It]\cong S/\ker(\phi)\). The generating set of \(\ker(\phi)\) is referred to as the defining equations of the Rees algebra \(R[It]\). Finding these generating sets is a difficult problem which remains open for most classes of ideals. Some papers about this problem are [25], [24], [13], [14], [11]. More generally, given any ideals \(I_{1},\ldots,I_{r}\) in a ring \(R\), one would like to describe the equations of the multi-Rees algebra \(R[I_{1}t_{1},I_{2}t_{2},\ldots,I_{r}t_{r}]\). Indeed, the multi-Rees algebra in question is simply the Rees algebra of the module \(I_{1}\oplus I_{2}\oplus\cdots\oplus I_{r}\). However, in our work, we make no serious use of this theory. There is little work on the defining equations of the multi-Rees algebra compared to the ordinary Rees algebra. Another motivation for investigating the multi-Rees algebra is to illustrate the theory of Rees algebras of modules [6], [20]. Some works about the defining equations of the multi-Rees algebra are included in [19], [12], [22], [2], [10], [3], [1], [5]. In this paper we determine the equations of the multi-Rees algebra \(R[I_{1}^{a_{1}}t_{1},I_{2}^{a_{2}}t_{2},\ldots,I_{r}^{a_{r}}t_{r}]\), where \(R=k[x_{1},\ldots,x_{n}]\) (\(k\) a field) and the ideals \(I_{i}\) are generated by subsets of \(x_{1},\ldots,x_{n}\). We present the concept of binary quasi-minors and we show that some explicit binary quasi-minors, whose leading terms are squarefree, form a Grobner basis with lexicographic order for the defining equations. The degree of these binomials is at most \(r+1\). Also, we show that if we remove the binary quasi-minors that include \(x\)'s from this Grobner basis, then the rest form a Grobner basis for the toric ideal of the multi-fiber ring \(k[I_{1}^{a_{1}}t_{1},I_{2}^{a_{2}}t_{2},\ldots,I_{r}^{a_{r}}t_{r}]\). We show that in general, if we add equations of the symmetric algebra to a Grobner basis of the toric ideal associated to the toric ring \(k[I_{1}^{a_{1}}t_{1},I_{2}^{a_{2}}t_{2},\ldots,I_{r}^{a_{r}}t_{r}]\), then they don't necessarily form a Grobner basis for the defining equations of the multi-Rees algebra.
That means some of the binary quasi-minors including \(x\)'s in the Grobner basis of the defining equations of the multi-Rees algebra are not defining equations of the symmetric algebra. This is shown in Example 4.11. Note that the ideals under discussion individually satisfy the \(l\)-exchange property (see [8]) with any monomial order. We know that in the case of the Rees algebra, when a monomial ideal \(I\) is generated by monomials of one degree and satisfies the \(l\)-exchange property, a Grobner basis of the defining equations of the Rees algebra \(R[It]\) is formed by a Grobner basis of the defining equations of the toric ideal of the toric ring \(k[I]\) plus some equations of the symmetric algebra [8, Theorem 5.1]. However, when it comes to just the defining equations, we show that if we add equations of the symmetric algebra to the defining equations of the toric ideal associated to the multi-fiber ring, then we obtain the defining equations of the multi-Rees algebra. Hence this family of ideals is of multi-fiber type. Notice that when the powers of all the ideals are equal to \(1\), the multi-fiber ring of these ideals is just the toric ring of edge ideals, which is a well-known concept. Defining equations of the toric ideal of edge ideals are already studied in many papers, including [26], [4], [15], [16], [17], [18], [21]. We can summarize the main result of this paper as follows. _Theorem A_.: Let \(R=k[x_{1},\ldots,x_{n}]\) and suppose that the ideals \(I_{i}\) are generated by subsets of \(x_{1},\ldots,x_{n}\). Then there is a quasi-matrix \(D\), whose entries are certain indeterminates, such that the multi-Rees algebra \(R[I_{1}^{a_{1}}t_{1},\ldots,I_{r}^{a_{r}}t_{r}]\) is defined by the ideal generated by all binary quasi-minors of \([\underline{x}|D]\). Also, an explicit subset of these binary quasi-minors forms a Grobner basis with lexicographic order whose leading terms are squarefree. As this concept could be seen as a specialization of the toric ring of edge ideals, one may wonder whether we can describe the defining equations of the multi-Rees algebra using graph theory. We define a bipartite graph associated to the multi-Rees algebra of these ideals and we describe the defining equations using cycles of this graph. To prove the main result, first, we use a result in [5] to show one case. Next, for the more general case, we build a directed bipartite graph (this graph is different from the graph that we use to describe equations, but both graphs come from the same concept), we find cycles of this graph, and then we prove the theorem. Actually, we can associate a bipartite graph to the multi-Rees algebra \(k[x_{1},\ldots,x_{n}][I_{1}^{a_{1}}t_{1},\ldots,I_{r}^{a_{r}}t_{r}]\), discussed in this paper, as follows: one partition of vertices is formed by \(t_{1},\ldots,t_{r}\). Another partition is formed by \(x_{1},\ldots,x_{n}\). We attach each \(x_{i}\) that divides a generator of \(I_{j}^{a_{j}}\) to \(t_{j}\). When this graph is chordal (that means every cycle of length greater than or equal to \(6\) has a chord), then the Grobner basis for the defining equations of the multi-Rees algebra is as described in [5]. In the present paper we describe the Grobner basis for all cases, including non-chordal cases. _Example 1.1_.: Consider the polynomial ring \(R=k[x_{1},x_{2},x_{3}]\). Let \(I_{1}=\langle x_{1},x_{2}\rangle\), \(I_{2}=\langle x_{2},x_{3}\rangle\), \(I_{3}=\langle x_{1},x_{3}\rangle\). We consider the multi-Rees algebra \(R[I_{1}^{2}t_{1},I_{2}^{2}t_{2},I_{3}^{2}t_{3}]\).
Then the incidence bipartite graph corresponding to this multi-Rees algebra is shown in Figure 1.

Figure 1. The bipartite incidence graph for Example 1.1

As we see, this graph is non-chordal. Also, in [5] it is proved in more generality that if the incidence bipartite graph is chordal, then the multi-Rees algebra is Koszul. In the last section we show that, in our case, if the incidence bipartite graph is non-chordal, then the multi-Rees algebra is not Koszul. We also pose a question for interested readers.
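The non-chordality claimed for Figure 1 can be checked directly: a chord of the hexagon \(x_{1}t_{1}x_{2}t_{2}x_{3}t_{3}\) would have to join \(x_{1}\) to \(t_{2}\), \(x_{2}\) to \(t_{3}\), or \(x_{3}\) to \(t_{1}\), and none of these variables divides a generator of the corresponding ideal. The following is a minimal computational sketch (our own code, not from the paper; `networkx` is used only to store the graph):

```python
import networkx as nx

# Incidence bipartite graph of Example 1.1: attach x_i to t_j
# whenever x_i divides a generator of I_j^2.
G = nx.Graph()
G.add_edges_from([('x1', 't1'), ('x2', 't1'),   # I_1^2 = <x1, x2>^2
                  ('x2', 't2'), ('x3', 't2'),   # I_2^2 = <x2, x3>^2
                  ('x1', 't3'), ('x3', 't3')])  # I_3^2 = <x1, x3>^2

cycle = ['x1', 't1', 'x2', 't2', 'x3', 't3']    # a 6-cycle in G
n = len(cycle)
assert all(G.has_edge(cycle[i], cycle[(i + 1) % n]) for i in range(n))

# A chord would be an edge between two non-consecutive cycle vertices.
chords = [(cycle[i], cycle[j]) for i in range(n) for j in range(i + 2, n)
          if (i, j) != (0, n - 1) and G.has_edge(cycle[i], cycle[j])]
print(chords)  # [] -- the 6-cycle is chordless, so the graph is non-chordal
```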
## 2. Background

This section is taken from [5], as we need these results to prove our main result. Let \(G\) be a finite collection of monomials of positive degree in the polynomial ring \(k[x_{1},\ldots,x_{n}]\) (\(k\) is a field). We denote \(\mathbf{x}=x_{1},\ldots,x_{n}\). Let \(S\) be the polynomial ring \(S:=k[T_{m}:m\in G]\). We define the toric map associated to \(G\) as the map \(\phi_{G}:S\to k[\mathbf{x}]\), where \(\phi_{G}(T_{m})=m\). We denote by \(J_{G}\) the kernel of \(\phi_{G}\); this is the toric ideal associated to \(G\). Given \(\boldsymbol{\gamma}=(\gamma_{m})\in\mathbb{Z}_{\geq 0}^{G}\), we write \(\mathbf{T}^{\boldsymbol{\gamma}}\) for \(\prod_{m\in G}T_{m}^{\gamma_{m}}\), where \(T_{m}\) is a variable of the polynomial ring \(S=k[T_{m}:m\in G]\). If \(\mu\in k[G]\) is a monomial, then

\[S_{\mu}=\operatorname{span}_{k}\{\mathbf{T}^{\boldsymbol{\gamma}}:\phi_{G}(\mathbf{T}^{\boldsymbol{\gamma}})=\mu\}.\]

We now introduce a combinatorial device from [5], which we call the _fiber graph_ of the toric map \(\phi_{G}\) at the monomial \(\mu\).

_Definition 2.1_.: [5] Let \(G\) be a finite collection of monomials of positive degree in the polynomial ring \(k[\mathbf{x}]\), \(J_{G}\) the toric ideal of \(G\), and \(\mathcal{B}\subset J_{G}\) a finite collection of binomials. The fiber graph of \(\phi_{G}\) at \(\mu\) with respect to \(\mathcal{B}\) is the graph \(\Gamma_{\mu,\mathcal{B}}\) whose vertices are monomials \(\mathbf{T}^{\boldsymbol{\gamma}}\in S_{\mu}\), with an edge connecting \(\mathbf{T}^{\boldsymbol{\gamma}},\mathbf{T}^{\boldsymbol{\gamma}^{\prime}}\in S_{\mu}\) if \(\mathbf{T}^{\boldsymbol{\gamma}}-\mathbf{T}^{\boldsymbol{\gamma}^{\prime}}\) is a multiple of a binomial from \(\mathcal{B}\). Moreover, if \(\prec\) is a monomial order on \(S\), then \(\vec{\Gamma}_{\mu,\mathcal{B}}\) is the graph \(\Gamma_{\mu,\mathcal{B}}\) with edges directed from the larger monomial to the smaller. That is, if \(\mathbf{T}^{\boldsymbol{\gamma}},\mathbf{T}^{\boldsymbol{\gamma}^{\prime}}\in S_{\mu}\) are connected by an edge in \(\Gamma_{\mu,\mathcal{B}}\) and \(\mathbf{T}^{\boldsymbol{\gamma}^{\prime}}\prec\mathbf{T}^{\boldsymbol{\gamma}}\), then we get the directed edge \(\mathbf{T}^{\boldsymbol{\gamma}}\to\mathbf{T}^{\boldsymbol{\gamma}^{\prime}}\).

_Remark 2.2_.: We suppress the collection \(G\) of monomials of \(k[\mathbf{x}]\) in the notation for \(\Gamma_{\mu,\mathcal{B}}\), assuming that the underlying toric map is understood from context.

_Proposition 2.3_.: [5] Let \(\phi_{G}:S\to k[\mathbf{x}]\) be a toric map and \(\mathcal{B}\) a collection of binomials from the toric ideal \(J=J_{G}\). If \(S\) is equipped with a monomial order \(\prec\), then the following are equivalent: 1. The binomials in \(\mathcal{B}\) form a Gröbner basis for \(J_{G}\) under \(\prec\). 2. Every nonempty graph \(\vec{\Gamma}_{\mu,\mathcal{B}}\) has a unique sink.

Let the ideal \(J\) be generated by a subset of \(\mathbf{x}\). Let \(I=J^{m}\), let \(\mathbf{T}=\{T_{m}:m\text{ a minimal generator of }I\}\), and let \(\phi:k[\mathbf{T}]\to k[\mathbf{x}]\) be defined by \(\phi(T_{m})=m\). Order the variables of \(k[\mathbf{T}]\) by \(T_{m}\succ T_{m^{\prime}}\) if \(m\succ_{\text{grevlex}}m^{\prime}\). Let \(\prec_{\text{lex}}\) be the lexicographic monomial order on \(k[\mathbf{T}]\) with respect to this ordering of the variables. Set

\[\mathcal{B}=\{T_{m}T_{n}-T_{\frac{x_{i}}{x_{j}}m}T_{\frac{x_{j}}{x_{i}}n}:x_{j}\mid m,\,x_{i}\mid n\}.\]

_Theorem 2.4_.: [5] With the assumptions above, \(\mathcal{B}\) is a Gröbner basis for \(J_{k[I]}\) with respect to \(\prec_{\text{lex}}\).

## 3. Binary quasi-minors

_Definition 3.1_.: An \(n\times m\) quasi-matrix over a ring \(R\) is a rectangular array with \(n\) rows and \(m\) columns such that some entries may be empty. A subquasi-matrix is a quasi-matrix that is obtained by deleting some rows, columns, or elements of a quasi-matrix.

_Example 3.2_.:

\[A=\begin{bmatrix}a&&b\\ c&d&\\ e&f&g\end{bmatrix}\]

is a quasi-matrix and \(\begin{bmatrix}a&&b\\ &d&\end{bmatrix}\) is a subquasi-matrix of \(A\).

_Definition 3.3_.: A binary quasi-matrix is a quasi-matrix having exactly two elements in each nonempty row and column.

_Example 3.4_.: All \(3\times 3\) binary quasi-matrices are listed below (the empty positions form the support of a permutation matrix):

\[\begin{bmatrix}&a&b\\ c&&d\\ e&f&\end{bmatrix},\ \begin{bmatrix}a&&b\\ &c&d\\ e&f&\end{bmatrix},\ \begin{bmatrix}a&b&\\ c&&d\\ &e&f\end{bmatrix},\ \begin{bmatrix}&a&b\\ c&d&\\ e&&f\end{bmatrix},\ \begin{bmatrix}a&&b\\ c&d&\\ &e&f\end{bmatrix},\ \begin{bmatrix}a&b&\\ &c&d\\ e&&f\end{bmatrix}\]

Note that a binary quasi-matrix is a square matrix, up to deleting an empty row or column. Since we usually identify a quasi-matrix canonically with the one obtained by deleting any empty row or column, in the sequel we usually consider a binary quasi-matrix as a square matrix.

_Definition 3.5_.: Let \(A=(a_{ij})\) be an \(n\times n\) binary quasi-matrix over a ring \(R\). A binary quasi-determinant of \(A\) is an element

\[a_{1\sigma(1)}a_{2\sigma(2)}\ldots a_{n\sigma(n)}-a_{1\tau(1)}a_{2\tau(2)}\ldots a_{n\tau(n)}\]

where \(\sigma,\tau\) are permutations of \(\{1,2,\ldots,n\}\) such that \(\sigma(l)\neq\tau(l)\) for all \(1\leq l\leq n\). A binary quasi-determinant of a binary subquasi-matrix of \(A\) is called a binary quasi-minor of \(A\). Note that by definition, if \(\delta\) is a binary quasi-determinant of a quasi-matrix, then so is \(-\delta\). In the sequel, we will usually consider a given binary quasi-minor up to sign.

_Remark 3.6_.: (1) Note that the quasi-determinant of a \(2\times 2\) binary quasi-matrix is equal to its determinant, up to sign. Hence all \(2\times 2\) minors (which exist) of a quasi-matrix are binary quasi-minors. (2) Note that a quasi-determinant of a \(3\times 3\) binary quasi-matrix is uniquely determined up to sign. However, in general it is not equal to the determinant, even up to sign, of the matrix obtained by assigning the value zero to all empty positions. (3) For \(n\geq 4\), a quasi-determinant of an \(n\times n\) binary quasi-matrix is not even unique, up to sign. For example, consider the following binary quasi-matrix

\[\begin{bmatrix}a&b&&\\ c&d&&\\ &&e&f\\ &&g&h\end{bmatrix}.\]

Then \(adeh-bcgf\) and \(adgf-bceh\) are both quasi-determinants.

_Notation 3.7_.: If \(A\) is a quasi-matrix with entries in \(R\), then we denote the ideal generated by the binary quasi-minors of \(A\) by \(I_{bin}(A)\).
_Example 3.8_.: Consider the quasi-matrix \(A\) as below:

\[A=\begin{bmatrix}a&b\\ c&d&e\\ f&&g\end{bmatrix},\]

then \(adg-bef\), \(bef-adg\), \(ad-bc\), \(bc-ad\), \(cg-fe\), and \(fe-cg\) are all the binary quasi-minors of \(A\).

The next elementary result shows that the ideal of binary quasi-minors generalizes the classical ideal of \(2\times 2\) minors.

_Proposition 3.9_.: Let \(A\) be a matrix. Then \(I_{bin}(A)=I_{2}(A)\).

Proof.: It is enough to show that every binary quasi-minor of \(A\) is an \(R\)-combination of \(2\times 2\) minors. Let \(\delta=V_{1}V_{2}\ldots V_{n}-W_{1}W_{2}\ldots W_{n}\) be an arbitrary binary quasi-minor. We induct on \(n\geq 2\). Since the result is clear for \(n=2\), we may assume \(n\geq 3\) and that the result holds for binary quasi-minors of size \(<n\). We may assume \(V_{1}\) is in the same row as \(W_{1}\) and \(V_{2}\) is in the same column as \(W_{1}\). Let \(U\) be the entry of \(A\) in the same column as \(V_{1}\) and the same row as \(V_{2}\). Then

\[\delta=\delta-UW_{1}V_{3}\ldots V_{n}+UW_{1}V_{3}\ldots V_{n}=(V_{1}V_{2}-UW_{1})V_{3}\ldots V_{n}+W_{1}(UV_{3}\ldots V_{n}-W_{2}\ldots W_{n}).\]

If \(U\) is not one of the \(W\)'s, then the subquasi-matrix obtained by deleting the first row and the column containing \(W_{1}\) and \(V_{2}\), and adjoining \(U\), is a binary quasi-matrix, with \(UV_{3}\ldots V_{n}-W_{2}\ldots W_{n}\) as an \((n-1)\)-sized binary quasi-minor. On the other hand, if \(U\) is a \(W_{i}\), say \(W_{2}\) (which can only happen if \(n\geq 4\)), then \(UV_{3}\ldots V_{n}-W_{2}\ldots W_{n}=W_{2}(V_{3}\ldots V_{n}-W_{3}\ldots W_{n})\) and \(V_{3}\ldots V_{n}-W_{3}\ldots W_{n}\) is a binary quasi-minor of \(A\) of size \(n-2\). In either case, we are done by induction.
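The rewriting step in the proof above is a purely formal identity, so it can be checked symbolically. The following is a minimal sketch in SymPy (the variable names are ours, chosen only for illustration) for a full \(3\times 3\) matrix, with \(V_{1}=a_{11}\), \(V_{2}=a_{22}\), \(W_{1}=a_{12}\), and \(U=a_{21}\):

```python
from sympy import symbols, expand

# Entries of a 3x3 matrix; only those appearing in the identity are needed.
a11, a12, a21, a22, a23, a31, a33 = symbols('a11 a12 a21 a22 a23 a31 a33')

# Binary quasi-minor delta = V1*V2*V3 - W1*W2*W3 with
# V = (a11, a22, a33), W = (a12, a23, a31), and U = a21.
delta = a11*a22*a33 - a12*a23*a31

# The proof's rewriting: (V1*V2 - U*W1)*V3 + W1*(U*V3 - W2*W3).
rewritten = (a11*a22 - a21*a12)*a33 + a12*(a21*a33 - a23*a31)

print(expand(delta - rewritten))  # 0
```

Both parenthesized factors are honest \(2\times 2\) minors of the matrix (rows \(1,2\) with columns \(1,2\), and rows \(2,3\) with columns \(1,3\)), matching the base case and the induction step of Proposition 3.9.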
## 4. Equations of the multi-Rees algebra

Let \(I_{i}=J_{i}^{a_{i}}\), where the \(a_{i}\)'s are positive integers and the ideals \(J_{i}\) are generated by arbitrary subsets of \(x_{1},\ldots,x_{n}\). In the rest of this paper, by generators of \(J_{i}\) we mean these subsets of \(x_{1},\ldots,x_{n}\). Also, by a minimal generating set of \(I_{i}\) we mean the generators that are created from the mentioned generating set of \(J_{i}\). We denote \(\boldsymbol{a}=a_{1},\ldots,a_{r}\). Let \(\mathcal{I}=\{I_{1},\ldots,I_{r}\}\). If \(\boldsymbol{\alpha}=(\alpha_{1},\ldots,\alpha_{n})\in\mathbb{Z}_{\geq 0}^{n}\), we write \(\mathbf{x}^{\boldsymbol{\alpha}}\) for \(x_{1}^{\alpha_{1}}\cdots x_{n}^{\alpha_{n}}\). We write \(\mathbf{t}\) for the set of variables \(\{t_{1},\ldots,t_{r}\}\). We also write \(k[\mathbf{x},\mathbf{t}]\) for the polynomial ring \(k[\mathbf{x}][t_{1},\ldots,t_{r}]=k[x_{1},\ldots,x_{n},t_{1},\ldots,t_{r}]\). We consider the multi-Rees algebra of \(\mathcal{I}\):

\[k[\mathbf{x}][\mathcal{I}\mathbf{t}]=k[\mathbf{x}][I_{1}t_{1},\ldots,I_{r}t_{r}]=\bigoplus_{b_{1},\ldots,b_{r}\geq 0}I_{1}^{b_{1}}\cdots I_{r}^{b_{r}}t_{1}^{b_{1}}\cdots t_{r}^{b_{r}}.\]

Let \(G_{1},\ldots,G_{r}\) be minimal sets of generators for \(I_{1},\ldots,I_{r}\). Then clearly

\[k[\mathbf{x}][I_{1}t_{1},\ldots,I_{r}t_{r}]=k[x_{1},\ldots,x_{n},\{\mathbf{x}^{\boldsymbol{\alpha}}t_{j}:\mathbf{x}^{\boldsymbol{\alpha}}\in G_{j}\}].\]

We create a variable \(T_{\mathbf{x}^{\boldsymbol{\alpha}}t_{j}}\) for each monomial \(\mathbf{x}^{\boldsymbol{\alpha}}t_{j}\) and write \(\mathbf{T}\) for the set of all such variables. We then define the map \(\phi\) as follows:

\[\phi:S:=k[\mathbf{x},\mathbf{T}]\to k[\mathbf{x},\mathbf{t}],\]

where \(\phi(x_{i})=x_{i}\) for all \(x_{i}\in\mathbf{x}\) and \(\phi(T_{\mathbf{x}^{\boldsymbol{\alpha}}t_{j}})=\mathbf{x}^{\boldsymbol{\alpha}}t_{j}\) for all \(\mathbf{x}^{\boldsymbol{\alpha}}t_{j}\) with \(\mathbf{x}^{\boldsymbol{\alpha}}\in G_{j}\). Clearly this is a toric map as discussed in Section 2. We are concerned primarily with the defining equations of \(k[\mathbf{x}][I_{1}t_{1},\ldots,I_{r}t_{r}]\), that is, the toric ideal \(\ker(\phi)\). Write \(\mathfrak{m}\) for the ideal \(\langle x_{1},\ldots,x_{n}\rangle\subset k[\mathbf{x}]\). Then the multi-fiber ring of \(k[\mathbf{x}][\mathcal{I}\mathbf{t}]\) is \(k[\mathbf{x}][\mathcal{I}\mathbf{t}]/\mathfrak{m}k[\mathbf{x}][\mathcal{I}\mathbf{t}]\). Since in our case the monomial ideals \(\mathcal{I}=\{I_{1},\ldots,I_{r}\}\) are each generated in a single degree, we have an isomorphism

\[k[\mathbf{x}][\mathcal{I}\mathbf{t}]/\mathfrak{m}k[\mathbf{x}][\mathcal{I}\mathbf{t}]\cong k[\mathbf{x}^{\boldsymbol{\alpha}}t_{j}:\mathbf{x}^{\boldsymbol{\alpha}}\in G_{j},1\leq j\leq r].\]

We denote the ring \(k[\mathbf{x}^{\boldsymbol{\alpha}}t_{j}:\mathbf{x}^{\boldsymbol{\alpha}}\in G_{j},1\leq j\leq r]\) by \(k[\mathcal{I}\mathbf{t}]\). We define the map \(\psi\) as follows:

\[\psi:k[\mathbf{T}]\to k[\mathbf{x},\mathbf{t}],\]

where \(\psi(T_{\mathbf{x}^{\boldsymbol{\alpha}}t_{j}})=\mathbf{x}^{\boldsymbol{\alpha}}t_{j}\). If \(\mathcal{I}=\{I\}\) consists of a single monomial ideal generated in a single degree, then

\[k[\mathbf{x}][It]/\mathfrak{m}k[\mathbf{x}][It]\cong k[It]\cong k[I]\]

is the toric ring of \(I\). The following definition can be found in [2].

_Definition 4.1_.: The ideal \(I\) that is generated by elements \(f_{1},\ldots,f_{m}\) of a single degree is said to be of fiber type if the defining ideal of the Rees algebra \(k[\mathbf{x}][It]\) is generated by polynomials that either (1) are linear in the indeterminates representing the generators \(f_{i}t\) of the Rees algebra (and therefore are relations of the symmetric algebra), or (2) belong to the defining ideal of \(k[I]\).

One can immediately generalize this notion and speak of a family \(I_{1},\ldots,I_{r}\) of multi-fiber type. We will see that the family of ideals \(I_{1},\ldots,I_{r}\) in our case is of multi-fiber type.

_Remark 4.2_.: In [8, Definition 4.1], the authors define the \(l\)-exchange property for a monomial ideal whose generators have a single degree. They also show in [8, Theorem 5.1] that if an ideal \(I\) satisfies the \(l\)-exchange property, then a Gröbner basis of the defining equations of the Rees algebra \(k[\mathbf{x}][It]\) is formed by a Gröbner basis of the defining equations of \(k[I]\) together with some of the relations of the symmetric algebra. One sees easily that in our case each of the ideals \(I_{1},\ldots,I_{r}\) satisfies the \(l\)-exchange property with respect to any monomial order. We will see in Example 4.11 that, even under the conditions mentioned in [8, Theorem 5.1], this theorem does not hold in our (multi-Rees) case. That means the Gröbner basis will have some polynomials involving \(x\)'s which are not relations of the symmetric algebra. Even so, as we mentioned, this family of ideals is of multi-fiber type.

_Convention 4.3_ (Monomial order for \(k[\mathbf{x},\mathbf{T}]\)).: We fix the following monomial order on \(k[\mathbf{x},\mathbf{T}]\). We order the variables of \(k[\mathbf{x},\mathbf{T}]\) by \(T_{mt_{i}}\succ T_{nt_{j}}\) if \(i>j\), or \(i=j\) and \(m\succ_{\text{grevlex}}n\).
Furthermore we make \(T_{mt_{i}}\succ x_{j}\) for any variables \(T_{mt_{i}}\in\mathbf{T}\) and \(x_{j}\in\mathbf{x}\). On top of this ordering of the variables of \(k[\mathbf{x},\mathbf{T}]\) we put the lexicographic order.

_Definition 4.4_.: For a fixed \(l\), we define the matrix \(B_{a_{l}}\), whose first-row entries are \(T_{x_{1}^{p_{1}}\ldots x_{n}^{p_{n}}t_{l}}\), where \(x_{1}^{p_{1}}\ldots x_{n}^{p_{n}}\) are generators of \(\mathfrak{m}^{a_{l}}\) with \(p_{1}\geq 1\), and smaller elements are put on the left. For each \(T_{x_{1}^{p_{1}}\ldots x_{n}^{p_{n}}t_{l}}\) in the first row, the entry under this element in the \(v\)-th row is

\[T_{x_{1}^{p_{1}-1}\ldots x_{v}^{p_{v}+1}\ldots x_{n}^{p_{n}}t_{l}}=T_{\frac{x_{v}}{x_{1}}x_{1}^{p_{1}}\ldots x_{v}^{p_{v}}\ldots x_{n}^{p_{n}}t_{l}},\ 1\leq v\leq n.\]

We see that if \(T_{mt_{l}}\) is in column \(v\) and row \(i\) of \(B_{a_{l}}\), then the entry in the same column and row \(j\) is \(T_{\frac{x_{j}}{x_{i}}mt_{l}}\). Also, we see that if all the distinct factors of the monomial \(m\) of degree \(a_{l}\) are \(x_{i_{1}},\ldots,x_{i_{v}}\), then \(T_{mt_{l}}\) appears in rows \(i_{1},\ldots,i_{v}\) of \(B_{a_{l}}\).

_Definition 4.5_.: Let \(J_{l}=\langle x_{i_{1}},\ldots,x_{i_{p}}\rangle\). We define the quasi-matrix \(D_{a_{l}}\) to be the subquasi-matrix of \(B_{a_{l}}\) obtained by choosing from row \(i_{1}\) the entries \(T_{mt_{l}}\) such that \(m\) is a monomial in \(x_{i_{1}},\ldots,x_{i_{p}}\), and then, under these elements, the entries in rows \(i_{2},\ldots,i_{p}\). This is nothing but choosing all \(T_{mt_{l}}\) such that \(m\) is a monomial in \(x_{i_{1}},\ldots,x_{i_{p}}\). We define the subquasi-matrices \(D_{\boldsymbol{a}}\coloneqq(D_{a_{1}}|D_{a_{2}}|\ldots|D_{a_{r}})\) and \(C_{\boldsymbol{a}}\coloneqq(\mathbf{x}|D_{\boldsymbol{a}})\).
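As a small illustration of Definitions 4.4 and 4.5 (our own example, not from the original text): take \(n=3\), \(J_{l}=\langle x_{1},x_{2}\rangle\) and \(a_{l}=2\). Then

\[B_{a_{l}}=\begin{bmatrix}T_{x_{1}x_{3}t_{l}}&T_{x_{1}x_{2}t_{l}}&T_{x_{1}^{2}t_{l}}\\ T_{x_{2}x_{3}t_{l}}&T_{x_{2}^{2}t_{l}}&T_{x_{1}x_{2}t_{l}}\\ T_{x_{3}^{2}t_{l}}&T_{x_{2}x_{3}t_{l}}&T_{x_{1}x_{3}t_{l}}\end{bmatrix},\qquad D_{a_{l}}=\begin{bmatrix}T_{x_{1}x_{2}t_{l}}&T_{x_{1}^{2}t_{l}}\\ T_{x_{2}^{2}t_{l}}&T_{x_{1}x_{2}t_{l}}\end{bmatrix},\]

where \(D_{a_{l}}\) keeps exactly the variables \(T_{mt_{l}}\) with \(m\) a monomial in \(x_{1},x_{2}\). This block is precisely the \(t_{1}\)-block appearing in Example 4.11 below.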
_Lemma 4.6_.: \(I_{bin}(C_{\boldsymbol{a}})\subseteq\ker(\phi)\).

Proof.: Let \(\alpha\) be a binary quasi-minor of \(C_{\boldsymbol{a}}\). It is enough to show that \(\phi(\alpha)=0\). We prove the case in which the binary quasi-minor involves \(x\); the proof in the other case is similar. Suppose \(i_{1},\ldots,i_{v}\) are the rows in which factors of \(\alpha\) appear. Without loss of generality we may assume that the \(x\)'s appear in rows \(i_{1}\) and \(i_{v}\). Let one term of \(\alpha\) be

\[x_{i_{1}}T_{m_{1}t_{l_{1}}}T_{m_{2}t_{l_{2}}}\ldots T_{m_{v-1}t_{l_{v-1}}},\]

where \(T_{m_{s}t_{l_{s}}}\) is in row \(i_{s+1}\), and the \(l_{s}\) are not necessarily distinct. Then without loss of generality we may assume that the other term of \(\alpha\) is

\[x_{i_{v}}T_{\frac{x_{i_{1}}}{x_{i_{2}}}m_{1}t_{l_{1}}}T_{\frac{x_{i_{2}}}{x_{i_{3}}}m_{2}t_{l_{2}}}\ldots T_{\frac{x_{i_{v-1}}}{x_{i_{v}}}m_{v-1}t_{l_{v-1}}}.\]

Then clearly \(\phi(\alpha)=0\). This completes the proof.

_Theorem 4.7_.: Suppose that the ideals \(J_{i}\) are generated by subsets of \(x_{1},\ldots,x_{n}\). Let \(I_{i}=J_{i}^{a_{i}}\) and suppose \(\mathcal{I}=\{I_{1},\ldots,I_{r}\}\). Then the multi-Rees algebra \(k[\mathbf{x}][\mathcal{I}\mathbf{t}]\) is normal and Cohen-Macaulay. Explicitly, a Gröbner basis for \(\ker(\phi)\), with respect to the lexicographic order, is given by binary quasi-minors of \(C_{\boldsymbol{a}}\) whose leading terms are squarefree, as follows: (1) some of them are \(2\times 2\) minors of \((\mathbf{x}|D_{a_{l}})\); (2) the others are binary quasi-minors such that each term has at most one entry from \(\mathbf{x}\) and from each \(D_{a_{l}}\).

Proof.: We use Proposition 2.3, with \(\mathcal{B}\) the indicated set. Under the given monomial order, we seek to show that the directed graphs \(\vec{\Gamma}_{\mu,\mathcal{B}}\) have a unique sink. Let \(M^{\prime},N^{\prime}\in S_{\mu}\). Let \(M=\frac{M^{\prime}}{\gcd(M^{\prime},N^{\prime})}\) and \(N=\frac{N^{\prime}}{\gcd(M^{\prime},N^{\prime})}\). Hence \(\phi(M)=\phi(N)\) and \(M,N\in S_{\nu}\) for some monomial \(\nu\). Therefore, to prove the theorem it is enough to show that \(M\) or \(N\) is not a sink, and to do this we build the directed graph \(\vec{\Theta}_{M}\) (resp. \(\vec{\Theta}_{N}\)) associated to \(M\) (resp. \(N\)). \(\vec{\Theta}_{M}\) and \(\vec{\Theta}_{N}\) are bipartite graphs with the same vertices. One partition for both of them is a subset of the \(x\)'s and the other partition is a subset of the \(t_{j}\)'s (\(0\leq j\leq r\)); note that here we create an extra variable \(t_{0}\). For each \(1\leq j\leq r\) (not necessarily every such \(j\) will be considered), we let \(M_{j}\) (resp. \(N_{j}\)) be the product of all factors of \(M\) (resp. \(N\)) involving \(t_{j}\). If \(\phi(M_{j})=\phi(N_{j})\), then by Theorem 2.4 and Proposition 2.3 we see that either \(M_{j}\) or \(N_{j}\) is not a sink, so that \(M\) or \(N\) is not a sink. We may therefore assume that for every such \(j\), \(\phi(M_{j})\neq\phi(N_{j})\). Since the number of factors involving \(t_{j}\) in \(M_{j}\) and in \(N_{j}\) is the same, there is an \(x_{\alpha}\) (resp. \(x_{\beta}\)) (\(x_{\alpha}\neq x_{\beta}\)) in \(\phi(M_{j})\) (resp. \(\phi(N_{j})\)) whose power is greater in \(\phi(M_{j})\) (resp. \(\phi(N_{j})\)). Hence there is a \(T_{mt_{j}}\) in \(M_{j}\) where \(x_{\alpha}\) divides \(m\); also, there is a \(T_{m^{\prime}t_{j}}\) in \(N_{j}\) where \(x_{\beta}\) divides \(m^{\prime}\). Back to building the graphs: \(x_{\alpha}\), \(x_{\beta}\), and \(t_{j}\) are vertices of both graphs. In the graph \(\vec{\Theta}_{M}\) (resp. \(\vec{\Theta}_{N}\)) one directed edge goes from \(x_{\alpha}\) to \(t_{j}\) (resp. from \(t_{j}\) to \(x_{\alpha}\)) and another directed edge goes from \(t_{j}\) to \(x_{\beta}\) (resp. from \(x_{\beta}\) to \(t_{j}\)). On the other hand, since \(\phi(M)=\phi(N)\), there is a factor of \(x_{\beta}\) (resp. \(x_{\alpha}\)) in \(\frac{\phi(M)}{\phi(M_{j})}\) (resp. \(\frac{\phi(N)}{\phi(N_{j})}\)). Then either \(x_{\beta}\) divides \(M\) or there is a \(T_{nt_{j^{\prime}}}\) in \(M\) where \(x_{\beta}\) divides \(n\). In the former case, in the graph \(\vec{\Theta}_{M}\) (resp. \(\vec{\Theta}_{N}\)) one directed edge goes from \(x_{\beta}\) to \(t_{0}\) (resp. from \(t_{0}\) to \(x_{\beta}\)). In the latter case, in the graph \(\vec{\Theta}_{M}\) (resp. \(\vec{\Theta}_{N}\)) one directed edge goes from \(x_{\beta}\) to \(t_{j^{\prime}}\) (resp. from \(t_{j^{\prime}}\) to \(x_{\beta}\)). Also, either \(x_{\alpha}\) divides \(N\) or there is a \(T_{n^{\prime}t_{j^{\prime\prime}}}\) in \(N\) where \(x_{\alpha}\) divides \(n^{\prime}\) (\(j^{\prime}\) and \(j^{\prime\prime}\) are not necessarily different). If \(x_{\alpha}\) divides \(N\), then in the graph \(\vec{\Theta}_{M}\) (resp. \(\vec{\Theta}_{N}\)) one directed edge goes from \(t_{0}\) to \(x_{\alpha}\) (resp. from \(x_{\alpha}\) to \(t_{0}\)). If there is a \(T_{n^{\prime}t_{j^{\prime\prime}}}\) in \(N\) where \(x_{\alpha}\) divides \(n^{\prime}\), then in the graph \(\vec{\Theta}_{M}\) (resp. \(\vec{\Theta}_{N}\)) one directed edge goes from \(t_{j^{\prime\prime}}\) to \(x_{\alpha}\) (resp.
one directed edge goes from \(x_{\alpha}\) to \(t_{j^{\prime\prime}}\)). Finally, if \(m\) and \(n\) are the \(x\)-monomial parts of \(M\) and \(N\) (in the case that they exist; if one of them exists, then the other also exists), then \(\gcd(m,n)=1\). But the total degrees of \(m\) and \(n\) are the same. Therefore, there are \(x_{\gamma}\) and \(x_{\delta}\) (\(x_{\gamma}\neq x_{\delta}\)) such that \(x_{\gamma}\mid m\), \(x_{\gamma}\nmid n\), \(x_{\delta}\mid n\), and \(x_{\delta}\nmid m\). Hence there is a \(T_{pt_{j^{\prime}}}\) in \(M\) (resp. \(T_{qt_{j^{\prime\prime}}}\) in \(N\)) (\(j^{\prime}\) and \(j^{\prime\prime}\) are not necessarily different) such that \(x_{\delta}\) divides \(p\) (resp. \(x_{\gamma}\) divides \(q\)). In the graph \(\vec{\Theta}_{M}\) (resp. \(\vec{\Theta}_{N}\)) one directed edge goes from \(x_{\gamma}\) to \(t_{0}\) (resp. from \(t_{0}\) to \(x_{\gamma}\)) and another directed edge goes from \(t_{0}\) to \(x_{\delta}\) (resp. from \(x_{\delta}\) to \(t_{0}\)). Also, in the graph \(\vec{\Theta}_{M}\) (resp. \(\vec{\Theta}_{N}\)) one directed edge goes from \(x_{\delta}\) to \(t_{j^{\prime}}\) (resp. from \(t_{j^{\prime}}\) to \(x_{\delta}\)) and another directed edge goes from \(t_{j^{\prime\prime}}\) to \(x_{\gamma}\) (resp. from \(x_{\gamma}\) to \(t_{j^{\prime\prime}}\)). The point of building the graph \(\vec{\Theta}_{M}\) (similarly for \(\vec{\Theta}_{N}\)) is that if in this graph \(x_{i}\) goes to \(t_{j}\) (\(j\neq 0\)), then there is a \(T_{mt_{j}}\) dividing \(M\) such that \(x_{i}\) divides \(m\); if \(x_{i}\) goes to \(t_{0}\), then \(x_{i}\) divides \(M\). Now, if \(t_{j}\) (\(j\neq 0\)) goes to \(x_{i}\), then \(x_{i}\) is a generator of \(J_{j}\). Finally, if \(t_{0}\) goes to \(x_{i}\), then \(x_{i}\) is among the generators of at least one of the \(J_{j}\). As we see, the direction of the edges in the graph \(\vec{\Theta}_{M}\) is opposite to the direction of the edges in the graph \(\vec{\Theta}_{N}\). In the graph \(\vec{\Theta}_{M}\) the in-degree and out-degree of every vertex are at least \(1\) (the same holds in \(\vec{\Theta}_{N}\)). Thus there is a cycle in \(\vec{\Theta}_{M}\), which we denote by \(C_{M}\). If we reverse the direction of the edges in \(C_{M}\), then we obtain a cycle \(C_{N}\) in \(\vec{\Theta}_{N}\). Let \(j_{l}\) be the maximum index among the \(j\) such that \(t_{j}\) is a vertex of \(C_{M}\). Let \(x_{i_{l}},x_{i_{l-1}}\) be in \(C_{M}\) such that \(x_{i_{l}}\) goes to \(t_{j_{l}}\) and \(x_{i_{l-1}}\) leaves \(t_{j_{l}}\). Also, let \(t_{j_{1}},\ldots,t_{j_{l}}\) be the vertices of one partition of \(C_{M}\) and \(x_{i_{1}},\ldots,x_{i_{l}}\) the vertices of the other partition. The cycle \(C_{M}\) is shown in Figure 2.

Figure 2. The directed cycle \(C_{M}\) in Theorem 4.7

If \(x_{i_{l}}\succ_{\text{lex}}x_{i_{l-1}}\), then by considering the properties of the graph \(\vec{\Theta}_{M}\), we have

\[M^{\prime\prime}=\frac{T_{\frac{x_{i_{l}}}{x_{i_{1}}}m_{1}t_{j_{1}}}}{T_{m_{1}t_{j_{1}}}}\dots\frac{T_{\frac{x_{i_{l-2}}}{x_{i_{l-1}}}m_{l-1}t_{j_{l-1}}}}{T_{m_{l-1}t_{j_{l-1}}}}\frac{T_{\frac{x_{i_{l-1}}}{x_{i_{l}}}m_{l}t_{j_{l}}}}{T_{m_{l}t_{j_{l}}}}M;\]

we should just remark that if some \(t_{j_{u}}=t_{0}\) (\(1\leq u\leq l\)), then in the above formula, instead of \(T_{m_{u}t_{j_{u}}}\) we will have \(x_{i_{u}}\), and instead of \(T_{\frac{x_{i_{u-1}}}{x_{i_{u}}}m_{u}t_{j_{u}}}\) we will have \(x_{i_{u-1}}\). Then \(M\succ_{\text{lex}}M^{\prime\prime}\) and \(M\to M^{\prime\prime}\), so that \(M\) is not a sink.
If \(x_{i_{l-1}}\succ_{\text{lex}}x_{i_{l}}\), then by a similar argument we can show that \(N\) is not a sink. Note that

\[T_{m_{l}t_{j_{l}}}T_{m_{l-1}t_{j_{l-1}}}\dots T_{m_{1}t_{j_{1}}}-T_{\frac{x_{i_{l-1}}}{x_{i_{l}}}m_{l}t_{j_{l}}}T_{\frac{x_{i_{l-2}}}{x_{i_{l-1}}}m_{l-1}t_{j_{l-1}}}\dots T_{\frac{x_{i_{l}}}{x_{i_{1}}}m_{1}t_{j_{1}}}\]

is a binary quasi-minor, as \(T_{m_{l-s}t_{j_{l-s}}}\) is in the same row as \(T_{\frac{x_{i_{l-s}}}{x_{i_{l-s+1}}}m_{l-s+1}t_{j_{l-s+1}}}\) (\(1\leq s\leq l-1\)), and \(T_{m_{l}t_{j_{l}}}\) is in the same row as \(T_{\frac{x_{i_{l}}}{x_{i_{1}}}m_{1}t_{j_{1}}}\). Also, the \(T\) variables involving \(t_{j_{l-s}}\) (\(0\leq s\leq l-1\)) are in the same columns. Finally, normality follows from [23, Proposition 13.5, Proposition 13.15], and Cohen-Macaulayness follows from [9, Proposition 1, Theorem 1].

We can prove the following result similarly to Theorem 4.7.

_Theorem 4.8_.: Suppose that the ideals \(J_{i}\) are generated by subsets of \(x_{1},\dots,x_{n}\). Let \(I_{i}=J_{i}^{a_{i}}\) and suppose \(\mathcal{I}=\{I_{1},\dots,I_{r}\}\). Then the multi-fiber ring \(k[\mathcal{I}\mathbf{t}]\) is normal and Cohen-Macaulay. Explicitly, a Gröbner basis for \(\ker(\psi)\) is given by binary quasi-minors of \(D_{\boldsymbol{a}}\) whose leading terms are squarefree, as follows: (1) some of them are \(2\times 2\) minors of \(D_{a_{l}}\); (2) the others are binary quasi-minors such that each term has at most one entry from each \(D_{a_{l}}\). We take the monomial order on \(k[\mathbf{T}]\) to be the monomial order induced on \(k[\mathbf{T}]\) as a subring of \(k[\mathbf{x},\mathbf{T}]\), where the latter is given the monomial order of Convention 4.3.

_Proposition 4.9_.: Suppose that the ideals \(J_{i}\) are generated by subsets of \(x_{1},\dots,x_{n}\), and let the \(I_{i}\) be powers of the \(J_{i}\). Then the family of ideals \(I_{1},\dots,I_{r}\) is of multi-fiber type.

Proof.: We prove that every \(x\)-binary quasi-minor of \(C_{\boldsymbol{a}}\) (a binary quasi-minor that involves \(x\)'s) can be generated by \(2\times 2\) \(x\)-minors and \(T\)-binary quasi-minors of \(C_{\boldsymbol{a}}\). Let \(f=x_{i}V_{1}V_{2}\dots V_{m}-x_{j}W_{1}W_{2}\dots W_{m}\) (the \(V_{i}\) and \(W_{i}\) are \(T\) variables). Without loss of generality we may assume that \(x_{i}\) and \(W_{1}\) are in the same row and that \(W_{1}\) and \(V_{1}\) are in the same column. If \(V_{1}\) and \(x_{j}\) are in the same row, then we have

\[f=x_{i}V_{1}V_{2}\dots V_{m}-x_{j}W_{1}W_{2}\dots W_{m}-x_{j}W_{1}V_{2}\dots V_{m}+x_{j}W_{1}V_{2}\dots V_{m}=(x_{i}V_{1}-x_{j}W_{1})V_{2}\dots V_{m}+x_{j}W_{1}(V_{2}\dots V_{m}-W_{2}\dots W_{m}).\]

If \(V_{1}\) and \(x_{j}\) are not in the same row, then there is an \(x_{v}\) which is in the same row as \(V_{1}\). We have

\[f=x_{i}V_{1}V_{2}\dots V_{m}-x_{j}W_{1}W_{2}\dots W_{m}-x_{v}W_{1}V_{2}\dots V_{m}+x_{v}W_{1}V_{2}\dots V_{m}=(x_{i}V_{1}-x_{v}W_{1})V_{2}\dots V_{m}+W_{1}(x_{v}V_{2}\dots V_{m}-x_{j}W_{2}\dots W_{m}).\]

We can continue this procedure until all generators are either \(2\times 2\) \(x\)-minors or \(T\)-binary quasi-minors of \(C_{\boldsymbol{a}}\).

_Remark 4.10_.: Instead of the lexicographic order we can put the graded reverse lexicographic order on \(k[\mathbf{x},\mathbf{T}]\), and in a much simpler way we can prove that binary quasi-minors form a Gröbner basis for \(\ker(\phi)\) (resp. \(\ker(\psi)\)). But this comes at the cost of leading terms which are no longer squarefree.
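Both rewriting steps in the proof of Proposition 4.9 are formal identities and can be checked symbolically. A minimal SymPy sketch for the case \(m=2\) (the names \(x_{i},x_{j},x_{v},V_{1},V_{2},W_{1},W_{2}\) are ours, chosen only for illustration):

```python
from sympy import symbols, expand

# x_i, x_j, x_v play the roles of entries in the x-column of C_a;
# V1, V2, W1, W2 are T-variables.
xi, xj, xv, V1, V2, W1, W2 = symbols('x_i x_j x_v V1 V2 W1 W2')

f = xi*V1*V2 - xj*W1*W2

# Case 1 of the proof: V1 and x_j lie in the same row (m = 2, tails V2, W2).
split1 = (xi*V1 - xj*W1)*V2 + xj*W1*(V2 - W2)
# Case 2: V1 lies in the row of some other x_v instead.
split2 = (xi*V1 - xv*W1)*V2 + W1*(xv*V2 - xj*W2)

print(expand(f - split1))  # 0
print(expand(f - split2))  # 0
```

In each case the first summand is a \(2\times 2\) \(x\)-minor times a \(T\)-monomial, and the second summand is a multiple of a smaller quasi-minor, so the procedure terminates as claimed.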
_Example 4.11_.: We consider the polynomial ring \(R=k[x_{1},x_{2},x_{3}]\). Let \(I_{1}=\langle x_{1}^{2},x_{1}x_{2},x_{2}^{2}\rangle\), \(I_{2}=\langle x_{2}^{2},x_{2}x_{3},x_{3}^{2}\rangle\), \(I_{3}=\langle x_{1}^{2},x_{1}x_{3},x_{3}^{2}\rangle\). We consider the multi-Rees algebra \(R[I_{1}t_{1},I_{2}t_{2},I_{3}t_{3}]\). We have

\[C_{\boldsymbol{a}}=\begin{bmatrix}x_{1}&T_{x_{1}x_{2}t_{1}}&T_{x_{1}^{2}t_{1}}&&&T_{x_{1}x_{3}t_{3}}&T_{x_{1}^{2}t_{3}}\\ x_{2}&T_{x_{2}^{2}t_{1}}&T_{x_{1}x_{2}t_{1}}&T_{x_{2}x_{3}t_{2}}&T_{x_{2}^{2}t_{2}}&&\\ x_{3}&&&T_{x_{3}^{2}t_{2}}&T_{x_{2}x_{3}t_{2}}&T_{x_{3}^{2}t_{3}}&T_{x_{1}x_{3}t_{3}}\end{bmatrix}.\]

Thus \(\ker(\phi)\) is generated by the \(T\)-binary quasi-minors and the \(2\times 2\) \(x\)-minors of \(C_{\boldsymbol{a}}\). But the reduced Gröbner basis has some \(x\)-binary quasi-minors that are not \(2\times 2\) minors. For example

\[x_{2}T_{x_{2}x_{3}t_{2}}T_{x_{1}^{2}t_{3}}-x_{1}T_{x_{2}^{2}t_{2}}T_{x_{1}x_{3}t_{3}}\]

is an \(x\)-binary quasi-minor. Its leading term is \(x_{2}T_{x_{2}x_{3}t_{2}}T_{x_{1}^{2}t_{3}}\), which is not divisible by the leading term of any \(2\times 2\) minor. We see that if we keep everything in the order described in Convention 4.3, except that we put the order \(x_{i}\succ T_{mt_{j}}\), then again, with a similar proof, the binary quasi-minors form a Gröbner basis for \(\ker(\phi)\) (resp. \(\ker(\psi)\)). In this case all the conditions of [8, Theorem 5.1] are satisfied. However, under this order, the leading term of the binary quasi-minor

\[x_{2}T_{x_{1}^{2}t_{1}}T_{x_{3}^{2}t_{3}}-x_{3}T_{x_{1}x_{2}t_{1}}T_{x_{1}x_{3}t_{3}},\]

which is \(x_{2}T_{x_{1}^{2}t_{1}}T_{x_{3}^{2}t_{3}}\), is not divisible by the leading term of any \(2\times 2\) minor. Hence [8, Theorem 5.1] does not hold in this setting. Even so, we know that this family of ideals is of multi-fiber type.

_Example 4.12_.: Let \(R=k[x_{1},x_{2},x_{3}]\). Let \(I_{1}=\langle x_{1},x_{2}\rangle\), \(I_{2}=\langle x_{2},x_{3}\rangle\), \(I_{3}=\langle x_{1},x_{3}\rangle\). Then \(\ker(\phi)\) is

\[\langle x_{1}T_{x_{2}t_{1}}-x_{2}T_{x_{1}t_{1}},x_{1}T_{x_{3}t_{3}}-x_{3}T_{x_{1}t_{3}},x_{2}T_{x_{3}t_{2}}-x_{3}T_{x_{2}t_{2}},T_{x_{1}t_{1}}T_{x_{2}t_{2}}T_{x_{3}t_{3}}-T_{x_{1}t_{3}}T_{x_{2}t_{1}}T_{x_{3}t_{2}}\rangle.\]

We see that \(C_{\boldsymbol{a}}\) has the form below:

\[\begin{bmatrix}x_{1}&T_{x_{1}t_{1}}&&T_{x_{1}t_{3}}\\ x_{2}&T_{x_{2}t_{1}}&T_{x_{2}t_{2}}&\\ x_{3}&&T_{x_{3}t_{2}}&T_{x_{3}t_{3}}\end{bmatrix}.\]

This special example can also be recovered by using the theory of Rees algebras of modules, as follows. The module \(M=I_{1}\oplus I_{2}\oplus I_{3}\) has a linear resolution

\[0\to R^{3}\xrightarrow{\Phi}R^{6}\to M\to 0\]

where

\[\Phi=\begin{bmatrix}x_{2}&0&0\\ -x_{1}&0&0\\ 0&x_{3}&0\\ 0&-x_{2}&0\\ 0&0&x_{3}\\ 0&0&-x_{1}\end{bmatrix}.\]

Hence \(\operatorname{pd}M=1\). Furthermore, since \(M\) is free in codimension \(1\), and \(4\)-generated in codimension \(2\), by [20, Proposition 4.11] the Rees algebra of \(M\), which is the multi-Rees algebra in question, has the expected defining equations, in the sense that

\[\mathcal{R}(M)\cong R[T_{1},T_{2},T_{3},T_{4},T_{5},T_{6}]/\langle[x_{1}\,x_{2}\,x_{3}]B,\det B\rangle\]

where

\[B=\begin{bmatrix}-T_{2}&0&-T_{6}\\ T_{1}&-T_{4}&0\\ 0&T_{3}&T_{5}\end{bmatrix}\]

is the matrix defined by the equation

\[[T]\Phi=[x]B.\]

In this special example the equations of the multi-fiber ring are also known by [26, Proposition 3.1]. Moreover, in this example the binary quasi-minors form a universal Gröbner basis (cf. [7, Corollary 5.12]).
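The generators in Example 4.12 can be verified mechanically: substituting \(T_{x_{i}t_{j}}\mapsto x_{i}t_{j}\) must send each listed binomial to zero (this checks containment in \(\ker(\phi)\), not that the list generates). A small SymPy sketch, with names of our own choosing:

```python
from sympy import symbols, expand

x1, x2, x3, t1, t2, t3 = symbols('x1 x2 x3 t1 t2 t3')

# phi sends T_{x_i t_j} to the monomial x_i * t_j.
T = {('x1', 't1'): x1*t1, ('x2', 't1'): x2*t1,
     ('x2', 't2'): x2*t2, ('x3', 't2'): x3*t2,
     ('x1', 't3'): x1*t3, ('x3', 't3'): x3*t3}

gens = [x1*T[('x2', 't1')] - x2*T[('x1', 't1')],
        x1*T[('x3', 't3')] - x3*T[('x1', 't3')],
        x2*T[('x3', 't2')] - x3*T[('x2', 't2')],
        T[('x1', 't1')]*T[('x2', 't2')]*T[('x3', 't3')]
        - T[('x1', 't3')]*T[('x2', 't1')]*T[('x3', 't2')]]

print([expand(g) for g in gens])  # [0, 0, 0, 0]
```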
_Example 4.13_.: Let \(R=k[x_{1},x_{2},x_{3},x_{4}]\). Let \(I_{1}=\langle x_{1}^{2},x_{1}x_{2},x_{2}^{2}\rangle\), \(I_{2}=\langle x_{1}^{3},x_{1}^{2}x_{3},x_{1}x_{3}^{2},x_{3}^{3}\rangle\), \(I_{3}=\langle x_{2}^{2},x_{2}x_{3},x_{3}^{2}\rangle\), \(I_{4}=\langle x_{1},x_{4}\rangle\), \(I_{5}=\langle x_{2},x_{4}\rangle\). We see that \(C_{\boldsymbol{a}}\) has the form below:

\[\begin{bmatrix}x_{1}&T_{x_{1}x_{2}t_{1}}&T_{x_{1}^{2}t_{1}}&T_{x_{1}x_{3}^{2}t_{2}}&T_{x_{1}^{2}x_{3}t_{2}}&T_{x_{1}^{3}t_{2}}&&&T_{x_{1}t_{4}}&\\ x_{2}&T_{x_{2}^{2}t_{1}}&T_{x_{1}x_{2}t_{1}}&&&&T_{x_{2}x_{3}t_{3}}&T_{x_{2}^{2}t_{3}}&&T_{x_{2}t_{5}}\\ x_{3}&&&T_{x_{3}^{3}t_{2}}&T_{x_{1}x_{3}^{2}t_{2}}&T_{x_{1}^{2}x_{3}t_{2}}&T_{x_{3}^{2}t_{3}}&T_{x_{2}x_{3}t_{3}}&&\\ x_{4}&&&&&&&&T_{x_{4}t_{4}}&T_{x_{4}t_{5}}\end{bmatrix}.\]

For example,

\[T_{x_{1}x_{2}t_{1}}T_{x_{2}^{2}t_{3}}T_{x_{1}x_{3}^{2}t_{2}}-T_{x_{2}^{2}t_{1}}T_{x_{2}x_{3}t_{3}}T_{x_{1}^{2}x_{3}t_{2}}\]

is one of the generators. We cannot apply [20, Proposition 4.11] or [26, Proposition 3.1] to this example. As we said in the introduction, when the powers of all the ideals are equal to \(1\), the multi-fiber ring of these ideals is just the toric ring of edge ideals. This suggests describing binary quasi-minors as cycles of a bipartite graph, even when the powers of the ideals are not necessarily \(1\). We illustrate this in the present example, and we argue it in general in Discussion 4.14. Now we create a bipartite graph. One partition of the vertices consists of the \(x\)'s, and the other partition consists of \(1\) together with the \(T_{mt_{j}}\), where \(m\) ranges over the generators of \(J_{j}^{a_{j}-1}\) (recall \(I_{j}=J_{j}^{a_{j}}\)). We connect every \(x\) to \(1\). We also connect the \(x\)'s that are generators of \(J_{j}\) to \(T_{mt_{j}}\). In this example our graph is as in Figure 3.

Figure 3. The bipartite graph for Example 4.13

Now every cycle in this graph gives us a binary quasi-minor. If we label the edges with numbers in order, then for every even edge, where one vertex of the edge is \(x_{l}\), two cases occur: if the other vertex is \(1\), then we get a factor \(x_{l}\) in one term of the binary quasi-minor; if the other vertex is \(T_{mt_{j}}\), then we get a factor \(T_{x_{l}mt_{j}}\) in that term. We do the same with the odd edges to get the factors of the other term of the binary quasi-minor. For example, the binary quasi-minor

\[T_{x_{1}x_{2}t_{1}}T_{x_{2}^{2}t_{3}}T_{x_{1}x_{3}^{2}t_{2}}-T_{x_{2}^{2}t_{1}}T_{x_{2}x_{3}t_{3}}T_{x_{1}^{2}x_{3}t_{2}}\]

corresponds to the cycle given in Figure 4. Another easy example is the \(2\times 2\) minor \(x_{1}T_{x_{4}t_{4}}-x_{4}T_{x_{1}t_{4}}\); the corresponding cycle is shown in Figure 5.

_Discussion 4.14_.: Let \(\alpha\) be a binary quasi-minor of \(C_{\boldsymbol{a}}\). We claim that \(\alpha\) corresponds to a cycle as described in Example 4.13. We prove the claim for the case in which the binary quasi-minor involves \(x\); the other case is similar. Suppose \(i_{1},\ldots,i_{v}\) are the rows in which factors of \(\alpha\) appear. Without loss of generality we may assume that the \(x\)'s appear in rows \(i_{1}\) and \(i_{v}\). Let one term of \(\alpha\) be

\[x_{i_{1}}T_{m_{1}t_{l_{1}}}T_{m_{2}t_{l_{2}}}\dots T_{m_{v-1}t_{l_{v-1}}},\]

where \(T_{m_{s}t_{l_{s}}}\) is in row \(i_{s+1}\), and the \(l_{s}\) are not necessarily distinct. Then without loss of generality we may assume that the other term of \(\alpha\) is

\[x_{i_{v}}T_{\frac{x_{i_{1}}}{x_{i_{2}}}m_{1}t_{l_{1}}}T_{\frac{x_{i_{2}}}{x_{i_{3}}}m_{2}t_{l_{2}}}\dots T_{\frac{x_{i_{v-1}}}{x_{i_{v}}}m_{v-1}t_{l_{v-1}}}.\]
Then the cycle starts from \(1\) and goes to \(x_{i_{1}}\). But \(T_{m_{1}t_{l_{1}}}\) is in row \(i_{2}\). Hence in the bipartite graph, \(x_{i_{1}}\) is attached to \(T_{\frac{m_{1}}{x_{i_{2}}}t_{l_{1}}}\). This edge creates \(T_{\frac{x_{i_{1}}}{x_{i_{2}}}m_{1}t_{l_{1}}}\). The next edge in the cycle goes from \(T_{\frac{m_{1}}{x_{i_{2}}}t_{l_{1}}}\) to \(x_{i_{2}}\). This edge creates \(T_{m_{1}t_{l_{1}}}\). In fact, the vertices in one partition of the bipartite graph are \(x_{i_{1}},\ldots,x_{i_{v}}\), and in the other partition the vertices are \(1,T_{\frac{m_{1}}{x_{i_{2}}}t_{l_{1}}},\ldots,T_{\frac{m_{v-1}}{x_{i_{v}}}t_{l_{v-1}}}\). The vertex \(T_{\frac{m_{s-1}}{x_{i_{s}}}t_{l_{s-1}}}\) is attached to the vertices \(x_{i_{s-1}}\) and \(x_{i_{s}}\). These edges create \(T_{\frac{x_{i_{s-1}}}{x_{i_{s}}}m_{s-1}t_{l_{s-1}}}\) and \(T_{m_{s-1}t_{l_{s-1}}}\). Also, for \(1<s<v\), \(x_{i_{s}}\) is attached to \(T_{\frac{m_{s-1}}{x_{i_{s}}}t_{l_{s-1}}}\) and \(T_{\frac{m_{s}}{x_{i_{s+1}}}t_{l_{s}}}\). Finally, \(x_{i_{v}}\) is attached to \(1\) and \(T_{\frac{m_{v-1}}{x_{i_{v}}}t_{l_{v-1}}}\). From this argument we see that every such cycle gives us a binary quasi-minor.

_Remark 4.15_.: We see that the directed cycles \(C_{M}\) and \(C_{N}\) in the proof of Theorem 4.7 (which have the same vertices) correspond to two families of bipartite graphs as argued in Discussion 4.14. Each pair of these graphs, corresponding to \(C_{M}\) and \(C_{N}\), has different vertices.
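The cycle-to-binomial dictionary of Example 4.13 and Discussion 4.14 is easy to mechanize. Below is a minimal sketch (our own helper, not from the paper) for the bipartite graph of Example 4.12: there all \(a_{j}=1\), so \(J_{j}^{a_{j}-1}=\langle 1\rangle\) and the second partition reduces to \(\{1,t_{1},t_{2},t_{3}\}\). The extracted 6-cycle recovers exactly the cubic generator of Example 4.12.

```python
import networkx as nx

# Bipartite graph of Example 4.12: x-vertices on one side,
# {1, t1, t2, t3} on the other; every x is also joined to 1.
G = nx.Graph()
G.add_edges_from([('x1', 't1'), ('x2', 't1'),   # J_1 = <x1, x2>
                  ('x2', 't2'), ('x3', 't2'),   # J_2 = <x2, x3>
                  ('x1', 't3'), ('x3', 't3')])  # J_3 = <x1, x3>
for x in ('x1', 'x2', 'x3'):
    G.add_edge(x, '1')

cycle = ['x1', 't1', 'x2', 't2', 'x3', 't3']    # a 6-cycle, found by inspection
n = len(cycle)
assert all(G.has_edge(cycle[i], cycle[(i + 1) % n]) for i in range(n))

# Walk the cycle; odd-numbered edges feed one term, even-numbered the other.
term_odd, term_even = [], []
for i in range(n):
    u, v = cycle[i], cycle[(i + 1) % n]
    x, t = (u, v) if u.startswith('x') else (v, u)
    factor = x if t == '1' else f'T_{{{x}{t}}}'
    (term_odd if i % 2 == 0 else term_even).append(factor)

print(' * '.join(term_odd), '-', ' * '.join(term_even))
# T_{x1t1} * T_{x2t2} * T_{x3t3} - T_{x2t1} * T_{x3t2} * T_{x1t3}
```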
## 5. Concluding remarks and questions

We consider the polynomial ring \(k[\mathbf{x}]\). We also consider the poset \(L=\{x_{1},\ldots,x_{n}\}\). Let \(L_{i}\) (\(1\leq i\leq r\)) be subposets of \(L\). Suppose \(M_{1},\ldots,M_{r}\) is a collection of monomials in \(k[\mathbf{x}]\). We take the ideals \(I_{i}=L_{i}\)-\(\operatorname{Borel}(M_{i})\), which means these ideals are generated by the monomials obtained by Borel moves starting from \(M_{i}\) on the variables that are in \(L_{i}\). We consider the multi-Rees algebra \(k[\mathbf{x}][I_{1}t_{1},\ldots,I_{r}t_{r}]\). As in the previous section, we define the \(\mathbf{T}\) variables and the polynomial ring \(k[\mathbf{x},\mathbf{T}]\), and we define the map \(\phi\) similarly. We define a bipartite graph, where one partition of the vertices is \(t_{1},\ldots,t_{r}\) and the other partition is \(x_{1},\ldots,x_{n}\). Edges are made by connecting the variables in the posets \(L_{i}\) to \(t_{i}\). In [5], we showed that if this bipartite graph is chordal (that means every cycle of length greater than or equal to \(6\) has a chord), then there is a Gröbner basis of quadrics for \(\ker(\phi)\) with respect to a lexicographic order. We pose the following question for the interested reader:

_Question 5.1_.: If the bipartite incidence graph is non-chordal, what do the equations look like? Consider the quasi-matrix \(C_{\boldsymbol{a}}\). Since we deal with Borel moves, for some \(T_{mt_{j}}\) the monomial \(m\) may not be in \(I_{j}\). We delete these \(T\) variables from \(C_{\boldsymbol{a}}\) and obtain another quasi-matrix, say, \(A\). If we take the monomial order given in Convention 4.3, is the Gröbner basis of \(\ker(\phi)\) formed by the binary quasi-minors of \(A\)?

Also, in [5], we showed that when the associated incidence bipartite graph is chordal, the multi-Rees algebra is Koszul. We also posed the question there whether this is a necessary condition for the multi-Rees algebra \(k[\mathbf{x}][I_{1}t_{1},\ldots,I_{r}t_{r}]\) to be Koszul. Here, in the case discussed in the present paper, we show that this is indeed a necessary condition.

_Proposition 5.2_.: Suppose that the ideals \(J_{i}\) are generated by subsets of \(x_{1},\ldots,x_{n}\). Let \(I_{i}=J_{i}^{a_{i}}\) and suppose \(\mathcal{I}=\{I_{1},\ldots,I_{r}\}\). If the incidence bipartite graph associated to the multi-Rees algebra is non-chordal, then the multi-Rees algebra \(k[\mathbf{x}][\mathcal{I}\mathbf{t}]\) is not Koszul.

Proof.: Without loss of generality we may assume that we have a cycle, shown in Figure 6, of length \(\geq 6\) which does not have a chord. Then we see that the binary quasi-minor

\[\alpha=T_{x_{1}^{a_{1}}t_{1}}T_{x_{2}^{a_{2}}t_{2}}\ldots T_{x_{m}^{a_{m}}t_{m}}-T_{x_{1}^{a_{1}-1}x_{2}t_{1}}T_{x_{2}^{a_{2}-1}x_{3}t_{2}}\ldots T_{x_{m}^{a_{m}-1}x_{1}t_{m}}\]

cannot be generated by quadrics. Indeed, if \(\alpha=\sum\beta_{i}\gamma_{i}\), where the \(\gamma_{i}\) are quadrics, then there is at least one \(\gamma_{j}\) such that two factors of the first term of \(\alpha\) form one term of \(\gamma_{j}\). Then \(\phi(\gamma_{j})\neq 0\), which is a contradiction.

We have a similar argument for the multi-fiber ring.

_Proposition 5.3_.: Suppose that the ideals \(J_{i}\) are generated by subsets of \(x_{1},\ldots,x_{n}\). Let \(I_{i}=J_{i}^{a_{i}}\) and suppose \(\mathcal{I}=\{I_{1},\ldots,I_{r}\}\). If the incidence bipartite graph associated to the multi-fiber ring \(k[\mathcal{I}\mathbf{t}]\) is non-chordal, then the multi-fiber ring is not Koszul.

## 6. Acknowledgments

I would like to express my gratitude to my former advisor, Mark Johnson, for his valuable comments. I would also like to express my gratitude to Michael DiPasquale, who gave many valuable comments that improved this paper.
2308.16314
Limit theorems for high-dimensional Betti numbers in the multiparameter random simplicial complexes
We consider the multiparameter random simplicial complex on a vertex set $\{ 1,\dots,n \}$, which is parameterized by multiple connectivity probabilities. Our key results concern the topology of this complex of dimensions higher than the critical dimension. We show that the higher-dimensional Betti numbers satisfy strong laws of large numbers and central limit theorems. Moreover, lower tail large deviations for these Betti numbers are also discussed. Some of our results indicate an occurrence of phase transitions in terms of the scaling constants of the central limit theorem, and the exponentially decaying rate of convergence of lower tail large deviation probabilities.
Takashi Owada, Gennady Samorodnitsky
2023-08-30T20:46:18Z
http://arxiv.org/abs/2308.16314v1
# Limit theorems for high-dimensional Betti numbers in the multiparameter random simplicial complexes

###### Abstract.

We consider the multiparameter random simplicial complex on a vertex set \(\{1,\ldots,n\}\), which is parameterized by multiple connectivity probabilities. Our key results concern the topology of this complex of dimensions higher than the critical dimension. We show that the higher-dimensional Betti numbers satisfy strong laws of large numbers and central limit theorems. Moreover, lower tail large deviations for these Betti numbers are also discussed. Some of our results indicate an occurrence of phase transitions in terms of the scaling constants of the central limit theorem, and the exponentially decaying rate of convergence of lower tail large deviation probabilities.

Key words and phrases: strong law of large numbers, central limit theorem, large deviation, Betti number, multiparameter random simplicial complex 2010 Mathematics Subject Classification: Primary 60F05, 60F10, 60F15. Secondary 55U05, 60C05. Owada's research was partially supported by the AFOSR grant FA9550-22-1-0238 at Purdue University. Samorodnitsky's research was partially supported by the AFOSR grant FA9550-22-1-0091 at Cornell University.
2307.08977
On Sharpness of $L\log L$ Criterion for Weak Type $(1,1)$ boundedness of rough operators
In this note, we show that the $L\log L$ hypothesis is the strongest size condition on a homogeneous rough function on the sphere which ensures the weak type $(1,1)$ boundedness of the corresponding singular integral $T_\Omega$, provided $T_\Omega$ is bounded in $L^2$.
Ankit Bhojak
2023-07-18T05:19:32Z
http://arxiv.org/abs/2307.08977v1
# On sharpness of \(L\log L\) criterion for weak type \((1,1)\) boundedness of rough operators

###### Abstract.

In this note, we show that the \(L\log L\) hypothesis is the strongest size condition on a homogeneous rough function on the sphere which ensures the weak type \((1,1)\) boundedness of the corresponding singular integral \(T_{\Omega}\), provided \(T_{\Omega}\) is bounded in \(L^{2}\). 2010 Mathematics Subject Classification: Primary 42B20

## 1. Introduction

Let \(\Omega\in L^{1}(\mathbb{S}^{d-1})\) with \(\int_{\mathbb{S}^{d-1}}\Omega(\theta)d\theta=0\), where \(d\theta\) is the surface measure on \(\mathbb{S}^{d-1}\). Calderón and Zygmund [2] considered the rough singular integrals defined as

\[T_{\Omega}f(x)=p.v.\int\frac{1}{|x-y|^{d}}\Omega\Big{(}\frac{x-y}{|x-y|}\Big{)}f(y)\;dy.\]

They showed that \(\Omega\in L\log L(\mathbb{S}^{d-1})\), i.e. \(\int_{\mathbb{S}^{d-1}}|\Omega(\theta)|\log(e+|\Omega(\theta)|)\,d\theta<\infty\), implies that \(T_{\Omega}\) is bounded on \(L^{p}(\mathbb{R}^{d})\) for \(1<p<\infty\). The singular integral \(T_{\Omega}\) was shown to be of weak type \((1,1)\) using \(TT^{*}\) arguments by Christ and Rubio de Francia [3] in dimension \(d=2\) (and independently by Hofmann [7]). The case of general dimensions was resolved by Seeger [11], who showed that \(T_{\Omega}\) is of weak type \((1,1)\) for \(\Omega\in L\log L(\mathbb{S}^{d-1})\), assuming the \(L^{2}\) boundedness of \(T_{\Omega}\). It is of interest to know other sufficient conditions on \(\Omega\) that ensure the weak type boundedness of the operator \(T_{\Omega}\). In fact, at the inception of this problem, Calderón and Zygmund [2] showed that \(\Omega\in L\log L\) is "almost" a necessary size condition for \(T_{\Omega}\) to be \(L^{2}\) bounded. If we drop the condition that \(\Omega\in L\log L\), then Calderón and Zygmund [2] pointed out that \(T_{\Omega}\) may even fail to be \(L^{2}\) bounded. In fact, the examples of \(\Omega\) constructed in [13] lie outside the space \(L\log L\), and the corresponding operators \(T_{\Omega}\) are unbounded on \(L^{2}(\mathbb{R}^{d})\). Later on, it was shown in [5, 9] that \(\Omega\in H^{1}(\mathbb{S}^{d-1})\) in the sense of Coifman and Weiss [4] implies \(T_{\Omega}:L^{p}(\mathbb{R}^{d})\to L^{p}(\mathbb{R}^{d})\), \(1<p<\infty\). It is still an open problem whether \(T_{\Omega}\) is of weak type \((1,1)\) for \(\Omega\in H^{1}(\mathbb{S}^{d-1})\). A partial result assuming additional conditions on \(H^{1}\)-atoms in dimension two was obtained by Stefanov [12]. In [6, 8], it was shown that \(T_{\Omega}\) distinguishes \(L^{p}\) spaces by considering a suitable quantity based on the Fourier transform of \(\Omega\). However, we would like to know if there exists an Orlicz space \(X\supsetneq L\log L\) which would ensure that the \(L^{2}\) boundedness of \(T_{\Omega}\) implies the weak \((1,1)\) boundedness of \(T_{\Omega}\) whenever \(\Omega\in X\). We will show that no such \(X\) exists. To state our main result, we introduce the Orlicz spaces and discuss some of their basic properties.

**Definition 1.1** ([1]).: Let \(\Phi:[0,\infty)\to[0,\infty)\) be a Young's function, i.e. there exists an increasing and left continuous function \(\phi:[0,\infty)\to[0,\infty)\) with \(\phi(0)=0\) such that \(\Phi(t)=\int_{0}^{t}\phi(u)\;du\), and \(\frac{\Phi(t)}{t}\to\infty\) as \(t\to\infty\). We say \(\Omega\in\Phi(L)(\mathbb{S}^{1})\) if the quantity

\[\|\Omega\|_{\Phi(L)}=\int_{\mathbb{S}^{1}}\Phi(|\Omega(\theta)|)\;d\theta \tag{1.1}\]

is finite.
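For orientation, two standard instances of Definition 1.1, recorded here only as illustrations:

\[\Phi(t)=t\log(e+t)\ \text{ gives }\ \Phi(L)(\mathbb{S}^{1})=L\log L(\mathbb{S}^{1}),\qquad\Phi_{\epsilon}(t)=t\big{(}\log(e+t)\big{)}^{1-\epsilon}\ \text{ gives }\ \Phi_{\epsilon}(L)(\mathbb{S}^{1})=L(\log L)^{1-\epsilon}(\mathbb{S}^{1}),\]

for \(0<\epsilon\leq 1\). For \(\Phi_{\epsilon}\) the ratio \(t\log(e+t)/\Phi_{\epsilon}(t)=(\log(e+t))^{\epsilon}\) tends to infinity, so \(\Phi_{\epsilon}\) satisfies the hypothesis (2.1) of Theorem 2.1 below, while \(\Phi(t)=t\log(e+t)\) itself does not.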
The quantity in (1.1) fails to be a norm, and \(\Phi(L)(\mathbb{S}^{1})\) is not even a linear space. To remedy this, we define the set

\[L^{\Phi}(\mathbb{S}^{1})=\{\Omega:\mathbb{S}^{1}\to\mathbb{R}:\exists k>0\text{ such that }\|k^{-1}\Omega\|_{\Phi(L)}<\infty\}.\]

We define the Luxemburg norm as

\[\left|\kern-1.075pt\left|\kern-1.075pt\left|\Omega\right|\kern-1.075pt\right|\kern-1.075pt\right|_{\Phi(L)}=\inf\{k>0:\|k^{-1}\Omega\|_{\Phi(L)}<\infty\}.\]

It is well known that the Orlicz space \(L^{\Phi}(\mathbb{S}^{1})\) forms a Banach space with this norm. We state the following fact that compares this norm with the quantity in (1.1); see Lemma 8.8 in [1].

**Lemma 1.2**.: _If \(\|\Omega\|_{\Phi(L)}<\left|\kern-1.075pt\left|\kern-1.075pt\left|\Omega\right|\kern-1.075pt\right|\kern-1.075pt\right|_{\Phi(L)}\), then \(\left|\kern-1.075pt\left|\kern-1.075pt\left|\Omega\right|\kern-1.075pt\right|\kern-1.075pt\right|_{\Phi(L)}\leq 1.\)_

## 2. Main result

We state our main result for dimension two, but the same also holds for higher dimensions using the methods in [13, 6]. Our main result is the following.

**Theorem 2.1**.: _Let \(\Phi\) be a Young's function such that_

\[\Psi(t)=\frac{t\log(e+t)}{\Phi(t)}\to\infty,\text{ as }t\to\infty. \tag{2.1}\]

_Then there exists an \(\Omega\in\Phi(L)(\mathbb{S}^{1})\) such that \(T_{\Omega}\) is \(L^{p}\) bounded iff \(p=2\). In particular, \(T_{\Omega}\) does not map \(L^{1}(\mathbb{R}^{2})\) to \(L^{1,\infty}(\mathbb{R}^{2})\)._

We note that using the geometric construction in [8], one can obtain the above theorem for the spaces \(L(\log L)^{1-\epsilon}(\mathbb{S}^{1})\), \(0<\epsilon\leq 1\). To obtain the general case, we will employ the construction in [6] with a suitable modification to ensure that the resulting \(\Omega\) lies in the required Orlicz space.

## 3. Proof of Theorem 2.1

To prove Theorem 2.1, we will rely on a transference principle [6]. The space of \(L^{p}\) multipliers \(M^{p}(\mathbb{T})\) is defined as

\[M^{p}(\mathbb{T})=\{\mathbf{a}=\{a_{n}\}\in l^{\infty}(\mathbb{Z}):\;T_{\mathbf{a}}f(x)=\sum_{n\in\mathbb{Z}}a_{n}\widehat{f}(n)e^{2\pi inx}\text{ is bounded on }L^{p}(\mathbb{T})\},\]

with \(\|\mathbf{a}\|_{M^{p}(\mathbb{T})}=\|T_{\mathbf{a}}\|_{L^{p}(\mathbb{T})\to L^{p}(\mathbb{T})}\). The idea is to construct a sequence \(\{\Omega_{n}\}\) such that the \(L^{p}\) norm of \(T_{\Omega_{n}}\) is large for \(p\neq 2\). We achieve this by employing the fact that \(\{e^{2\pi ikx},\;k\in\mathbb{Z}\}\) is not an unconditional basis for \(L^{p}([0,1])\), \(p\neq 2\). We have:

**Lemma 3.1** ([6]).: _For \(p\neq 2\) and fixed \(n\in\mathbb{N}\), there exist finite sequences \(\{a_{k}\}_{k=1}^{n}\) and \(\{\epsilon_{k}\}_{k=1}^{n}\) (depending on \(n\)) with \(\epsilon_{k}\in\{-1,1\}\) such that_

\[\Big{\|}\sum_{k=1}^{n}\epsilon_{k}a_{k}e^{2\pi ikx}\Big{\|}_{L^{p}([0,1])}\geq c_{p}n^{\left|\frac{1}{2}-\frac{1}{p}\right|}\Big{\|}\sum_{k=1}^{n}a_{k}e^{2\pi ikx}\Big{\|}_{L^{p}([0,1])},\]

_where \(c_{p}>0\) depends only on \(p\). Consequently, \(\|\{\ldots,0,\epsilon_{1},\epsilon_{2},\ldots,\epsilon_{n},0,\ldots\}\|_{M^{p}(\mathbb{T})}\gtrsim n^{\left|\frac{1}{2}-\frac{1}{p}\right|}\)._
_Moreover, we can choose \(\epsilon_{k}\) such that_

\[\|\{\ldots,0,\epsilon_{1},\epsilon_{2},\ldots,\epsilon_{n},0,\ldots\}\|_{M^{p}(\mathbb{T})}=\sup\{\|\{\ldots,0,\delta_{1},\delta_{2},\ldots,\delta_{n},0,\ldots\}\|_{M^{p}(\mathbb{T})}:\ |\delta_{k}|\leq 1\}.\]

Proof.: The inequality follows from the failure of unconditionality of the basis \(\{e^{2\pi ikx},\ k\in\mathbb{Z}\}\) for \(p\neq 2\), with a constant \(K(n)\to\infty\) as \(n\to\infty\). We justify the choice of the constant \(c_{p}n^{\left|\frac{1}{2}-\frac{1}{p}\right|}\). Indeed, we invoke Theorem 1 from [10]: _For \(n\in\mathbb{N}\), there exists \(\{\epsilon_{k}\}_{k=1}^{n}\) with \(\epsilon_{k}=\pm 1\) such that \(\|\sum_{k=1}^{n}\epsilon_{k}e^{2\pi ikx}\|_{L^{\infty}([0,1])}\leq 5n^{\frac{1}{2}}\)._ By using the well-known fact that the \(L^{p}\) norm of the Dirichlet kernel satisfies the estimate

\[\|\sum_{k=1}^{n}\ e^{2\pi ikx}\|_{L^{p}([0,1])}\sim n^{1-\frac{1}{p}},\]

we have, for \(p>2\) and \(a_{k}=\epsilon_{k}\),

\[\Big{\|}\sum_{k=1}^{n}\epsilon_{k}^{2}e^{2\pi ikx}\Big{\|}_{L^{p}([0,1])}\geq c_{p}n^{\frac{1}{2}-\frac{1}{p}}\Big{\|}\sum_{k=1}^{n}\epsilon_{k}e^{2\pi ikx}\Big{\|}_{L^{p}([0,1])}.\]

The analogous inequality for \(p<2\) follows from duality.

If we have a sequence of multipliers \(\{\gamma_{n}\}\) on \(\mathbb{R}^{2}\) such that \(\gamma_{n}|_{\mathbb{Z}}=\{\ldots,0,\epsilon_{1},\epsilon_{2},\ldots,\epsilon_{n},0,\ldots\}\), where the \(\epsilon_{k}\) are from the above lemma, then by classical transference \(\|T_{\gamma_{n}}\|_{L^{p}(\mathbb{R}^{2})\to L^{p}(\mathbb{R}^{2})}\gtrsim n^{\left|\frac{1}{2}-\frac{1}{p}\right|}\). Our aim is to produce \(\gamma_{n}\) such that \(T_{\gamma_{n}}=T_{\Omega_{n}}\). Towards this, the following observation is important.

**Lemma 3.2** ([6]).: _Let \(1<p<\infty\) and let \(\gamma\in M^{p}(\mathbb{R}^{2})\) be continuous on an arithmetic progression \(\{x_{k}\}_{k=1}^{n}\) in \(\mathbb{R}^{2}\) (i.e. there exists a vector \(v\in\mathbb{R}^{2}\) such that \(x_{k}-x_{k-1}=v\)). Then there exists a constant \(C_{p}>0\) such that_

\[\|\gamma\|_{M^{p}(\mathbb{R}^{2})}\geq C_{p}\|\{\ldots,0,\gamma(x_{1}),\gamma(x_{2}),\ldots,\gamma(x_{n}),0,\ldots\}\|_{M^{p}(\mathbb{T})}.\]

The proof of the above lemma can be found in [6]. We now begin the proof of Theorem 2.1.

Proof of Theorem 2.1.: We fix a large \(N\in\mathbb{N}\). Let \(n=\Big{[}\Psi\left(\frac{N}{\log N}\right)\Big{]}\). By hypothesis, we have \(n\to\infty\) as \(N\to\infty\). Let \(s_{n}\in\mathbb{N}\) be a large number such that there exist natural numbers \(t_{1},t_{2},\ldots,t_{2n}\) satisfying: * The numbers \(t_{k}\) are in arithmetic progression, i.e. \(t_{k+1}-t_{k}=t_{k}-t_{k-1}\). * Let \(x_{k}=(t_{k},s_{n})\in\mathbb{R}^{2}\). Then the \(x_{k}\), \(k=1,\ldots,2n\), lie in the second quadrant between the \(y\)-axis and the line \(y=-x\). * \(|\frac{x_{k+1}}{|x_{k+1}|}-\frac{x_{k}}{|x_{k}|}|\sim\frac{1}{n}\). We denote by \(\tilde{x}_{k}\) the point on \(\mathbb{S}^{1}\) obtained by rotating the point \(\frac{x_{k}}{|x_{k}|}\) by \(\frac{\pi}{2}\) radians clockwise. We consider \(I_{k}\), \(k=1,\ldots,2n\), to be the arc on \(\mathbb{S}^{1}\) with centre \(\tilde{x}_{k}\) and arc length \(N^{-1}\), and denote by \(\mathfrak{R}_{\alpha}(I_{k})\) the arc obtained by rotating \(I_{k}\) by \(\alpha\) radians counterclockwise.
We define \(w_{k}\) as

\[w_{k}(\theta)=c_{I_{k}}(-\chi_{I_{k}}(\theta)+\chi_{\mathfrak{R}_{\frac{\pi}{2}}(I_{k})}(\theta)-\chi_{\mathfrak{R}_{\pi}(I_{k})}(\theta)+\chi_{\mathfrak{R}_{\frac{3\pi}{2}}(I_{k})}(\theta)),\]

where we choose \(c_{I_{k}}\) as follows. We recall that the Fourier transform of the kernel in \(T_{\Omega}\) for any even \(\Omega\) with mean value zero is given by

\[\widehat{K}_{\Omega}(\xi)=\int_{\mathbb{S}^{1}}\Omega(\theta)\log\frac{1}{|\langle\xi,\theta\rangle|}\ d\theta.\]

We define the larger quantity \(m(\Omega)\), which will be useful for our purpose:

\[m(\Omega)(\xi):=\int_{\mathbb{S}^{1}}|\Omega(\theta)|\log\frac{1}{|\langle\xi,\theta\rangle|}\ d\theta.\]

Clearly, \(|\widehat{K}_{\Omega}(\xi)|\leq m(\Omega)(\xi)\). We choose \(c_{I_{k}}\) such that \(m(w_{k})(\frac{x_{k}}{|x_{k}|})=1\). It is not difficult to see that \(c_{I_{k}}\) and \(\widehat{K}_{w_{k}}(\frac{x_{k}}{|x_{k}|})\) are independent of \(k\) and satisfy

\[c_{I_{k}}\sim\frac{N}{\log N},\]

\[1\lesssim\sup_{x}|\widehat{K}_{w_{k}}(x)|=\Big{|}\widehat{K}_{w_{k}}\Big{(}\frac{x_{k}}{|x_{k}|}\Big{)}\Big{|}\leq\sup_{x}m(w_{k})(x)=1.\]

We now set

\[\Omega_{n}=\sum_{k=1}^{2n}(-1)^{k}\epsilon_{[\frac{k+1}{2}]}w_{k},\]

where \([\ \cdot\ ]\) denotes the integer part and \(\epsilon_{[\cdot]}\) is as in Lemma 3.1. By the disjointness of the arcs \(I_{k}\), we have

\[\|\Omega_{n}\|_{\Phi(L)(\mathbb{S}^{1})}\sim\sum_{k=1}^{2n}\int_{I_{k}}\Phi(c_{I_{k}})\sim nN^{-1}\Phi\left(\frac{N}{\log N}\right)\sim 1.\]

Moreover, \(|\!|\!|\Omega_{n}|\!|\!|_{\Phi(L)}\lesssim 1\): if \(\|\Omega_{n}\|_{\Phi(L)}<|\!|\!|\Omega_{n}|\!|\!|_{\Phi(L)}\), then Lemma 1.2 gives \(|\!|\!|\Omega_{n}|\!|\!|_{\Phi(L)}\leq 1\), and otherwise \(|\!|\!|\Omega_{n}|\!|\!|_{\Phi(L)}\leq\|\Omega_{n}\|_{\Phi(L)}\lesssim 1\). To estimate the \(L^{2}\) norm of \(T_{\Omega_{n}}\), we will need the following. Let \(J_{k}\) be the arc of length \(\frac{1}{100n}\) whose bisector passes through the point \(\frac{x_{k}}{|x_{k}|}\). Then for \(x\in\mathbb{S}^{1}\) lying in the second quadrant between the \(y\)-axis and the line \(y=-x\) with \(x\notin\bigcup_{i=0}^{3}\mathfrak{R}_{\frac{i\pi}{2}}(J_{k})\), we have

\[m(w_{k})(x)\lesssim\frac{\log n}{\log N}. \tag{3.1}\]

The estimate (3.1) follows from the fact that for \(\gamma\in(2I_{k})^{c}\cap(0,\frac{\pi}{4})\), we have

\[|m(w_{I_{k}})(e^{i\gamma})|\lesssim\frac{|\log|\gamma-\tilde{x}_{k}||}{|\log|I_{k}||}.\]

Indeed, for \(\theta\in I_{k}\), we have \(|\gamma-\tilde{x}_{k}|<|\theta-\tilde{x}_{k}|+|\theta-\gamma|<\frac{|I_{k}|}{2}+|\theta-\gamma|<|\gamma-\tilde{x}_{k}|/2+|\theta-\gamma|\). Thus \(\frac{|\tilde{x}_{k}-\gamma|}{2}<|\theta-\gamma|\), and it follows that

\[|m(w_{I_{k}})(e^{i\gamma})|\lesssim-c_{I_{k}}\int_{I_{k}}\log|\sin(\theta-\gamma)|\ d\theta\leq c_{I_{k}}|I_{k}|\Big{|}\log\Big{|}\sin\Big{(}\frac{|\gamma-\tilde{x}_{k}|}{2}\Big{)}\Big{|}\Big{|}\lesssim\frac{|\log|\gamma-\tilde{x}_{k}||}{|\log|I_{k}||}.\]

Since \(\frac{\Phi(t)}{t}\to\infty\) as \(t\to\infty\), we have \(\frac{\Psi(t)}{\log t}\to 0\) as \(t\to\infty\). Therefore,

\[\|m(\Omega_{n})\|_{L^{\infty}(\mathbb{S}^{1})}\lesssim 1+\frac{n\log n}{\log N}\lesssim 1+\frac{\Psi\left(\frac{N}{\log N}\right)}{\log N}\log\Psi\left(\frac{N}{\log N}\right)\lesssim\log n. \tag{3.2}\]

Now we turn to the \(L^{p}\) bounds of \(T_{\Omega_{n}}\). We claim that

\[\|T_{\Omega_{n}}\|_{L^{p}(\mathbb{R}^{2})\to L^{p}(\mathbb{R}^{2})}\gtrsim n^{\left|\frac{1}{2}-\frac{1}{p}\right|}. \tag{3.3}\]
To achieve the above claim, we need to prove that for \(1\leq k\leq n\) and \(x\in\mathbb{S}^{1}\) lying in the second quadrant between the \(y\)-axis and the line \(y=-x\) with \(x\notin\left(\bigcup_{i=0}^{3}\mathfrak{R}_{\frac{i\pi}{2}}(J_{2k})\right)\cup\left(\bigcup_{i=0}^{3}\mathfrak{R}_{\frac{i\pi}{2}}(J_{2k-1})\right)\), we have

\[|\widehat{K}_{w_{2k}}(x)-\widehat{K}_{w_{2k-1}}(x)|\lesssim\left(n\log N\left|\frac{x}{|x|}-\frac{x_{2k}}{|x_{2k}|}\right|\right)^{-1}. \tag{3.4}\]

Indeed, let \(e^{i\theta_{2k}}=\frac{x_{2k}}{|x_{2k}|}\), \(e^{i\gamma}=\frac{x}{|x|}\), and let \(A_{2k}\) be the interval in \((-\frac{\pi}{4},\frac{\pi}{4})\) such that \(I_{2k}-\frac{x_{2k}}{|x_{2k}|}=\{e^{i\theta}:\ \theta\in A_{2k}\}\). By using the mean value theorem twice and the fact that \(|\theta_{2k}-\theta_{2k-1}|\) is small, we have

\[|\widehat{K}_{w_{2k}}(x)-\widehat{K}_{w_{2k-1}}(x)|\lesssim c_{I_{2k}}\int_{A_{2k}}\left(\log\frac{1}{|\tan(\theta+\theta_{2k}-\gamma)|}-\log\frac{1}{|\tan(\theta+\theta_{2k-1}-\gamma)|}\right)\ d\theta\lesssim c_{I_{2k}}\int_{A_{2k}}\frac{|\tan(\theta+\theta_{2k}-\gamma)-\tan(\theta+\theta_{2k-1}-\gamma)|}{|\tan(\theta+\theta_{2k}-\gamma)|}\ d\theta\lesssim c_{I_{2k}}\int_{A_{2k}}\frac{|\theta_{2k}-\theta_{2k-1}|}{|\theta+\theta_{2k}-\gamma|}\ d\theta\lesssim\frac{c_{I_{2k}}}{n}\int_{A_{2k}}\frac{1}{|\gamma-\theta_{2k}|}\ d\theta\lesssim\left(n\log N\left|\frac{x}{|x|}-\frac{x_{2k}}{|x_{2k}|}\right|\right)^{-1},\]

where we have used \(|\gamma-\theta_{2k}|\leq 2|\theta+\theta_{2k}-\gamma|\) and \(\tan\theta\sim\theta\) away from odd multiples of \(\frac{\pi}{2}\). Now we return to the proof of the claim (3.3). For \(1\leq k\leq n\), we have

\[\widehat{K}_{\Omega_{n}}(x_{2k})=\widehat{K}_{w_{2k}}(x_{2k})\epsilon_{k}+\sum_{1\leq i\neq 2k\leq 2n}(-1)^{i}\epsilon_{\left[\frac{i+1}{2}\right]}\widehat{K}_{w_{i}}(x_{2k})=D\epsilon_{k}+\delta_{k},\]

where \(D=\widehat{K}_{w_{2k}}(x_{2k})\) and \(\delta_{k}=\sum\limits_{1\leq i\neq 2k\leq 2n}(-1)^{i}\epsilon_{\left[\frac{i+1}{2}\right]}\widehat{K}_{w_{i}}(x_{2k})\).
Using (3.1) for the term \(i=2k-1\) and (3.4) for the remaining terms (in pairs), we get \[|\delta_{k}|\leq C\left(\frac{\log n}{\log N}+\frac{1}{\log N}\sum_{i=1}^{2n}\frac{1}{i}\right)\leq\frac{C^{\prime}\log n}{\log N}\leq\frac{|D|}{4}\text{ (for large $n$)}.\] Hence, by the choice of the \(\epsilon_{k}\) in Lemma 3.1, we have \[\frac{1}{2}\|\{\dots,0,\epsilon_{1},\epsilon_{2},\dots,\epsilon_{n},0,\dots\}\|_{M^{p}(\mathbb{T})}\geq\left\|\left\{\dots,0,\frac{\delta_{1}}{D},\frac{\delta_{2}}{D},\dots,\frac{\delta_{n}}{D},0,\dots\right\}\right\|_{M^{p}(\mathbb{T})}.\] Since \(\widehat{K}_{\Omega_{n}}(\theta)\) is a circular convolution of an \(L^{1}(\mathbb{S}^{1})\) function and an \(L^{\infty}(\mathbb{S}^{1})\) function, it is continuous at the points \(x_{2k},\ k=1,\dots,n\), and applying Lemma 3.2, we have \[\|T_{\Omega_{n}}\|_{L^{p}(\mathbb{R}^{2})\to L^{p}(\mathbb{R}^{2})} = \|\widehat{K}_{\Omega_{n}}\|_{M^{p}(\mathbb{R}^{2})}\] \[\gtrsim \|\{\dots,0,\widehat{K}_{\Omega_{n}}(x_{2}),\widehat{K}_{\Omega_{n}}(x_{4}),\dots,\widehat{K}_{\Omega_{n}}(x_{2n}),0,\dots\}\|_{M^{p}(\mathbb{T})}\] \[\gtrsim |D|\left(\|\{\dots,0,\epsilon_{1},\epsilon_{2},\dots,\epsilon_{n},0,\dots\}\|_{M^{p}(\mathbb{T})}-\left\|\left\{\dots,0,\frac{\delta_{1}}{D},\frac{\delta_{2}}{D},\dots,\frac{\delta_{n}}{D},0,\dots\right\}\right\|_{M^{p}(\mathbb{T})}\right)\] \[\geq \frac{|D|}{2}\|\{\dots,0,\epsilon_{1},\epsilon_{2},\dots,\epsilon_{n},0,\dots\}\|_{M^{p}(\mathbb{T})}\] \[\gtrsim n^{|\frac{1}{2}-\frac{1}{p}|},\] where we used Lemma 3.1 in the last step. We conclude the proof by an application of the uniform boundedness principle. Indeed, we define the space \[\mathfrak{B}:=\Big\{\Omega:\mathbb{S}^{1}\to\mathbb{R}\text{ even}:\ \int\Omega=0\text{ and }\|\Omega\|_{\mathfrak{B}}:=\|\Omega\|_{\Phi(L)(\mathbb{S}^{1})}+\|m(\Omega)\|_{L^{\infty}(\mathbb{S}^{1})}<\infty\Big\}.\] By the estimates above, \(\|\Omega_{n}\|_{\mathfrak{B}}\lesssim\log n\), while (3.3) gives \(\|T_{\Omega_{n}}\|_{L^{p}(\mathbb{R}^{2})\to L^{p}(\mathbb{R}^{2})}\gtrsim n^{|\frac{1}{2}-\frac{1}{p}|}\). Hence the normalized family \(\Omega_{n}/\|\Omega_{n}\|_{\mathfrak{B}}\) has unbounded operator norms as \(n\to\infty\), and the uniform boundedness principle yields an \(\Omega\in\mathfrak{B}\) for which \(T_{\Omega}\) fails to be bounded on \(L^{p}(\mathbb{R}^{2})\) for \(p\neq 2\).

## Acknowledgement

I would like to thank Prof. Parasar Mohanty and Prof. Adimurthi for various useful discussions regarding the problem. I acknowledge the financial support from Science and Engineering Research Board, Department of Science and Technology, Govt. of India, under the scheme Core Research Grant, file no. CRG/2021/000230.
2310.13026
Weakly-Supervised Semantic Segmentation with Image-Level Labels: from Traditional Models to Foundation Models
The rapid development of deep learning has driven significant progress in the field of image semantic segmentation - a fundamental task in computer vision. Semantic segmentation algorithms often depend on the availability of pixel-level labels (i.e., masks of objects), which are expensive, time-consuming, and labor-intensive. Weakly-supervised semantic segmentation (WSSS) is an effective solution to avoid such labeling. It utilizes only partial or incomplete annotations and provides a cost-effective alternative to fully-supervised semantic segmentation. In this paper, we focus on the WSSS with image-level labels, which is the most challenging form of WSSS. Our work has two parts. First, we conduct a comprehensive survey on traditional methods, primarily focusing on those presented at premier research conferences. We categorize them into four groups based on where their methods operate: pixel-wise, image-wise, cross-image, and external data. Second, we investigate the applicability of visual foundation models, such as the Segment Anything Model (SAM), in the context of WSSS. We scrutinize SAM in two intriguing scenarios: text prompting and zero-shot learning. We provide insights into the potential and challenges associated with deploying visual foundational models for WSSS, facilitating future developments in this exciting research area.
Zhaozheng Chen, Qianru Sun
2023-10-19T07:16:54Z
http://arxiv.org/abs/2310.13026v1
# Weakly-Supervised Semantic Segmentation with Image-Level Labels: from Traditional Models to Foundation Models

###### Abstract

The rapid development of deep learning has driven significant progress in the field of image semantic segmentation--a fundamental task in computer vision. Semantic segmentation algorithms often depend on the availability of pixel-level labels (i.e., masks of objects), which are expensive, time-consuming, and labor-intensive. Weakly-supervised semantic segmentation (WSSS) is an effective solution to avoid such labeling. It utilizes only partial or incomplete annotations and provides a cost-effective alternative to fully-supervised semantic segmentation. In this paper, we focus on the WSSS with image-level labels, which is the most challenging form of WSSS. Our work has two parts. First, we conduct a comprehensive survey on traditional methods, primarily focusing on those presented at premier research conferences. We categorize them into four groups based on where their methods operate: pixel-wise, image-wise, cross-image, and external data. Second, we investigate the applicability of visual foundation models, such as the Segment Anything Model (SAM), in the context of WSSS. We scrutinize SAM in two intriguing scenarios: text prompting and zero-shot learning. We provide insights into the potential and challenges associated with deploying visual foundational models for WSSS, facilitating future developments in this exciting research area.

## 1 Introduction

Semantic segmentation serves as a fundamental task in the field of computer vision. This task involves the labeling of each pixel within an image with semantically meaningful labels that correspond to specific objects present. Semantic segmentation has broad applicability, with use cases ranging from autonomous driving and scene comprehension to medical image analysis. By enabling machines to extract rich semantic information from images, it allows them to understand the visual world in a manner akin to human perception. However, the high variability and complexity of real-world scenes [13], coupled with the requirement of extensive labeled data for training deep-learning models, make semantic segmentation a challenging task. To address these challenges, various approaches have been proposed, including fully convolutional networks [56], encoder-decoder architectures [8, 9], and attention mechanisms [93]. These techniques have significantly advanced the state-of-the-art in semantic segmentation, making it a highly active and exciting research area. Fully-supervised semantic segmentation requires a large number of labeled images for training. In contrast, weakly-supervised semantic segmentation (WSSS) uses only partial or incomplete annotations to learn the segmentation task. This makes the weakly-supervised approach more feasible for real-world applications, where obtaining large amounts of fully labeled data can be prohibitively expensive or time-consuming. Weakly-supervised methods typically rely on various forms of supervision, such as image-level labels [31, 38, 75], scribbles [49, 73], or bounding boxes [14, 40, 67], to guide the segmentation process. Weakly-supervised techniques have shown remarkable progress in recent years, and they represent a promising direction for the future development of semantic segmentation algorithms. In this context, WSSS with image-level class labels is the most challenging and popular form of WSSS.
In WSSS with image-level class labels, the only form of supervision provided is the class label for the entire image rather than for each individual pixel. The challenge is to use this limited information to learn the boundaries of objects and accurately segment them. To address this challenge, the Class Activation Map (CAM) [94] has emerged as a powerful technique in WSSS. CAM provides a way to visualize the areas of an image that are most relevant to a particular class without requiring pixel-level annotations. CAM is computed from a classification model by weighting the feature maps with the learned weights of the last fully connected layer, resulting in a heat map that highlights the most discriminative regions of an image. However, CAM often fails to capture the complete extent of an object, as it only highlights the most discriminative parts and leaves out other important regions. Most WSSS research focuses on generating more complete CAMs. We categorize these methods into four distinct groups, each defined by the level at which they operate:

* Pixel-wise methods. These are methods that operate at the pixel level, employing strategies such as the usage of pixel-wise loss functions or the exploitation of pixel similarity and local patches to generate more accurate CAMs.
* Image-wise methods. This category includes methods operating on a whole-image level. Key methods encompass adversarial learning, context decoupling, consistency regularization, and the implementation of novel loss functions.
* Cross-image methods. These methods operate beyond a single image, extending their functions across pairs or groups of images. In some scenarios, they may cover the full extent of the dataset.
* Methods with external data. These are methods that utilize additional data sources beyond the training datasets, such as saliency maps and out-of-distribution data, to help the model better distinguish the co-occurring background cues.

In addition to these traditional methodologies in WSSS, our study also delves into the applicability and efficacy of recent foundation models. The foundation models, including GPT-3 [3], CLIP [61], and SAM [30], have had a profound impact on both computer vision and natural language processing. This impact is largely attributed to their dependence on extensive data and the utilization of billions of model parameters. Among them, the Segment Anything Model (SAM) [30] is specially crafted for the segmentation field. SAM introduces a new promptable segmentation task that supports various types of prompts, such as points, bounding boxes, and textual descriptions. It leverages a Transformer model [17] trained on the extensive SA-1B dataset (comprising over 1 billion masks derived from 11 million images), which gives it the ability to handle a wide range of scenes and objects. SAM is remarkable for its capability to interpret diverse prompts and successively generate various object masks. In this survey, we assess the potential of SAM in WSSS by exploring two distinct settings: text input and zero-shot learning. In the text input setting, we first employ the Grounding DINO model [53] to generate bounding boxes of the target objects and then feed the bounding boxes and the image into SAM to yield the masks. In the zero-shot setting, where class labels are assumed to be absent (mirroring the validation process in WSSS), we first employ the Recognize Anything Model (RAM) [90] to identify class labels.
Subsequently, as in the text input setting, Grounding DINO is used to obtain the bounding boxes, and SAM is used to obtain masks. A notable difference between the two settings lies in their subsequent steps. In the text input setting, it is necessary to train a segmentation model, such as DeepLabV2 [8], to produce masks for the validation and test sets. In contrast, the zero-shot approach obviates the need for training an additional semantic segmentation model. We compare the performance of the traditional methods and foundation models on the MS COCO [50] validation set. As shown in Figure 1, the traditional methods reach a noticeable plateau in performance, and the introduction of SAM has significantly enhanced the performance outcomes. In Section 5, we provide a comprehensive performance comparison of traditional methods and foundation models, offering insights into the potential and challenges of deploying foundational models in WSSS.

Figure 1: The performance of recent WSSS works (CONTA [87], IRN [1], ReCAM [12], AMN [41], LPCAM [11], and CLIP-ES [51]) and an evaluation of foundation models on the MS COCO [50] val set.

The paper is organized as follows: Section 2 introduces the preliminaries for the WSSS task. Section 3 introduces the traditional models in WSSS. Section 4 introduces the applicability of visual foundation models in WSSS. Section 5 provides a comprehensive comparison of the performance of the traditional models and the application of foundation models. We conclude in Section 6.

## 2 Preliminaries

### Problem Statement

Fig. 2 illustrates the general pipeline for WSSS with image-level class labels for both traditional and foundation models. As shown in Fig. 2 (a), the traditional process begins with the training of a multi-label classification model using training images annotated with image-level class labels. Following this, we infer the class-specific seed areas for each image through the application of the Class Activation Map (CAM) [94] to the classification model. This results in a set of preliminary masks that undergo further refinement to produce the final pseudo masks. These pseudo masks then serve as the pseudo ground truth, enabling the training of a conventional fully supervised semantic segmentation model (e.g., DeepLabV2 [8]). Fig. 2 (b) shows the pipeline of applying foundation models in WSSS. Given the images and their class labels, we first leverage Grounding DINO [53] to generate bounding boxes using text prompts. Then, we feed these bounding boxes into SAM to produce corresponding segmentation masks. Similar to the traditional methods, we can train a fully supervised semantic segmentation model using the produced masks. In another setting, when class labels for images are absent, RAM [90] can be deployed to produce tags for the images. Following this, the aforementioned pipeline can be applied to generate masks. Notably, there's no need to train a fully supervised segmentation model in this context, as masks can be predicted without class labels.

### Class Activation Map (CAM)

Class Activation Map (CAM) [94] is a simple yet effective technique employed to identify the regions within an image that a CNN leverages to identify a specific class present in that image. It is calculated by multiplying the classifier weights with the image features, as detailed below. In the multi-label classification model, Global Average Pooling (GAP) is utilized, followed by a prediction layer.
To compute the prediction loss on each training example, the Binary Cross-Entropy (BCE) function is employed, as detailed in the following formula: \[\mathcal{L}_{bce}=-\frac{1}{K}\sum_{k=1}^{K}\Big(y[k]\log\sigma\left(z[k]\right)+(1-y[k])\log\left(1-\sigma\left(z[k]\right)\right)\Big), \tag{1}\] where \(z[k]\) denotes the prediction logit of the \(k\)-th class, \(\sigma(\cdot)\) is the sigmoid function, and \(K\) is the total number of foreground object classes (in the dataset). \(y[k]\in\{0,1\}\) is the image-level label for the \(k\)-th class, where 1 denotes the class is present in the image and 0 otherwise. Once the classification model converges, we feed the image \(\mathbf{x}\) into it to extract the CAM of class \(k\) appearing in \(\mathbf{x}\): \[\mathrm{CAM}_{k}(\mathbf{x})=\frac{\mathrm{ReLU}\left(\mathbf{A}_{k}\right)}{\max\left(\mathrm{ReLU}\left(\mathbf{A}_{k}\right)\right)},\quad\mathbf{A}_{k}=\mathbf{w}_{k}^{\top}f(\mathbf{x}), \tag{2}\] where \(\mathbf{w}_{k}\) denotes the classification weights (e.g., the FC layer of a ResNet) corresponding to the \(k\)-th class, and \(f(\mathbf{x})\) represents the feature maps of \(\mathbf{x}\) before the GAP. As we mentioned in Section 1, CAM often struggles to capture the complete object, instead focusing on the most distinctive parts. As such, a significant portion of the works in WSSS aim to tackle this issue, endeavoring to produce more complete CAMs.

Figure 2: The pipeline of traditional methods and foundation models in WSSS. The dashed line means an optional step.

### Related Works

To the best of our knowledge, there are only a few related survey papers [66, 6] on WSSS. Chan et al. [6] focused on evaluating state-of-the-art weakly-supervised semantic segmentation methods (up until 2019) across diverse datasets, including those for natural scenes, histopathology, and satellite images. Shen et al. [66] provided a comprehensive review of label-efficient segmentation methods. They established a taxonomy classifying these methods based on the level of supervision, varying from no supervision to coarse, incomplete, and noisy supervision. Furthermore, they considered different types of segmentation problems like semantic segmentation, instance segmentation, and panoptic segmentation. In their work, WSSS methods (up until 2021) that utilize image-level class labels fall under the coarse supervision category, further subdivided into six distinct parts (i.e., seed area refinement by cross-label constraint, seed area refinement by cross-pixel similarity, seed area refinement by cross-view consistency, seed area refinement by cross-image relation, pseudo mask generation by cross-pixel similarity, and pseudo mask generation by cross-image relation). In comparison, our paper offers a novel perspective by proposing a new taxonomy to categorize traditional WSSS methods with image-level class labels. We also take a step further by exploring the applicability and effectiveness of recent foundation models in the WSSS context, providing up-to-date insights into this rapidly developing field.

## 3 Traditional Models

Traditional WSSS methods [12, 11, 13, 36, 38, 74, 75] predominantly employ convolutional neural networks (CNNs) [23, 78]. Recently, the Vision Transformer (ViT) [17] has emerged as a competitive alternative, achieving impressive results across multiple vision tasks [4, 55, 68, 72]. In fact, when pre-trained on extensive datasets, ViT can even surpass the state-of-the-art performance of CNNs.
This indicates that, similar to their success in natural language processing, Transformers can be a powerful tool in computer vision. ViT-based WSSS methods [62, 63, 51, 83] also achieve competitive results in WSSS. Most of the CNN-based and ViT-based methods are built upon CAM and developed to address the partial activation problem in CAM (as introduced in Section 1). We categorize those methods into four groups based on their operational level: pixel-wise, image-wise, cross-image, and external data.

### Pixel-Wise Methods

In WSSS, even though we are limited to image-level labels, several methods delve into information at the pixel level. Some methods, such as [43, 76, 41], derive pixel-level supervision signals from the image-level labels and then utilize them to optimize pixel-wise loss functions. Others, like [12, 29, 82, 62], explore the similarity among neighboring pixels. Expanding beyond individual pixels, certain approaches [59, 88, 27] harness the complementary information from local patches comprised of multiple pixels. Furthermore, other methods explore contrastive learning [63, 18] and graph convolutional networks [84] at the pixel level.

#### 3.1.1 Pixel-wise loss

In WSSS with image-level class labels, we lack pixel-level labels for direct network supervision. As a result, certain methods have been developed to generate pixel-level supervision using diverse strategies. A straightforward strategy is to utilize the seeds (possibly refined with a dense Conditional Random Field (dCRF) [32]) as noisy supervision. AMN [41] strives to increase the activation gap between the foreground and background regions. This ensures the resultant pseudo masks are robust to the global threshold values utilized to separate the foreground and background. To achieve this, AMN develops an activation manipulation network equipped with a per-pixel classification loss function (balanced cross-entropy loss [25]), which is supervised by the confident regions within the refined seeds. SANCE [43] trains a model to predict both an object contour map and a segmentation map, supervised by noisy seeds and online labels simultaneously. This method employs noisy seeds as supervision for the segmentation branch and then refines the segmentation map to generate online labels that offer more accurate semantic supervision to the contour branch. Finally, they can generate more complete pseudo masks based on the segmentation map and the contour map. In SpatialBCE [76], the authors highlighted a drawback of the traditional BCE loss function: it calculates the average over the entire probability map (i.e., via global average pooling), thereby causing all pixels to be optimized in the same direction. This process reduces the discriminative ability between the foreground and background pixels. To address this issue, they proposed a spatial BCE loss function that optimizes foreground and background pixels in distinct directions. An adaptive threshold is employed to divide the foreground and background within the initial seeds. All three approaches share a crucial commonality: the use of a threshold to discern between foreground and background. This threshold stands as a pivotal determinant, given that it classifies each pixel as either background or foreground, thereby profoundly affecting the subsequent learning trajectory. The first two methods [41, 43] employ a fixed threshold, set as a hyper-parameter. In contrast, the latter approach [76] opts for a learnable threshold which can be optimized in the training process.
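To make the shared recipe concrete, below is a minimal PyTorch-style sketch of computing a CAM as in Eq. (2) and converting it into pixel-wise pseudo labels with fixed foreground/background thresholds. The threshold values and function names are illustrative assumptions, not taken from any particular paper.

```python
import torch
import torch.nn.functional as F

def compute_cam(features, fc_weight, class_idx):
    """CAM of Eq. (2): weight the feature maps by the classifier weights.

    features:  (C, H, W) feature maps before global average pooling
    fc_weight: (K, C) weights of the final fully connected layer
    """
    cam = torch.einsum("c,chw->hw", fc_weight[class_idx], features)
    cam = F.relu(cam)
    return cam / (cam.max() + 1e-8)  # normalize to [0, 1]

def cam_to_pseudo_labels(cam, class_idx, fg_thresh=0.45, bg_thresh=0.15):
    """Fixed-threshold seed generation: pixels above fg_thresh become the
    foreground class, pixels below bg_thresh become background (0), and the
    uncertain band in between is ignored (255) during training."""
    labels = torch.full_like(cam, 255, dtype=torch.long)
    labels[cam >= fg_thresh] = class_idx
    labels[cam <= bg_thresh] = 0
    return labels
```

Methods such as SpatialBCE would replace the fixed `fg_thresh`/`bg_thresh` above with a learnable threshold optimized jointly with the network.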
#### 3.1.2 Pixel similarity

These kinds of methods leverage the similarity between adjacent pixels to refine seeds. PSA [2], IRN [1], AuxSegNet [82], and AFA [62] propagate the object regions in the seed to semantically similar pixels in the neighborhood. This is achieved by a random walk [57] on a transition matrix where each element is an affinity score. PSA [2] incorporates an AffinityNet to predict semantic affinities between adjacent pixels, while IRN [1] includes an inter-pixel relation network to estimate class boundary maps, based on which it computes affinities. AuxSegNet [82] integrates non-local self-attention blocks, which capture the semantic correlations of spatial positions based on the similarities between the feature vectors of any two positions. The propagation of CAM in those methods can effectively reduce the false negatives in the original CAM. However, one potential limitation is that the random walk on the transition matrix can be time-intensive. Different from them, AFA [62] and SAS [29] leverage the pixel similarity from the inherent self-attention in the Transformer-based backbone. Specifically, Ru et al. [62] introduced an Affinity from Attention (AFA) module to learn semantic affinities from the multi-head self-attention (MHSA) in Transformers. To this end, they generate an initial CAM and then use it to compute pseudo affinity labels, representing pixel similarity. These pseudo affinity labels are subsequently utilized to guide the affinity prediction made by the MHSA. Kim et al. [29] proposed a super-pixel discovery method to find semantic-aware super-pixels based on the pixel-level feature similarity produced by a self-supervised vision transformer [5]. Then the super-pixels are utilized to expand the initial seed.

#### 3.1.3 Local patch

Moving beyond the scope of a single pixel, certain methods operate within the context of a small patch composed of a cluster of adjacent pixels. CPN [88] demonstrates that the self-information of the CAM of an image is less than or equal to the sum of the self-information of the CAMs obtained from a complementary patch pair. They split an image into two images with complementary patch regions and use the sum of the CAMs generated by the two images to mine out more foreground regions. L2G [27] employs a local classification network to extract attention from various randomly cropped local patches within the input image. Concurrently, it uses a global network to learn complementary attention knowledge across multiple local attention maps online. Different from CPN [88] and L2G [27], where the patches are randomly divided, RPIM [59] utilizes the superpixel approach to partition the input images into different regions. It then uses an inter-region spreading module to discover the relationship between different regions and merge the regions that belong to the same object into a whole semantic region.

#### 3.1.4 Other pixel-wise methods

Other methods operate at the pixel level but do not align squarely with the aforementioned categories. Fan et al. [20] proposed an intra-class discriminator (ICD) that is dedicated to separating the foreground and the background pixels within each image-level class. Such an intra-class discriminator is similar to a binary classifier for each image-level class, which discriminates between the foreground and background pixels.
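Before moving on, the affinity-based propagation of Section 3.1.2 can be made concrete with a short sketch. The normalization convention and hyper-parameters below are illustrative assumptions rather than the exact formulation of PSA, IRN, or AuxSegNet.

```python
import torch

def propagate_cam(cam, affinity, n_iters=16, beta=8):
    """Random-walk refinement of a CAM over a pixel-affinity matrix.

    cam:      (K, H, W) class activation maps for K classes
    affinity: (HW, HW) non-negative pairwise affinity scores
    """
    k, h, w = cam.shape
    trans = affinity.pow(beta)                               # sharpen affinities
    trans = trans / (trans.sum(dim=1, keepdim=True) + 1e-8)  # row-normalized transition matrix
    scores = cam.reshape(k, h * w).t()                       # (HW, K) per-pixel class scores
    for _ in range(n_iters):
        scores = trans @ scores                              # one random-walk step
    return scores.t().reshape(k, h, w)
```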
NSROM [84] performs graph-based global reasoning [64] on pixel-level feature maps to strengthen the classification network's ability to capture global relations among disjoint and distant regions. This helps the network activate the object features outside the salient area. DRS [28] takes a unique approach by attempting to suppress the discriminative regions, thereby redirecting attention to adjacent non-discriminative regions. This is accomplished by introducing suppression controllers (which can be either learnable or non-learnable) to each layer of the CNNs, controlling the extent to which the attention is focused on discriminative regions. Specifically, any activation value that exceeds a certain threshold (which can be fixed or learnable) multiplied by the maximum activation value is suppressed to that level. This methodology ensures that the model's focus is more evenly distributed across the whole image, rather than being concentrated solely on the most distinctive regions. PPC [18] is instantiated with a unified pixel-to-prototype contrastive learning formulation, which shapes the pixel embedding space through a prototype-based metric learning methodology. The core idea is to pull pixels toward their positive prototypes and push them away from their negative prototypes to learn discriminative dense visual representations. ToCo [63] devises a Class Token Contrast (CTC) module inspired by the capability of ViT's class tokens to capture high-level semantics. CTC uses reliable foreground and background regions within the initial CAM to derive positive and negative local images. The class tokens of these local images are then projected and contrasted with the global class token using the InfoNCE loss [58], aiding in differentiating low-confidence regions within the CAM.

### Image-Wise Methods

Image-wise methods are the most straightforward and have been the subject of numerous works. Researchers employing these methods have explored a diverse array of strategies. A considerable number of studies delve into adversarial learning [34, 35, 38, 71, 75, 85], context decoupling [69, 80, 81, 87], and consistency regularization [10, 74, 60]. Some tackle the problems of loss functions, introducing innovative solutions. Furthermore, several methods have emerged that focus on online attention accumulation [26], uncertainty estimation [47], and evaluating the CAM's coefficient of variation [48].

#### 3.2.1 Adversarial learning

The first family of methods leveraging adversarial learning is the adversarial erasing (AE) based methods. Wei et al. [75] proposed the first AE method, which discovers a small object region and erases the mined regions from the image. Then it feeds the image to the classification network again to drive the network to discover new and complementary object regions. Kweon et al. [34] proposed a class-specific AE framework that generates a class-specific mask for erasing by randomly sampling a single class to be erased (the target class) among the classes present in the image, in order to obtain more precise CAMs. Although AE methods expand the CAM by erasing the most discriminative regions, they often suffer from high computation costs due to the multiple feed-forward passes, and from over-expansion due to the lack of guidance on when to stop the erasing process. To address this, the AEFT method [85] reformulates the AE methods as a form of triplet learning.
Specifically, it designates the original image as an anchor image, the masked high-confidence regions of the CAM on the anchor image as a positive image, and another image (which shares no class overlap with the anchor image) as a negative image. It aims to minimize the distance between the anchor and the positive image in the feature space while simultaneously maximizing the distance between the anchor and the negative image. As a result, when the CAMs are over-expanded, the embedding from the low-confidence region includes less information about the objects in the image, making it challenging for the network to differentiate this less informative embedding from the negative embedding. Consequently, the expansion of CAMs is intuitively suppressed. ECS-Net [71] investigates a way to provide additional supervision for the classification network by utilizing predictions of erased images. It first erases high-response regions from images and generates new CAMs of those erased images. Then, it samples reliable pixels from the new CAM and applies their segmentation predictions as semantic labels to train the corresponding original CAM. Instead of erasing multiple times, ECS-Net only needs to erase once, avoiding introducing excessive noise. Instead of directly erasing the mined regions from the image, AdvCAM [38] perturbs the image along pixel gradients that increase the classification score of the target class. The result is that non-discriminative regions, which are nevertheless relevant to that class, gradually become involved in the CAM produced by the classification model. However, a notable drawback of AdvCAM is the computation of these gradients, which is computationally intensive and significantly slows down the process. Unlike all those methods, Kweon et al. [35] presented a framework that utilizes adversarial learning between a classifier and an image reconstructor. This method is inspired by the notion that no individual segment should be able to infer color or texture information from other segments if semantic segmentation is perfectly achieved. They introduced an image reconstruction task that aims to reconstruct one image segment from the remaining segments. The classifier is trained not only to classify the image but also to generate CAMs that accurately segment the image, while contending with the reconstructor. In the end, the quality of the CAMs is enhanced by jointly training the classifier and the reconstructor in an adversarial manner.

#### 3.2.2 Context decoupling

Some methods attempt to decouple the object from its surrounding context. For instance, Zhang et al. [87] proposed a structural causal model (CONTA) to analyze the causalities among images, their contexts, and class labels. Based on this, they developed a context adjustment method that eliminates confounding bias in the classification model, resulting in improved CAM. CDA [69] is a context decoupling augmentation technique that modifies the inherent context in which objects appear, thereby encouraging the network to remove reliance on the correlation between object instances and contextual information. Specifically, in the first stage, it uses off-the-shelf WSSS methods to obtain basic object instances with high-quality segmentation. In the second stage, these object instances are randomly embedded into raw images to form the new input images. These images then undergo online data augmentation training in a pairwise manner with the original input images.
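A minimal sketch of this copy-and-paste style of context decoupling is given below. The pasting policy (uniformly random location, hard mask compositing) is a simplifying assumption; `instance` and `instance_mask` are assumed to come from a first-stage WSSS segmentation as described above.

```python
import random
import torch

def decouple_augment(image, instance, instance_mask, n_pastes=1):
    """CDA-style augmentation: paste a pre-extracted object instance (with its
    binary mask) at random locations of another raw training image, so the
    object appears in new, decoupled contexts.

    image: (3, H, W); instance: (3, h, w); instance_mask: (h, w) with h<=H, w<=W
    """
    out = image.clone()
    _, H, W = image.shape
    _, h, w = instance.shape
    for _ in range(n_pastes):
        top = random.randint(0, H - h)
        left = random.randint(0, W - w)
        region = out[:, top:top + h, left:left + w]
        # keep background pixels, overwrite foreground pixels with the instance
        out[:, top:top + h, left:left + w] = torch.where(
            instance_mask.bool(), instance, region)
    return out
```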
Unlike CDA, which relies on pre-existing WSSS methods to separate background and foreground, BDM [81] utilizes saliency maps to generate a binary mask, cropping out images containing only the background or foreground for a given image. It subsequently applies consistency regularization to the CAMs derived from object instances seen in various scenes, thereby providing self-supervision for network training. Different from CDA and BDM, which apply a mask on the original image to decouple the foreground and background, Xie et al. [80] generate a class-agnostic activation map and disentangle the image representation into foreground and background representations. The disentangled representations are then used to create positive pairs (foreground-foreground or background-background representations) and negative pairs (foreground-background representations) across a group of images. Finally, using these constructed pairs, a contrastive loss function is applied to encourage the network to separate foreground and background effectively.

#### 3.2.3 Consistency regularization

These methods leverage consistency regularization to guide network learning. SEAM [74] employs consistency regularization on predicted CAMs from various transformed images to provide self-supervision for network learning. Similarly, SIPE [10] ensures the consistency between the general CAM and the proposed Image-Specific Class Activation Map (IS-CAM), which is derived from image-specific prototypes. Qin et al. [60] proposed an activation modulation and re-calibration scheme that leverages a spotlight branch and a compensation branch to provide complementary and task-oriented CAMs. The spotlight branch denotes the fundamental classification network, while the compensation branch contains an attention modulation module to rearrange the distribution of feature importance from the channel-spatial sequential perspective to dig out the important but easily ignored regions. A consistency loss is employed between the CAMs produced by the two branches. The consistency regularization term is a versatile component that can be integrated into various network designs, provided there are coherent features or CAMs available. Typically presented as an auxiliary loss function, it enhances the model's robustness without adding significant computational demands. This makes it an attractive add-on for ensuring consistent feature representations across different stages of the model.

#### 3.2.4 Loss function

Some methods investigate the existing problems associated with the BCE loss function used in classification models and propose new loss functions to mitigate these issues. For instance, Lee et al. [36] highlighted that the final layer of a deep neural network, activated by sigmoid or softmax activation functions, often leads to an information bottleneck. To counter this, they proposed a novel loss function that eliminates the final non-linear activation function in the classification model while also introducing a new pooling method that further promotes the transmission of information from non-discriminative regions to the classification task. Similarly, Chen et al. [12] identified a problem with the widespread use of BCE loss -- it fails to enforce class-exclusive learning, often leading to confusion between similar classes. They proved the superiority of softmax cross-entropy (SCE) loss and suggested integrating SCE into the BCE-based model to reactivate the classification model.
Specifically, they masked the class-specific CAM on the feature maps and applied the SCE loss on the masked feature maps, thereby facilitating better class distinction. Both methods identify shortcomings in the prevailing BCE loss, specifically addressing the information bottleneck problem [36] and the class-exclusive learning problem [12]. They each introduce new loss functions to tackle these specific issues. Recognizing the pivotal role that the loss function plays in network learning, both methods contribute substantially to performance enhancements.

#### 3.2.5 Other image-wise methods

Other methods that operate at the image level do not conform precisely to the aforementioned categories. Jiang et al. [26] proposed an online attention accumulation (OAA) strategy that maintains a cumulative attention map for each target category in each training image to accumulate the different object parts discovered over time, so that the integral object regions can be gradually promoted as training proceeds. EDAM [77] masks the class-specific CAM on the feature maps and learns separate classifiers for each class. As the regions of background and irrelevant foreground objects are removed from the class-specific feature maps, the classification performance can be improved to some extent. PMM [48] computes the coefficient of variation for each channel of CAMs and then refines CAMs via exponential functions with the coefficient of variation as the control coefficient. This operation smooths the CAMs and could alleviate the partial response problem introduced by the classification pipeline. URN [47] simulates noisy variations of response by scaling the prediction map multiple times for uncertainty estimation. The uncertainty is then used to weigh the segmentation loss to mitigate noisy supervision signals. ESOL [44] employs an Expansion and Shrinkage scheme based on the offset learning in deformable convolution [15], to sequentially improve the recall and precision of the located object in the two respective stages. The Expansion stage aims to recover the entire object as much as possible, by sampling the exterior object regions beyond the most discriminative ones, to improve the recall of the located object regions. The Shrinkage stage excludes the false positive regions and thus further enhances the precision of the located object regions. Unlike traditional transformers that employ a single class token, MCTFormer [83] uses multiple class tokens to learn the interactions between these class tokens and the patch tokens. This allows the model to learn class-specific activation maps from the class-to-patch attention of various class tokens.

### Cross-Image Methods

Beyond a single image, certain connections often exist between different images within a dataset. Some methods explore these connections, whether they occur pair-wise [70, 21, 54], group-wise [80, 46, 89], or even on a dataset-wise scale [11, 95].

#### 3.3.1 Pair-wise

Some methods focus on capturing pairwise relationships between images. For instance, MCIS [70] employs two neural co-attentions in its classifier to capture complementary semantic similarities and differences across images. Given a pair of training images, one co-attention forces the classifier to recognize the common semantics from co-attentive objects, while the other drives the classifier to identify the unique semantics from the remaining, uncommon objects. This dual attention approach helps the classifier discover more object patterns and better ground semantics in image regions.
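A minimal sketch of such a pairwise co-attention block is given below, in the spirit of MCIS; the single learned projection and the normalization choices are illustrative assumptions rather than the exact architecture of the paper.

```python
import torch
import torch.nn as nn

class CoAttention(nn.Module):
    """Each pixel of one image attends over the feature map of the other, so
    that semantics shared by the image pair reinforce each other."""

    def __init__(self, channels):
        super().__init__()
        self.proj = nn.Linear(channels, channels, bias=False)

    def forward(self, f1, f2):
        # f1, f2: (C, H, W) feature maps of a pair of training images
        c, h, w = f1.shape
        x1, x2 = f1.reshape(c, h * w), f2.reshape(c, h * w)
        # pairwise affinity between all pixel pairs of the two images
        p = self.proj(x1.t()) @ x2                    # (HW, HW)
        a1 = x2 @ torch.softmax(p, dim=1).t()         # f1 re-expressed via f2
        a2 = x1 @ torch.softmax(p, dim=0)             # f2 re-expressed via f1
        return a1.reshape(c, h, w), a2.reshape(c, h, w)
```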
Similarly, Fan et al. [21] proposed an end-to-end cross-image affinity module designed to gather supplementary information from related images. Specifically, it builds pixel-level affinities across different images, allowing incomplete regions to glean additional information from other images. This approach results in more comprehensive object region estimations and mitigates ambiguity. Lastly, MBMNet [54] utilizes a parameter-shared siamese encoder to encode the representation of paired images and models their feature representation with a bipartite graph. They find the maximum bipartite matching between the graph nodes to determine relevant feature points in two images, which are then used to enhance the corresponding representations.

#### 3.3.2 Group-wise

Some methods attempt to model more complex relationships within a group of images. For instance, Group-WSSS [46] explicitly models semantic dependencies in a group of images to estimate more reliable pseudo masks. Specifically, they formulate the task within a graph neural network (GNN) [64], which operates on a group of images and explores their semantic relations for more effective representation learning. Additionally, Zhang et al. [89] introduced a heterogeneous graph neural network (HGNN) to model the heterogeneity of multi-granular semantics within a set of input images. The HGNN comprises two types of sub-graphs: an external graph and an internal graph. The external graph characterizes the relationships across different images, aiming to mine inter-image contexts. The internal graph, which is constructed for each image individually, is used to mine inter-class semantic dependencies within each individual image. Through heterogeneous graph learning, the network can develop a comprehensive understanding of object patterns, leading to more accurate semantic concept grounding. In these methods, it is important to determine the number of images in a group. There's a delicate balance between capturing meaningful semantic relationships and introducing noise by grouping images. Both approaches utilize groups of \(4\) images. As the number of images increases, the benefits from additional semantic cues plateau, while the noise introduced can lead to a decline in performance.

#### 3.3.3 Dataset-wise

Beyond a group of images, some methods delve into the semantic connections present in the entire dataset. Chang et al. [7] introduced a self-supervised task leveraging sub-category information. To be more specific, for each class, they perform clustering on all local features (features at each spatial pixel position in the feature maps) within that class to generate pseudo sub-category labels. They then construct a sub-category objective that assigns the network a more challenging classification task. Similarly, LPCAM [11] also performs clustering on local features. However, instead of creating a sub-category objective, LPCAM utilizes the clustering centers, also known as local prototypes, as a non-biased classifier to compute the CAM. Since these local prototypes contain rich local semantics like the "head", "leg", and "body" of a "sheep", they alleviate the problem that the classifier weights used to compute the CAM capture only the most discriminative features of objects. Zhou et al. [95] proposed the Regional Semantic Contrast and Aggregation (RCA) method for dataset-level relation learning.
RCA uses a regional memory bank to store a wide array of object patterns that appear in the training data, offering strong support for exploring the dataset-level semantic structure. More specifically, the semantic contrast pushes the network to bring the embedding closer to the memory embedding of the same category while pushing away those of different categories. This contrastive property complements the classification objective for each individual image, thereby improving object representation learning. On the other hand, semantic aggregation enables the model to gather dataset-level contextual knowledge, resulting in more meaningful object representations. This is achieved through a non-parametric attention module which summarizes memory representations for each image independently. Compared to the pair-wise and group-wise methods, the dataset-wise methods can leverage more information from the whole dataset. These methods explore the object patterns on the whole dataset. These patterns are then captured and stored using mechanisms like "sub-category labels" [7], "local prototypes" [11], or "memory banks" [95], which can be later leveraged for model learning or CAM generation. These object patterns can effectively improve the CAM quality. Similar to group-wise methods, one challenge is the balance of meaningful semantic cues and potential noise. Thus, it always needs some operations, like selecting prototypes in LPCAM [11], to preserve useful object patterns and filter out noise. ### Methods with External data In addition to leveraging information within the dataset, some methods further employ external data resources to improve the classification model. These external resources can provide additional, diverse, and complementary information not present in the original dataset, helping to improve the model's overall performance. #### 3.4.1 Saliency map Saliency detection methods [24, 52, 91] generate saliency maps that distinguish between the foreground and the background in an image. Many WSSS methods [25, 26, 37, 70, 77, 95] exploit saliency maps as a post-processing step to refine the initial CAMs. Beyond their usage in post-processing, some methods employ saliency maps to aid model learning. For instance, both SSNet [86] and EPS [42] directly model the connection between saliency detection and WSSS. They minimize a saliency loss, which is defined as the pixel-wise difference between the actual saliency map and the estimated saliency map. However, the strategies they employ to estimate the saliency map of the input image differ. SSNet [86] uses a segmentation network to predict a pixel-level mask for each class and then aggregates the masks to create a saliency map. In contrast, EPS [42] designs a classifier to predict \(C+1\) classes, consisting of \(C\) target classes and one background class. They use \(C\) foreground activation maps and the background activation map to estimate the saliency map. These methods leverage a saliency detection model to generate the saliency maps, eliminating the need for extra human effort to annotate. However, there's no guarantee that the saliency maps they produce will be perfect. Consequently, inaccuracies in the saliency maps can impact the overall performance of the method. #### 3.4.2 Out-of-Distribution data Lee et al. [39] proposed to use Out-of-Distribution (OoD) data to address the issue of spurious correlation between foreground and background cues (e.g., "train" and "rail"). 
They collected their candidate OoD data, which do not include any foreground classes of interest, from another vision dataset OpenImages [33]. Taking the class "train" as an example, they initially select images in which the classification model's predicted probability for "train" exceeded 0.5. They then manually filter out images that contained a "train". The remaining images, which do not contain a "train" but have a high predicted probability for the class "train" in the classification model, can be used as out-of-distribution data. They assign out-of-distribution data with zero-vector labels (zero for all classes) and apply the common binary cross-entropy (BCE) loss for both in-distribution and out-of-distribution samples. The OoD data helps the model distinguish between in-distribution and out-of-distribution samples, thereby reducing false positive predictions in CAM. Utilizing OoD data can significantly enhance the model's ability to distinguish between objects and co-occurring background cues. However, the annotation of OoD data requires additional human effort. ## 4 Foundation Models ### Contrastive Language-Image Pre-Training Contrastive Language-Image Pre-Training (CLIP) [61] is designed to efficiently learn visual concepts from natural language supervision. The main innovation of CLIP is its use of a contrastive objective function, which is based on the principle that semantically similar inputs should be mapped to nearby points in the feature space, while semantically dissimilar inputs should be mapped to distant points. Specifically, CLIP is trained on a large dataset of image-text pairs using a contrastive loss that encourages the model to learn to map similar image and text representations close together in a joint feature space while pushing dissimilar representations apart. CLIP has emerged as a powerful tool due to its ability to associate much wider visual concepts in the image with their text labels in an open-world setting. Two works have harnessed the potential of CLIP in WSSS. CLIMS [79] utilizes the CLIP model as a text-driven evaluator. Specifically, it employs a CNN to generate initial CAMs and applies the CAMs (or reversed CAMs) on the image as masks to identify the object (or background) regions. It then leverages object text prompts (e.g., "a photo of a train") and class-related background text prompts (e.g., "a photo of railroad" for class "train") to compute matching losses with the masked object and background regions, respectively. These losses work to ensure both the correctness and completeness of the initial CAMs. Different from CLIMS [79], Lin et al. [51] investigated the ability of CLIP to localize different categories through only image-level labels without any additional training. To efficiently generate high-quality segmentation masks from CLIP, they propose a framework with special designs for CLIP. They introduce the softmax function into GradCAM [65] and define a class-related background set to enforce mutual exclusivity among categories, thus suppressing the confusion caused by non-target classes and backgrounds. Meanwhile, to take full advantage of CLIP, they re-explore text inputs under the WSSS setting and customize two text-driven strategies: sharpness-based prompt selection and synonym fusion. The results show that the CLIP-based framework can efficiently generate pseudo masks for semantic segmentation without further training and outperforms most traditional WSSS methods. 
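To illustrate the text-driven evaluation idea used by CLIMS, here is a minimal sketch built on the publicly released `clip` package; the prompts, masking scheme, and normalization are illustrative simplifications of the actual matching losses.

```python
import clip
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def clip_score(image, cam, prompts):
    """Score how well the CAM-activated region matches each text prompt.

    image: CLIP-preprocessed (1, 3, 224, 224) tensor on `device`
    cam:   (224, 224) activation map in [0, 1], resized to the image
    """
    masked = image * cam  # keep only the activated region
    with torch.no_grad():
        img_emb = model.encode_image(masked)
        txt_emb = model.encode_text(clip.tokenize(prompts).to(device))
    img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
    txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)
    return (img_emb @ txt_emb.t()).squeeze(0)  # cosine similarity per prompt

# e.g. clip_score(img, cam, ["a photo of a train", "a photo of railroad"])
```

In CLIMS, scores of this kind are turned into losses that reward matching the object prompt and penalize matching the class-related background prompt.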
### Segment Anything Model (SAM)

The Segment Anything Model (SAM) [30] is a recent image segmentation model exhibiting superior performance across various segmentation tasks. Different from the traditional semantic segmentation models [9, 68, 92], where the input is an image and the output is a mask, SAM introduces a new promptable segmentation task that supports various types of prompts, such as points, bounding boxes, and textual descriptions. It leverages a Transformer model [17] trained on the extensive SA-1B dataset (comprising over 1 billion masks derived from 11 million images), which gives it the ability to handle a wide range of scenes and objects. SAM is remarkable for its capability to interpret diverse prompts and successively generate various object masks. We investigate two settings to apply SAM in WSSS: text input (leveraging the class labels available for the train images in WSSS) and zero-shot (employed when only test data is available, given the absence of class labels for the val and test images in WSSS). The pipeline of the two settings is shown in Figure 3.

#### 4.2.1 SAM (text input)

We follow the general pipeline in traditional methods (as shown in Figure 2) to generate pseudo masks first and then train the fully supervised semantic segmentation model (DeepLabV2 [8]). To generate pseudo masks, we consider feeding image and text prompts (class labels) as inputs to SAM. However, the text prompt functionality of SAM is not currently open-sourced. To circumvent this limitation, as shown in Figure 3 (a), we utilize Grounded-SAM 1 in our experiments for WSSS. Grounded-SAM is a hybrid of Grounding DINO [53] and SAM [30], enabling the grounding and segmentation of objects via text inputs. In particular, Grounding DINO can generate grounded bounding boxes using text prompts. Following this step, we feed these grounded bounding boxes into SAM to produce corresponding segmentation masks. The pipeline of SAM (text input) is summarized as follows:

Footnote 1: [https://github.com/IDEA-Research/Grounded-Segment-Anything](https://github.com/IDEA-Research/Grounded-Segment-Anything)

* Step 1: Feed the train images and image-level class labels to Grounding DINO [53] to generate bounding boxes.
* Step 2: Feed the train images and generated bounding boxes to SAM [30] to generate masks.
* Step 3: Train the fully-supervised segmentation model (e.g. DeepLabV2 [8]) using the train images and generated masks.
* Step 4: Feed the test images to the segmentation model to generate masks.

#### 4.2.2 SAM (zero-shot)

Different from the general pipeline (as shown in Figure 2) used in traditional WSSS, we also endeavor to directly assess SAM's performance on the val and test images, which lack any text inputs. As illustrated in Figure 3 (b), we first utilize the image tagging model, the Recognize Anything Model (RAM) [90], to identify the objects within the image. RAM is a strong foundation model designed for image tagging. It demonstrates a strong zero-shot ability to recognize any category with high accuracy, surpassing the performance of both fully supervised models and existing generalist approaches like CLIP [61] and BLIP [45]. With the recognized category labels, we adhere to a pipeline similar to SAM (text input). The pipeline of SAM (zero-shot) is summarized as follows:

* Step 1: Feed the test images to RAM [90] to generate tags.
* Step 2: Feed the test images and generated tags to Grounding DINO [53] to generate bounding boxes.
* Step 3: Feed the test images and generated bounding boxes to SAM [30] to generate masks.

Figure 3: The pipeline of applying SAM in WSSS. All models except the fully-supervised segmentation model are kept frozen.

Compared to SAM (text input), SAM (zero-shot) incorporates an additional model (RAM) to identify objects within the image. Notably, the SAM (zero-shot) method eliminates the need for train images and circumvents the requirement, present in SAM (text input), to train a fully supervised segmentation model. This is because SAM (zero-shot) can predict masks even in the absence of class labels.

## 5 Methodological Comparison

In this section, we provide a comprehensive comparison between the traditional models introduced in Section 3 and the application of foundation models introduced in Section 4. We also offer insights into the potential and challenges of deploying foundational models in WSSS.

### Evaluation Protocol

**Datasets.** There are two benchmark datasets in WSSS: PASCAL VOC 2012 [19] and MS COCO 2014 [50]. The PASCAL VOC 2012 dataset contains \(20\) foreground object categories and \(1\) background category, with \(1,464\) train images, \(1,449\) val images, and \(1,456\) test images. All works use the enlarged training set with \(10,582\) training images provided by SBD [22]. The MS COCO 2014 dataset consists of \(80\) foreground categories and \(1\) background category, with \(82,783\) and \(40,504\) images in the train and val sets, respectively.

**Metrics.** There are two evaluation steps for WSSS -- the pseudo mask quality and the semantic segmentation performance. The pseudo mask quality is evaluated by the mean Intersection-over-Union (mIoU) of the generated pseudo masks and the corresponding ground truth masks on the train images. In terms of the semantic segmentation performance, we evaluate the mIoU between the predicted masks and the corresponding ground truth masks on both the val and test images.

**Implementation details.** Regarding SAM (text input), we load the default pretrained Grounded-SAM model (Swin-T [55] for Grounding DINO [53] and ViT-H [17] for SAM [30]). After generating pseudo masks, we train DeepLabV2 [8] with an ImageNet [16] pretrained ResNet-101 [23]. In the SAM (zero-shot) setting, we load the pre-trained RAM-14M [90] model based on Swin-T [55]. To align the tags from RAM with the class labels in the VOC dataset, we establish a mapping strategy due to the different terminologies used by the models and datasets. Specifically, we consider "couch" from RAM to correspond with "sofa" in the VOC dataset, "plane" with "aeroplane", "plant" with "potted plant", and "monitor", "screen", and "television" with "TV monitor". Additionally, we map "person", "man", "woman", "boy", "girl", "child", and "baby" tags from RAM to the "person" class in VOC. This strategy ensures comparable results between SAM and the ground truth annotations of the VOC dataset. On the MS COCO dataset, we adopt a similar strategy for aligning the RAM tags with the class labels; since MS COCO has a larger and more diverse range of classes, we refer the reader to our provided code for the full mapping 2.

Footnote 2: [https://github.com/zhaozhengChen/SAM_WSSS](https://github.com/zhaozhengChen/SAM_WSSS)

For the DeepLabV2 [8] in the step of fully supervised semantic segmentation, following [1, 12, 36], we crop each training image to the size of 321x321. We train the model for 20k and 100k iterations on the VOC and MS COCO datasets, respectively, with respective batch sizes of 5 and 10.
### Results of Traditional Models

Table 1 (Rows 1-51) presents the performance of the traditional WSSS methods on the VOC and MS COCO datasets. To make a methodological comparison, we compile a summary of important factors such as the venue of the original research publication, the classification backbone used, whether the method utilizes a saliency map, the semantic segmentation backbone deployed, and the source of the pre-trained parameters.

### Evaluation of SAM

**Quality of pseudo masks.** Both SAM (text input) and SAM (zero-shot) demonstrate outstanding performance. As shown in Table 1, on the VOC dataset, SAM (text input) achieves a mIoU score of \(86.4\%\) (Row 54), surpassing the state-of-the-art method CLIP-ES [51] by a significant margin of \(11.4\%\) (Row 53). Similarly, on the MS COCO dataset, it outperforms the state-of-the-art method LPCAM [11] by an impressive margin of \(17.6\%\) (Row 48). While SAM (zero-shot) (Row 55) may not perform quite as well as SAM (text input), it nevertheless outperforms the state-of-the-art traditional methods on both datasets.

**Semantic segmentation results of SAM (text input).** SAM (text input) surpasses traditional methods and closely approaches the performance of the fully-supervised DeepLabV2 [8]. As shown in Table 1, on the VOC dataset, although SAM (text input) significantly outperforms CLIP-ES in terms of the quality of pseudo masks, their segmentation performance gap is only \(1.6\%\) (Rows 53 and 54) on the val set and \(1.4\%\) on the test set. When compared with a fully-supervised method (considered as the upper bound), SAM (text input) remains competitive with a \(2.6\%\) gap (Rows 54 and 56) on the val set and \(2.8\%\) on the test set. This suggests that both CLIP-ES and SAM (text input) are capable of producing pseudo masks of sufficient quality to train the segmentation network effectively. Hence, further improvements in pseudo mask quality yield only marginal gains in segmentation performance. On the more challenging MS COCO dataset, we observe a more substantial gap. There is a \(5.4\%\) performance difference (Rows 53 and 54) between CLIP-ES and SAM (text input) on the val set, and a \(2.7\%\) gap (Rows 54 and 56) between SAM (text input) and the fully-supervised method. This could suggest that the increased complexity and diversity of the MS COCO dataset make it harder for WSSS methods to reach the performance level of fully-supervised methods.

**Semantic segmentation results of SAM (zero-shot).** For SAM (text input), we observe a significant gap in Row 54 between the pseudo mask quality on the train set and the segmentation performance on the val and test sets. This suggests that the performance might be constrained by the fully-supervised DeepLabV2 [8] segmentation model. In the zero-shot setting, as shown in Table 1, the results indicate that SAM (zero-shot) outperforms SAM (text input) (Rows 54 and 55) on both the val and test sets across both datasets. Furthermore, SAM (zero-shot) surpasses the fully-supervised DeepLabV2 model (as evident in the comparison between Rows 55 and 56) on the challenging MS COCO dataset, highlighting the strong capabilities of the foundation models for WSSS.
**Qualitative result.** Compared to the traditional method and the CLIP-based method, as shown in Figure 4 (a) and (b), both SAM (text input) and SAM (zero-shot) show higher mask quality in terms of clear boundaries, not only between the background and foreground but also among different objects. In the example of "train", SAM (text input) and SAM (zero-shot) successfully distinguish between "train" and "railroad", a challenge that many WSSS methods struggle with due to the co-occurrence of these elements. However, SAM (text input) and SAM (zero-shot) also suffer from the false negative and false positive problem when the tagging model (RAM [90]) or the grounding model (Grounding DINO [53]) wrongly tags or detects some objects. As shown in Figure 4 (c), SAM (text input) fails to identify a "bird" and mistakenly classifies a "cup" as a "bottle", and SAM (zero-shot) wrongly recognizes a "dining table". All these failures can be attributed to the grounding model or tagging model rather than SAM, suggesting that the capabilities of the grounding or tagging model could potentially limit the performance of SAM (text input). When compared to the human annotation, Figure 4 (b) highlights cases where SAM (text input) generates masks that even surpass the quality of human annotations. For instance, in the "person" example, the human annotation fails to include the person in the top left corner. In the "motorbike" example, the human annotation does not delineate the overlapping objects precisely, whereas SAM (text input) can produce precise boundaries in the overlapping scenarios. This suggests the strong capabilities of the foundation models in WSSS.

Figure 4: Visualization of pseudo masks generated by LPCAM [11], CLIP-ES [51], SAM (text input), and SAM (zero-shot) on VOC dataset. (a) Examples showcasing high-quality masks produced by both SAM (text input) and SAM (zero-shot). (b) Examples where SAM produced masks that even surpass the quality of the ground truth masks. (c) Examples illustrating the failure cases of SAM (text input) and SAM (zero-shot).

In complex scene examples from the MS COCO dataset, as illustrated in Figure 5 (a), both SAM (text input) and SAM (zero-shot) demonstrate a robust capability to annotate multiple smaller objects distinctly. For instance, in scenes with numerous "donuts" in close proximity, SAM can distinctly annotate each one, a proficiency similarly observed in the "person" example. Similar to the results on the VOC dataset, both SAM (text input) and SAM (zero-shot) also suffer from the false negative and false positive problem when the tagging model and grounding model wrongly tag or detect the objects. Figure 5 (b) shows a case where SAM (text input) fails to detect the "person" inside the "bus", while SAM (zero-shot) overlooks the "car" near the "bus".
These failures can also be attributed to the grounding model or tagging model rather than SAM.

**Per-class analysis.** We investigate the per-class pseudo mask quality of SAM (text input) on the VOC dataset. As detailed in Table 2, SAM (text input) consistently generates high mIoU scores for most classes related to animals and transportation. However, it fares relatively poorly for classes related to objects or items. We believe that the observed differences could be due to the contrasting nature of the scenes where these subjects are usually found. Specifically, images of animals and transportation modes predominantly portray outdoor scenes, which generally have simpler backgrounds. In contrast, images of objects/items are frequently set indoors and are often characterized by more intricate backgrounds. Similarly, SAM (zero-shot) displays performance comparable to SAM (text input) in most classes associated with animals and transportation. However, there is a substantial gap when it comes to complex indoor scenes. This implies that the tagging model may encounter challenges in recognizing objects within complex indoor settings.

**Failure cases analysis.** To delve deeper into the specifics, we turn our attention to the two categories with the lowest mIoU: bicycles and dining tables. As depicted in Figure 6, the unique annotation protocols of the VOC dataset come to the fore. Notably, the dataset annotations exclude the hollow sections of bicycle wheels and encompass items present on dining tables. In contrast, the results from SAM (text input) exhibit the opposite tendencies. Consequently, it becomes clear that this observed deviation is not a flaw inherent to the model. Instead, it is a discrepancy that emerges from the differences in annotation strategies employed during the creation of the dataset. Hence, it is essential to consider such factors when interpreting the performance of the model across different categories.

## 6 Conclusion

In this work, we embark on a deep exploration of weakly supervised semantic segmentation (WSSS) methods, and more specifically, the potential of using foundation models for this task. We first recapitulate the key traditional methods used in WSSS, outlining their principles and limitations. Then, we examine the potential of applying the Segment Anything Model (SAM) in WSSS in two scenarios: text input and zero-shot.

\begin{table} \begin{tabular}{l c c c c c c c c c c c c c c c c c c c c} \hline \hline & \multicolumn{6}{c}{Animals} & \multicolumn{7}{c}{Transportation} & \multicolumn{6}{c}{Objects/Items} & Person \\ \hline Text input & 90.0 & 94.3 & 94.5 & 94.8 & 91.1 & 93.9 & 97.4 & 96.8 & 65.2 & 89.6 & 90.2 & 86.5 & 84.1 & 96.1 & 91.0 & 73.8 & 48.9 & 73.8 & 78.4 & 88.9 \\ Zero-shot & 86.0 & 85.9 & 87.0 & 94.8 & 81.4 & 86.7 & 97.3 & 96.6 & 63.9 & 76.9 & 90.1 & 71.0 & 84.7 & 93.9 & 89.2 & 41.1 & 31.2 & 56.7 & 52.5 & 82.9 \\ \hline \hline \end{tabular} \end{table} Table 2: Per-class pseudo mask quality (mIoU) of SAM (text input) and SAM (zero-shot) on VOC dataset. We classify the 20 classes in the VOC dataset into three categories: “animals”, “transportation”, and “objects/items”, with an additional class for “person”.

Figure 6: Examples (including images, ground truth labels, and pseudo masks produced by SAM (text input)) of bicycles and dining tables on VOC dataset.
Our results demonstrate that SAM (text input) and SAM (zero-shot) significantly outperform traditional methods in terms of pseudo mask quality. In some instances, they even approach or surpass the performance of fully supervised methods. These results highlight the immense potential of foundation models in WSSS. Our qualitative analysis further reveals that the quality of pseudo masks generated by our methods often exceeds that of human annotations, demonstrating their capability to produce high-quality segmentation masks.

Our in-depth analysis brought to light some of the challenges in applying foundation models to WSSS. One such challenge is the discrepancy between the annotation strategies employed in the creation of the dataset and the models' segmentation approach, leading to divergences in performance. Also, we observed that the complexity of the scenes and the diversity of the categories in a dataset can affect the models' performance.

Despite the considerable success of the SAM approach, several avenues for future research emerge from this work. First, further improvements could be made to the grounding and tagging models, which could potentially boost SAM's performance. Second, the discrepancies identified between different annotation strategies suggest the necessity for a more standardized annotation protocol. Lastly, exploring the application of foundation models in WSSS on other, more challenging datasets could provide insights into their generalizability and robustness across diverse scenarios.

This research provides valuable insights into the WSSS task and highlights the promise of foundation models. We anticipate they will play an increasingly central role in advancing weakly supervised learning tasks.

**Acknowledgments** The author gratefully acknowledges the support of A*STAR under its AME YIRG Grant (Project No. A20E6c0101).
2303.04462
Poset Ramsey number $R(P,Q_n)$. III. Chain Compositions and Antichains
An induced subposet $(P_2,\le_2)$ of a poset $(P_1,\le_1)$ is a subset of $P_1$ such that for every two $X,Y\in P_2$, $X\le_2 Y$ if and only if $X\le_1 Y$. The Boolean lattice $Q_n$ of dimension $n$ is the poset consisting of all subsets of $\{1,\dots,n\}$ ordered by inclusion. Given two posets $P_1$ and $P_2$ the poset Ramsey number $R(P_1,P_2)$ is the smallest integer $N$ such that in any blue/red coloring of the elements of $Q_N$ there is either a monochromatically blue induced subposet isomorphic to $P_1$ or a monochromatically red induced subposet isomorphic to $P_2$. We provide upper bounds on $R(P,Q_n)$ for two classes of $P$: parallel compositions of chains, i.e.\ posets consisting of disjoint chains which are pairwise element-wise incomparable, as well as subdivided $Q_2$, which are posets obtained from two parallel chains by adding a common minimal and a common maximal element. This completes the determination of $R(P,Q_n)$ for posets $P$ with at most $4$ elements. If $P$ is an antichain $A_t$ on $t$ elements, we show that $R(A_t,Q_n)=n+3$ for $3\le t\le \log \log n$. Additionally, we briefly survey proof techniques in the poset Ramsey setting $P$ versus $Q_n$.
Christian Winter
2023-03-08T09:26:46Z
http://arxiv.org/abs/2303.04462v2
# Poset Ramsey number \(R(P,Q_{n})\). IV. Chain Compositions

###### Abstract

An induced subposet \((P_{2},\leq_{2})\) of a poset \((P_{1},\leq_{1})\) is a subset of \(P_{1}\) such that for every two \(X,Y\in P_{2}\), \(X\leq_{2}Y\) if and only if \(X\leq_{1}Y\). The Boolean lattice \(Q_{n}\) of dimension \(n\) is the poset consisting of all subsets of \(\{1,\ldots,n\}\) ordered by inclusion. Given two posets \(P_{1}\) and \(P_{2}\) the poset Ramsey number \(R(P_{1},P_{2})\) is the smallest integer \(N\) such that in any blue/red coloring of the elements of \(Q_{N}\) there is either a monochromatically blue induced subposet isomorphic to \(P_{1}\) or a monochromatically red induced subposet isomorphic to \(P_{2}\). We provide an upper bound on \(R(P,Q_{n})\) for two classes of \(P\): parallel compositions of chains, i.e. posets consisting of disjoint chains which are pairwise element-wise incomparable, as well as subdivided \(Q_{2}\), which are posets obtained from two parallel chains by adding a common minimal and a common maximal element. This completes the determination of \(R(P,Q_{n})\) for posets \(P\) with at most 4 elements. Additionally, we briefly survey proof techniques and tools in the poset Ramsey setting \(P\) versus \(Q_{n}\).

## 1 Introduction

### Basic setting and background

A _partially ordered set_, or _poset_ for short, is a pair \((P,\leq_{P})\) of a set \(P\) and a partial order \(\leq_{P}\) on this set, i.e. a binary relation that is transitive, reflexive and anti-symmetric. Usually we refer to a poset \((P,\leq_{P})\) just as \(P\). The elements of \(P\) are often called _vertices_. If two vertices \(A\) and \(B\) are _incomparable_, i.e. if \(A\not\leq B\) and \(A\not\geq B\), we write \(A\parallel B\). A poset \((P_{2},\leq_{2})\) is an _(induced) subposet_ of a poset \((P_{1},\leq_{1})\) if \(P_{2}\subseteq P_{1}\) and for every two \(X,Y\in P_{2}\), \(X\leq_{2}Y\) if and only if \(X\leq_{1}Y\). If such a \(P_{2}\) is isomorphic to some poset \(P^{\prime}\), we say that \(P_{2}\) is a _copy_ of \(P^{\prime}\) in \(P_{1}\). The _Boolean lattice_ \(Q_{n}\) is the poset whose vertices are the subsets of an \(n\)-element set ordered by inclusion.

In this paper we consider colorings of the vertices of posets. A _blue/red coloring_ of a poset \(P\) is a mapping \(c\colon P\to\{\mbox{blue},\mbox{red}\}\). We say that a poset is _monochromatic_ if all its vertices have the same color. If all vertices are blue, we say that the poset is _blue_. Similarly, if all vertices are red, the poset is _red_.

Ramsey-type problems are widely studied for graphs and hypergraphs and were extended to posets by Nesetril and Rodl [15] in a general form. A special case of their studies asks to find, for a fixed poset \(P\), a hosting poset \(W\) such that every blue/red coloring of the elements of \(W\) contains a monochromatic copy of \(P\). Kierstead and Trotter [12] considered this setting with the goal of minimizing \(p(W)\) for all posets \(P\) with fixed \(p(P)\), where \(p\) is a poset parameter such as size, height or width. Axenovich and Walzer [1] introduced a closely related Ramsey setting which recently attracted the attention of various researchers.
For fixed posets \(P_{1}\) and \(P_{2}\) the _poset Ramsey number_ of \(P_{1}\) versus \(P_{2}\) is \[R(P_{1},P_{2})=\min\{N\in\mathbb{N}\colon\text{ every blue/red coloring of $Q_{N}$ contains either}\\ \text{ a blue copy of $P_{1}$ or a red copy of $P_{2}$}\}.\] For the diagonal setting \(P_{1}=P_{2}=Q_{n}\), the bounds on \(R(Q_{n},Q_{n})\) were gradually improved to \(2n+1\leq R(Q_{n},Q_{n})\leq n^{2}-n+2\), see chronologically Axenovich and Walzer [1], Cox and Stolee [7], Lu and Thompson [13], and Bohman and Peng [4]. The asymptotic behaviour in terms of \(n\) remains an open problem. Note that for any \(P_{1}\) and \(P_{2}\) the poset Ramsey number is well-defined: It is easy to see that every two posets \(P_{1}\) and \(P_{2}\) are induced subposets of \(Q_{n}\) for large \(n\), thus an upper bound on \(R(Q_{n},Q_{n})\) implies the existence of \(R(P_{1},P_{2})\). For further results on diagonal poset Ramsey numbers \(R(P,P)\) see e.g. Chen et al. [6]. Another actively investigated setting of poset Ramsey numbers considers \(R(Q_{m},Q_{n})\) for \(m\) fixed and \(n\) large. It is trivial to see that \(R(Q_{1},Q_{n})=n+1\). In the case \(m=2\) it was shown that \(R(Q_{2},Q_{n})=n+\Theta\big{(}\frac{n}{\log(n)}\big{)}\) with upper bound due to Grosz, Methuku, and Tompkins [9] and lower bound due to Axenovich and the author [2]. For \(m\geq 3\) only rough estimates are known, see Lu and Thompson [13]. This open field of research as well as Erdos-Hajnal-type questions on posets motivated a detailed study of the off-diagonal poset Ramsey number \(R(P,Q_{n})\) for fixed \(P\) and large \(n\) which is presented in a series of papers [19], [20], [3] including the present paper. Other extremal problems on posets include rainbow Ramsey problems, see Chang et al. [5]; Turan-type, most notable see Methuku and Palvolgyi [14]; and saturation-type questions, which are discussed in a recent survey by Keszegh et al. [11]. ### Summary of results In order to understand the general framework of bounds on \(R(P,Q_{n})\) we consider two special posets. The \(V\)-shaped poset \(\mathcal{V}\) has three distinct vertices \(A,B\), and \(C\) with \(C\leq A\), \(C\leq B\), and \(A\parallel B\). Its symmetric counterpart is the poset \(\mathfrak{h}\) which has three distinct vertices \(A,B\), and \(C\) where \(C\geq A\), \(C\geq B\), and \(A\parallel B\). We say that a poset \(P\) is _non-trivial_ if \(P\) contains a copy of either \(\mathcal{V}\) or \(\mathfrak{h}\), otherwise we say that \(P\) is _trivial_. It was shown by Axenovich and the author [2] that two different asymptotic behaviours of \(R(P,Q_{n})\) emerge depending on whether the fixed \(P\) is trivial or not. Combined with a general lower bound of Walzer [18] and a general upper bound by Axenovich and Walzer [1] the following is known: **Theorem 1** ([1][2][18]).: _Let \(P\) be a trivial poset, then for every \(n\),_ \[n+h(P)-1\leq R(P,Q_{n})\leq n+h(P)+2\log(w(P))+2.\] _Let \(P\) be a non-trivial poset, then for sufficiently large \(n\),_ \[n+\tfrac{1}{15}\tfrac{n}{\log n}\leq R(P,Q_{n})\leq h(P)\cdot n+\dim_{2}(P).\] The poset parameters _height_\(h(P)\), _width_\(w(P)\) and \(2\)_-dimension_\(\dim_{2}(P)\) are formally introduced in Section 2.1. Throughout this paper log refers to the logarithm base \(2\). We remark that for trivial \(P\), [2] actually provides a slightly better upper bound than mentioned here. 
Collecting known results and providing some new bounds in this paper, we obtain bounds on \(R(P,Q_{n})\) which are asymptotically tight in the two leading additive terms for all posets \(P\) on at most \(4\) vertices (of which there are \(19\) up to symmetry), see Table 1. In every row of the table a poset \(P\) is defined by its Hasse diagram and labelled using the notation of this paper. Formal definitions of all posets are stated in Sections 1.3 and 2.1. Some posets have alternative names used in the literature; these are additionally mentioned in the table.

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline & **poset** \(P\) & & \(R(P,Q_{n})\) & **proof** \\ \hline \hline \(\bullet\) & \(C_{1}\) & \(=Q_{0}\) & \(n+0\) & trivial \\ \hline \hline **1** & \(C_{2}\) & \(=Q_{1}\) & \(n+1\) & Thm. 4 \\ \hline \(\bullet\) & \(A_{2}\) & \(=2C_{1}\) & \(n+2\) & [20] \\ \hline \hline **1** & \(C_{3}\) & & \(n+2\) & Thm. 4 \\ \hline **1** & \({\cal C}_{2,1}\) & & \(n+3\) & Thm. 4 \\ \hline **1** & \(A_{3}\) & \(=3C_{1}\) & \(n+3\) & [20] \\ \hline **V** & \({\cal V}\) & \(=K_{1,2}\) & \(n+\frac{cn}{\log(n)},\ \frac{1}{15}\leq c\leq 1+o(1)\) & [2] \\ \hline \hline **1** & \(C_{4}\) & & \(n+3\) & Thm. 4 \\ \hline **1** & \({\cal C}_{2,2}\) & \(=2C_{2}\) & \(n+3\) & Thm. 4 \\ \hline **.....** & \(A_{4}\) & \(=4C_{1}\) & \(n+3\) & [20] \\ \hline **1** & \({\cal C}_{3,1}\) & & \(n+4\) & Thm. 4 \\ \hline **1** & \({\cal C}_{2,1,1}\) & & \(n+4\) & Thm. 5 \\ \hline **V** & \({\cal V}+C_{1}\) & & \(n+\frac{cn}{\log(n)},\ \frac{1}{15}\leq c\leq 1+o(1)\) & [2] + Thm. 11 ([18]) \\ \hline **V** & \(K_{1,3}\) & \(=V_{3}\) & \(n+\frac{cn}{\log(n)},\ \frac{1}{15}\leq c\leq 1+o(1)\) & Thm. 1 ([2]) + [19] \\ \hline **N** & \({\cal N}\) & & \(n+\frac{cn}{\log(n)},\ \frac{1}{15}\leq c\leq 1+o(1)\) & Thm. 1 ([2]) + [3] \\ \hline **N** & \(Q_{2}\) & \(=K_{1,2,1}\) & \(n+\frac{cn}{\log(n)},\ \frac{1}{15}\leq c\leq 2+o(1)\) & Thm. 1 ([2]) + [9] \\ \hline **Y** & \(K_{1,1,2}\) & \(=Y\) & \(n+\frac{cn}{\log(n)},\ \frac{1}{15}\leq c\leq 2+o(1)\) & Thm. 1 ([2]) + [19] \\ \hline **V** & \({\cal J}\) & \(=Q_{2}^{-}\) & \(n+\frac{cn}{\log(n)},\ \frac{1}{15}\leq c\leq 2+o(1)\) & Thm. 1 ([2]) + Cor. 3 \\ \hline **N** & \(K_{2,2}\) & \(=\mathbb{M}\) & \(n+\frac{cn}{\log(n)},\ \frac{1}{15}\leq c\leq 4+o(1)\) & Thm. 1 ([2]) + [19] \\ \hline \end{tabular} \end{table} Table 1: Off-diagonal poset Ramsey bounds for small \(P\)

### New results

A _chain_ \(C_{t}\) of length \(t\) is a poset on \(t\) vertices forming a linear order. For \(s,t\in\mathbb{N}\), let \(\mathcal{SD}_{s,t}\) denote the \((s,t)\)_-subdivided diamond_, the poset obtained from two disjoint and element-wise incomparable chains of length \(s\) and \(t\), respectively, by adding a common minimal vertex and a common maximal vertex, i.e. a vertex which is smaller than all others and a vertex which is larger than all others, see Figure 1. Note that \(\mathcal{SD}_{1,1}=Q_{2}\). Our first result shows that for subdivided diamonds the lower bound obtained from Theorem 1 is asymptotically tight in a strong sense.

**Theorem 2**.: _Let \(s\) and \(t\) be fixed natural numbers. Then for sufficiently large \(n\),_

\[R(\mathcal{SD}_{s,t},Q_{n})\leq n+\frac{\big{(}2+o(1)\big{)}n}{\log n}.\]

Note that the poset \(\mathcal{J}\), as defined in the penultimate line of Table 1, is an induced subposet of \(\mathcal{SD}_{1,2}\).
Thus, Theorem 2 implies:

**Corollary 3**.: _For \(n\) sufficiently large, \(R(\mathcal{J},Q_{n})\leq R(\mathcal{SD}_{1,2},Q_{n})\leq n+\big{(}2+o(1)\big{)}\frac{n}{\log n}.\)_

Given a poset \(\mathcal{Q}\), two subposets \(P_{1},P_{2}\subseteq\mathcal{Q}\) are _parallel_ if they are element-wise incomparable. We denote by \(P_{1}+P_{2}\) the _parallel composition_ of two posets \(P_{1}\) and \(P_{2}\), that is the poset consisting of a copy of \(P_{1}\) and a copy of \(P_{2}\) which are disjoint and parallel. In the literature the parallel composition is also referred to as _independent union_. Note that this operation is commutative and associative. It is a simple observation that a poset is trivial if and only if it is a parallel composition of chains \(C_{t_{1}},\ldots,C_{t_{\ell}}\). We say that this is the _chain composition_ with parameters \(t_{1},\ldots,t_{\ell}\),

\[\mathcal{C}_{t_{1},t_{2},\ldots,t_{\ell}}=C_{t_{1}}+C_{t_{2}}+\cdots+C_{t_{\ell}}.\]

Throughout the paper we use the convention that \(t_{1}\geq t_{2}\geq\cdots\geq t_{\ell}\). Theorem 1 provides that \(n+t_{1}-1\leq R(\mathcal{C}_{t_{1},t_{2},\ldots,t_{\ell}},Q_{n})\leq n+t_{1}+\ell+1.\) Here we show the following.

**Theorem 4**.: _Let \(n,t_{1},t_{2}\in\mathbb{N}\) such that \(t_{1}\geq t_{2}\). Then_

\[R(C_{t_{1}},Q_{n})=n+t_{1}-1\qquad\text{and}\qquad R(\mathcal{C}_{t_{1},t_{2}},Q_{n})=n+t_{1}+1.\]

**Theorem 5**.: _Let \(n,t,t^{\prime}\in\mathbb{N}\) with \(t-1\geq t^{\prime}\). Then_

\[R(\mathcal{C}_{t,t-1,t^{\prime}},Q_{n})=n+t+2.\]

For trivial posets \(P\), a general Ramsey bound is given by Theorem 1. By applying a result of Habib, Nourine, Raynaud, and Thierry [10] to the construction given by Axenovich and the author [2], we obtain a slightly better estimate:

\[n+h(P)-1\leq R(P,Q_{n})<n+h(P)+\Big{(}\log(w(P))+\tfrac{1}{2}\log\log(w(P))+3\Big{)}.\]

Theorem 4 implies an improvement of this general bound.

**Corollary 6**.: _Let \(P\) be a trivial poset. If \(w(P)=1\), then \(R(P,Q_{n})=n+h(P)-1\). If \(w(P)\geq 2\), then_

\[n+h(P)+1\leq R(P,Q_{n})<n+h(P)+\Big{(}\log(w(P))+\tfrac{1}{2}\log\log(w(P))+1\Big{)}.\]

Here the upper bound is obtained by additionally applying Theorem 11, which is due to Walzer [18] and the aforementioned Habib, Nourine, Raynaud, and Thierry [10]. Note that every poset with \(w(P)=1\) is trivial. By Theorem 1, the lower bound for \(w(P)\geq 2\) also holds for non-trivial posets if \(n\) is large. However, for small \(n\) Corollary 6 does not extend to non-trivial posets; for example, it can easily be checked that \(R(\mathcal{V},Q_{1})=3<4\).

Our paper is structured as follows. In Section 2 we introduce basic definitions and notation, and discuss preliminary lemmata. Section 3 gives a proof of Theorem 2. In Section 4, we present proofs of Theorems 4 and 5. Finally, in Section 5 we summarize known proof techniques and tools of reduction for bounding off-diagonal poset Ramsey numbers and collect some open problems. In this paper we omit floors and ceilings where appropriate. The set of the first \(n\) natural numbers is denoted by \([n]=\{1,\ldots,n\}\).

## 2 Preliminaries

### Poset notation

In the previous section we stated formal definitions of the _Boolean lattice_ \(Q_{n}\), the \(V\)-shaped poset \(\mathcal{V}\), the _chain_ \(C_{t}\), the _\((s,t)\)-subdivided diamond_ \(\mathcal{SD}_{s,t}\) and the _chain composition_ \(\mathcal{C}_{t_{1},t_{2},\ldots,t_{\ell}}\). Besides those we use the following notation for posets. Examples of all mentioned posets are given in Table 1.
An _antichain_\(A_{\ell}\) on \(\ell\) elements is the poset \(\mathcal{C}_{1,\ldots,1}\) with \(\ell\) parameters \(t_{i}=1\), i.e. a set of \(\ell\) pairwise incomparable vertices. The _complete \(\ell\)-partite poset_\(K_{t_{1},\ldots,t_{\ell}}\) is a poset on \(\sum_{i=1}^{\ell}t_{i}\) many vertices defined as follows. For each index \(i\in[\ell]\), there is a set of \(t_{i}\) distinct vertices, called _layer_\(i\). Every pair of vertices in the same layer is incomparable. For any two vertices \(X\) and \(Y\) belonging to different layers \(i_{X}\) and \(i_{Y}\) with \(i_{X}<i_{Y}\), we have \(X\leq Y\). The poset \(\mathcal{N}\) consists of four distinct vertices \(A,B,C\), and \(D\) such that \(A\leq C\), \(C\geq B\), \(B\leq D\), \(A||B\), \(A||D\), and \(C||D\). The hook-shaped poset \(\mathcal{J}\) has distinct vertices \(A,B,C\), and \(D\) where \(B\leq C\leq D\), \(A\geq B\), \(A||C\), and \(A||D\). In this paper, we commonly consider a Boolean lattice \(Q_{n}\) with a specified ground set. Given a set \(\mathcal{X}\), the _Boolean lattice_\(\mathcal{Q}(\mathcal{X})\) is the poset on all subsets of \(\mathcal{X}\) equipped with set inclusion relation. The _dimension_ of \(\mathcal{Q}(\mathcal{X})\) is \(|\mathcal{X}|\). Let \(\mathcal{Q}\) be a Boolean lattice of dimension \(N\). For \(\ell\in\{0,\ldots,N\}\), we say that _layer_\(\ell\) of \(\mathcal{Q}\) is the set of elements \(\{Z\in\mathcal{Q}:\ |Z|=\ell\}\). Note that \(\mathcal{Q}\) consists of \(N+1\) layers and that each layer induces an antichain in \(\mathcal{Q}\). A blue/red coloring of a Boolean lattice is _layered_ if within each layer every vertex receives the same color. Let \(P\) be a poset. A vertex \(X\) is the _minimum_ of \(P\) if \(X\leq Z\) for every \(Z\in P\). Similarly, a _maximum_ is a vertex \(X\in P\) such that \(X\geq Z\) for every \(Z\in P\). The _height_\(h(P)\) of \(P\) is the size of the largest chain in \(P\). The _width_\(w(P)\) of \(P\) denotes the size of the largest antichain in \(P\). The _\(2\)-dimension_\(\dim_{2}(P)\) of \(P\) is the smallest dimension \(N\) of a Boolean lattice \(Q_{N}\) which contains a copy of \(P\). It is a basic observation that this poset parameter is well-defined. Note that \(h(Q_{n})=n+1\) and \(\dim_{2}(Q_{n})=n\). Moreover, Sperner [16] famously showed that the width of \(Q_{n}\) is \(w(Q_{n})=\binom{n}{\lceil n/2\rceil}\). ### Red Boolean lattice versus blue chain In this paper we often consider the Boolean lattice \(\mathcal{Q}(\mathcal{X}\cup\mathcal{Y})\) where \(\mathcal{X}\) and \(\mathcal{Y}\) are two disjoints sets with \(\mathcal{Y}\neq\varnothing\). We denote a linear ordering \(\tau\) of \(\mathcal{Y}\) where \(y_{1}<_{\tau}y_{2}<_{\tau}\dots<_{\tau}y_{k}\) by a sequence \(\tau=(y_{1},\dots,y_{k})\) implying that \(\mathcal{Y}=\{y_{1},\dots,y_{k}\}\). Given a linear ordering \(\tau=(y_{1},\dots,y_{k})\) of \(\mathcal{Y}\), a \(\mathcal{Y}\)_-chain_ corresponding to \(\tau\) is a \((k+1)\)-element chain \(C\) in \(\mathcal{Q}(\mathcal{X}\cup\mathcal{Y})\) on vertices \[C=\big{\{}X_{0}\cup\varnothing,X_{1}\cup\{y_{1}\},X_{2}\cup\{y_{1},y_{2}\}, \dots,X_{k}\cup\mathcal{Y}\big{\}},\] where \(X_{0}\subseteq X_{1}\subseteq\dots\subseteq X_{k}\subseteq\mathcal{X}\). Note that \(\mathcal{Y}\)-chains corresponding to distinct linear orderings of \(\mathcal{Y}\) are distinct. The following _Chain Lemma_ was proved implicitly by Grosz, Methuku and Tompkins [9]. 
Since it is a fundamental lemma in determining \(R(P,Q_{n})\), we restate its self-contained proof by following the lines of Axenovich and the author, see Lemma 8 in [2].

**Lemma 7** ([9] Chain Lemma).: _Let \(\mathcal{X}\) and \(\mathcal{Y}\) be disjoint sets with \(|\mathcal{X}|=n\) and \(|\mathcal{Y}|=k\). Let \(\mathcal{Q}(\mathcal{X}\cup\mathcal{Y})\) be a blue/red colored Boolean lattice. Fix a linear ordering \(\tau=(y_{1},\dots,y_{k})\) of \(\mathcal{Y}\). Then there exists in \(\mathcal{Q}(\mathcal{X}\cup\mathcal{Y})\) either a red copy of \(Q_{n}\), or a blue \(\mathcal{Y}\)-chain corresponding to \(\tau\)._

Proof.: We denote the first \(i\) elements of \(\mathcal{Y}\) with respect to \(\tau\) by \(\mathcal{Y}[0]=\varnothing\) and \(\mathcal{Y}[i]=\{y_{1},\dots,y_{i}\}\) for \(i\in[k]\). Assume that there does not exist a blue \(\mathcal{Y}\)-chain corresponding to \(\tau\). We show that there is a red copy of \(Q_{n}\) using the following characterisation of a _copy_: We say that an _embedding_ of a poset \((P,\leq_{P})\) in a poset \((Q,\leq_{Q})\) is a function \(\phi\colon P\to Q\) such that for any two \(A,B\in P\), \(A\leq_{P}B\) if and only if \(\phi(A)\leq_{Q}\phi(B)\). Note that the image of \(\phi\) is an induced subposet of \(Q\) which is isomorphic to \(P\), i.e. a copy of \(P\) in \(Q\). In the following, for every \(X\subseteq\mathcal{X}\), we recursively define a label \(\ell_{X}\in\{0,\dots,k\}\) such that the function

\[\phi\colon\mathcal{Q}(\mathcal{X})\to\mathcal{Q}(\mathcal{X}\cup\mathcal{Y}),\ \phi(X)=X\cup\mathcal{Y}[\ell_{X}]\]

is an embedding with monochromatic red image. The image of such an embedding is a red copy of \(Q_{n}\) as required. We choose labels \(\ell_{X}\), \(X\subseteq\mathcal{X}\), with the following properties:

* (i) For any \(U\subseteq X\), \(\ell_{U}\leq\ell_{X}\).
* (ii) There is a blue chain \(C^{X}\) of length \(\ell_{X}\) "below" the vertex \(X\cup\mathcal{Y}[\ell_{X}]\), i.e. in the Boolean lattice \(Q^{X}:=\mathcal{Q}(X\cup\mathcal{Y}[\ell_{X}])\) there is a blue \(\mathcal{Y}[\ell_{X}]\)-chain corresponding to the linear ordering \((y_{1},\dots,y_{\ell_{X}})\).
* (iii) The vertex \(X\cup\mathcal{Y}[\ell_{X}]\) is colored red.

First, consider the subset \(\varnothing\subseteq\mathcal{X}\). Let \(\ell_{\varnothing}\) be the smallest index \(\ell\), \(0\leq\ell\leq k\), such that the vertex \(\varnothing\cup\mathcal{Y}[\ell]\) is red. If there is no such \(\ell\), then the vertices \(\varnothing\cup\mathcal{Y}[0],\dots,\varnothing\cup\mathcal{Y}[k]\) form a blue \(\mathcal{Y}\)-chain corresponding to \(\tau\), a contradiction. It is immediate that Properties (i) and (iii) hold for \(\ell_{\varnothing}\). If \(\ell_{\varnothing}=0\), then (ii) holds trivially. If \(\ell_{\varnothing}\geq 1\), then the vertices \(\varnothing\cup\mathcal{Y}[0],\dots,\varnothing\cup\mathcal{Y}[\ell_{\varnothing}-1]\) form a blue chain of length \(\ell_{\varnothing}\) as required for (ii).

Now consider an arbitrary non-empty \(X\subseteq\mathcal{X}\) and suppose that for every \(U\subset X\) we already defined \(\ell_{U}\) with Properties (i)-(iii). Let \(\ell_{X}^{\prime}=\max_{\{U\subset X\}}\ell_{U}\) and fix some \(W\subset X\) with \(\ell_{W}=\ell_{X}^{\prime}\). Let \(C^{W}\) be the blue chain obtained by Property (ii) for \(W\). We define \(\ell_{X}\) as the smallest integer \(\ell\) with \(\ell^{\prime}_{X}\leq\ell\leq k\) such that the vertex \(X\cup\mathcal{Y}[\ell]\) is red in the coloring of \(\mathcal{Q}(\mathcal{X}\cup\mathcal{Y})\).
If such an \(\ell\) does not exist, i.e. if there is no red \(X\cup\mathcal{Y}[\ell]\), the vertices \(X\cup\mathcal{Y}[\ell^{\prime}_{X}],\ldots,X\cup\mathcal{Y}[k]\) form a blue chain of length \(k-\ell^{\prime}_{X}+1\). The chain \(C^{W}\) is a blue chain "below" \(W\cup\mathcal{Y}[\ell_{W}]\). Note that \(W\cup\mathcal{Y}[\ell_{W}]\subset X\cup\mathcal{Y}[\ell^{\prime}_{X}]\), thus both chains combine to a chain \(C^{X}\) of length \(k+1\). It is easy to see that \(C^{X}\) is a \(\mathcal{Y}\)-chain corresponding to \(\tau\), so we arrive at a contradiction. Thus, \(\ell_{X}\) is well-defined. It is immediate that Property (iii) holds for \(\ell_{X}\). Furthermore, for \(U\subset X\subseteq\mathcal{X}\) we have \(\ell_{U}\leq\ell^{\prime}_{X}\leq\ell_{X}\), thus (i) holds. It remains to verify Property (ii) for \(\ell_{X}\). Recall that \(W\subset X\) such that \(\ell_{W}=\ell^{\prime}_{X}\). If \(\ell_{X}=\ell^{\prime}_{X}\), the chain \(C^{X}:=C^{W}\) is as required. If \(\ell_{X}\neq\ell^{\prime}_{X}\), the chain \(C^{W}\) together with the vertices \(X\cup\mathcal{Y}[\ell^{\prime}_{X}],\ldots,X\cup\mathcal{Y}[\ell_{X}-1]\) is a blue chain of length \(\ell_{X}\), which verifies Property (ii).

We use the labels \(\ell_{X}\), \(X\subseteq\mathcal{X}\), to define an embedding of a Boolean lattice \(Q_{n}\) in \(\mathcal{Q}(\mathcal{X}\cup\mathcal{Y})\). Let \(\phi\colon\mathcal{Q}(\mathcal{X})\to\mathcal{Q}(\mathcal{X}\cup\mathcal{Y})\) with \(\phi(X)=X\cup\mathcal{Y}[\ell_{X}]\). Property (iii) implies that the image of \(\phi\) is colored monochromatically red. We show that for any two \(X_{1},X_{2}\subseteq\mathcal{X}\), it holds that \(X_{1}\subseteq X_{2}\) if and only if \(\phi(X_{1})\subseteq\phi(X_{2})\). Indeed, if \(\phi(X_{1})\subseteq\phi(X_{2})\), it is immediate that \(X_{1}=\phi(X_{1})\cap\mathcal{X}\subseteq\phi(X_{2})\cap\mathcal{X}=X_{2}\). Conversely, if \(X_{1}\subseteq X_{2}\), then by Property (i) we see that \(\ell_{X_{1}}\leq\ell_{X_{2}}\). Thus \(X_{1}\cup\mathcal{Y}[\ell_{X_{1}}]\subseteq X_{2}\cup\mathcal{Y}[\ell_{X_{2}}]\). Therefore, \(\phi\) is an embedding of \(\mathcal{Q}(\mathcal{X})\) with red image, so in particular \(\mathcal{Q}(\mathcal{X}\cup\mathcal{Y})\) contains a red copy of \(Q_{n}\).

The red copy of \(Q_{n}\) obtained in this lemma has a strong additional property with respect to \(\mathcal{X}\) which is not needed here. For further details we refer the reader to Lemma 8 of [2]. The following corollary is a simplified version of the Chain Lemma.

**Corollary 8**.: _Let \(n\) and \(k\) be positive integers. Let \(\mathcal{Q}\) be a blue/red colored Boolean lattice of dimension \(n+k\). Then \(\mathcal{Q}\) contains a red copy of \(Q_{n}\) or a blue chain of length \(k+1\)._

Note that Corollary 8 immediately implies the upper bound in the first statement of Theorem 4.

### Counting permutations

In this subsection we bound the number of permutations with a special property, in preparation for our proof of Theorem 2. A permutation \(\pi\colon[k]\to[k]\) is called _\(r\)-proper_ if for every \(j\in[k]\), \(|\{\ell\leq j:\pi(\ell)\geq j-1\}|\leq r\). If \(r\) is clear from the context we usually omit the parameter. For example, the permutation \(\hat{\pi}\) given by \((\hat{\pi}(1),\ldots,\hat{\pi}(k))=(k,1,3,4,5,\ldots,k-1,2)\), see Figure 2, is not \(1\)-proper because at \(j=3\), \(\{\ell\leq 3:\hat{\pi}(\ell)\geq 2\}=\{1,3\}\). However, \(\hat{\pi}\) is \(2\)-proper.

**Lemma 9**.: _Let \(r,k\in\mathbb{N}\).
There are at most \(2^{(r+\log r)k}\) distinct \(r\)-proper permutations \(\pi\colon[k]\to[k]\)._

Proof.: For an \(r\)-proper permutation \(\pi\), we say that an index \(i\in[k]\) is _bad_ if \(\pi(i)\geq i\), and _good_ if \(\pi(i)\leq i-1\). Let \(\mathcal{B}_{\pi}\) and \(\mathcal{G}_{\pi}\) denote the sets of indices that are bad and good, respectively, i.e. the sets partition \([k]\). Again considering the example \((\hat{\pi}(1),\ldots,\hat{\pi}(k))=(k,1,3,4,\ldots,k-1,2)\), we have that \(\mathcal{B}_{\hat{\pi}}=\{1\}\cup\{3,4,\ldots,k-1\}\) and \(\mathcal{G}_{\hat{\pi}}=\{2,k\}\). Given a proper permutation \(\pi\), the _proper restriction_ \(\rho\) of \(\pi\) is the restriction of \(\pi\) to its bad indices, i.e. \(\rho\colon\mathcal{B}_{\pi}\to[k]\) with \(\rho(i)=\pi(i)\) for every \(i\). Note that \(\rho\) does not depend on \(r\). For example, the proper restriction of \(\hat{\pi}\) is \(\hat{\rho}\colon[k-1]\setminus\{2\}\to[k]\) with \(\hat{\rho}(1)=k\) and \(\hat{\rho}(i)=i\) for \(3\leq i\leq k-1\). Observe that a function \(\rho\) can be the proper restriction of distinct proper permutations. Let \(\Pi\) be the set of all proper permutations \(\pi\colon[k]\to[k]\). If a function \(\rho\) is the proper restriction of some \(\pi\in\Pi\), we say that \(\rho\) is a _\(\Pi\)-restriction_. To avoid ambiguity, we denote the domain of \(\rho\) by \(\mathcal{B}_{\rho}\). Inheriting the properties of a proper permutation, \(\rho\) is injective and

\[\big{|}\{\ell\in\mathcal{B}_{\rho}:\ell\leq j,\ \rho(\ell)\geq j-1\}\big{|}\leq r.\]

In the following we bound \(|\Pi|\) by first estimating \(|\{\rho:\ \rho\text{ is a }\Pi\text{-restriction}\}|\), and then bounding \(|\{\pi\in\Pi:\ \rho\text{ is the proper restriction of }\pi\}|\) for every fixed \(\rho\).

**Claim 1:** There are at most \(2^{rk}\) distinct \(\Pi\)-restrictions.

_Proof of Claim 1:_ We show that every \(\Pi\)-restriction has a distinct representation as a collection of \(r\) vectors \(V_{1},\ldots,V_{r}\in\{\texttt{0},\texttt{1}\}^{k}\), which implies that there are at most \(2^{rk}\) \(\Pi\)-restrictions. Let \(\rho\) be a \(\Pi\)-restriction with domain \(\mathcal{B}_{\rho}\). For every \(i\in\mathcal{B}_{\rho}\) we define an integer interval \(I_{i}=[i,\rho(i)+1]\). Consider the _interval graph_ \(G\) given by the intervals \(I_{i}\), i.e. the graph on vertex set \(\mathcal{B}_{\rho}\) where \(\{i,j\}\) is an edge if and only if \(i\neq j\) and \(I_{i}\cap I_{j}\neq\varnothing\). In the following we use terminology common in graph theory; for a formal introduction we refer the reader to Diestel [8]. Next we bound the maximal size of a clique in \(G\). Suppose that vertices \(i_{1},\ldots,i_{m}\) form a clique in \(G\); then the intervals \(I_{i_{1}},\ldots,I_{i_{m}}\) pairwise intersect. Thus there exists an integer \(j\in[k]\) such that \(j\in I_{i_{1}}\cap\cdots\cap I_{i_{m}}\). Now

\[m=\big{|}\{\ell\in\mathcal{B}_{\rho}:\ j\in I_{\ell}\}\big{|}=\big{|}\{\ell\in\mathcal{B}_{\rho}:\ \ell\leq j,\ \rho(\ell)+1\geq j\}\big{|}\leq r,\]

where the last inequality holds since \(\rho\) is a proper restriction. Thus there is no clique of size \(r+1\) in \(G\). It is common knowledge that interval graphs are perfect, so there exists a proper vertex coloring of \(G\) using at most \(r\) colors. Fix such a coloring \(c\) of \(G\) with set of colors \([r]\). Note that for each color class the corresponding intervals are pairwise disjoint.
For every fixed \(s\in[r]\), let the set of indices with color \(s\) be \(\mathcal{B}_{s}=\{i\in\mathcal{B}_{\rho}:\ c(I_{i})=s\}\). We define a vector \(V_{s}\in\{\texttt{0},\texttt{1}\}^{k}\) as follows. Let

\[V_{s}(i)=\cdots=V_{s}(\rho(i))=\texttt{1}\text{ for any }i\in\mathcal{B}_{s}\quad\text{ and }\quad V_{s}(j)=\texttt{0}\text{ for all other }j\in[k].\]

Since the intervals \(I_{i}\), \(i\in\mathcal{B}_{s}\), are pairwise disjoint, \(V_{s}\) is well-defined. Moreover, we obtain that \(V_{s}(\rho(i)+1)=\texttt{0}\) for every \(i\in\mathcal{B}_{s}\). This implies that \(V_{s}(i-1)=\texttt{0}\), if defined, for \(i\in\mathcal{B}_{s}\). Observe that the vector \(V_{s}\) encodes all indices in \(\mathcal{B}_{s}\) and their respective functional values \(\rho(i)\), \(i\in\mathcal{B}_{s}\): If for some \(j\in[k]\), \(V_{s}(j)=\texttt{1}\) and \(V_{s}(j-1)=\texttt{0}\), then \(j\in\mathcal{B}_{s}\) and \(\rho(j)\) is given by the maximal index \(j^{\prime}\) such that \(V_{s}(j)=\cdots=V_{s}(j^{\prime})=\texttt{1}\). We obtain a vector representation \(V_{1},\ldots,V_{r}\) of \(\rho\). It is easy to see that distinct \(\Pi\)-restrictions have distinct representations. There are at most \((2^{k})^{r}\) distinct such vector representations, which proves the claim.

**Claim 2:** Given a fixed \(\Pi\)-restriction \(\rho\), the number of proper permutations \(\pi\) with proper restriction \(\rho\) is at most \(r^{k}\).

_Proof of Claim 2:_ Let \(\rho\) be a fixed \(\Pi\)-restriction and let \(\mathcal{G}=[k]\setminus\mathcal{B}_{\rho}\). We count the possible assignments of good indices \(i\in\mathcal{G}\) in a proper permutation given that \(\pi(\ell)=\rho(\ell)\) for every \(\ell\in\mathcal{B}_{\rho}\). For this purpose we iterate through all good indices \(i\in\mathcal{G}\) in increasing order while counting the choices for each \(\pi(i)\). Observe that \(1\notin\mathcal{G}\) since \(\pi(1)\geq 1\) for any permutation \(\pi\). Fix an \(i\in\mathcal{G}\), i.e. \(i\geq 2\). Suppose that all indices \(\ell\in\mathcal{G}\cap[i-1]\) are already assigned to an integer \(\pi(\ell)\leq\ell-1\) and all \(\ell\in\mathcal{B}_{\rho}\) are assigned to \(\pi(\ell)=\rho(\ell)\). There are two conditions on the choice of \(\pi(i)\): On the one hand \(i\) is a good index, so we require \(\pi(i)\in[i-1]\). On the other hand \(\pi\) is injective, thus \(\pi(i)\neq\pi(\ell)\) for all \(\ell<i\). Therefore \(\pi(i)\in[i-1]\setminus\{\pi(\ell)\in[i-1]:\ell<i\}\). We evaluate the size of this set using the fact that \(|\{\ell<i:\pi(\ell)\geq i-1\}|\leq r\):

\[\big{|}\{\pi(\ell)\in[i-1]:\ell<i\}\big{|}=\big{|}\{\ell<i:\pi(\ell)\leq i-1\}\big{|}=\big{|}[i-1]\setminus\{\ell<i:\pi(\ell)>i-1\}\big{|}\]

\[\geq\big{|}[i-1]\setminus\{\ell<i:\pi(\ell)\geq i-1\}\big{|}\geq(i-1)-r.\]

Thus also

\[\big{|}[i-1]\setminus\{\pi(\ell)\in[i-1]:\ell<i\}\big{|}\leq(i-1)-(i-1-r)=r.\]

Hence, there are at most \(r\) choices for selecting \(\pi(i)\) for each \(i\in\mathcal{G}\). Note that \(|\mathcal{G}|\leq k\); consequently the number of proper permutations with proper restriction \(\rho\) is at most \(r^{k}\).

Combining both claims, the number of proper permutations is at most \(2^{rk}r^{k}=2^{(r+\log r)k}\).

The bound provided here is not best possible. With a more careful approach the number \(N(k,r)\) of \(r\)-proper permutations \(\pi\colon[k]\to[k]\) can be bounded between

\[r^{k}\leq N(k,r)\leq(2r)^{2k}.\]

Studying this extremal function might be of independent interest.
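To make the definition of \(r\)-proper permutations concrete, here is a small brute-force sketch (an illustration, not part of the paper) that enumerates them for small \(k\) directly from the definition; for such \(k\) one can compare the counts against the bound \(2^{(r+\log r)k}\) of Lemma 9.

```python
from itertools import permutations

def is_r_proper(pi, r):
    """pi is a tuple with pi[l-1] = pi(l), a permutation of 1..k.
    r-proper: for every j in [k], at most r indices l <= j have pi(l) >= j - 1."""
    k = len(pi)
    return all(
        sum(1 for l in range(1, j + 1) if pi[l - 1] >= j - 1) <= r
        for j in range(1, k + 1)
    )

def count_r_proper(k, r):
    """Count r-proper permutations of [k] by exhaustive enumeration."""
    return sum(1 for pi in permutations(range(1, k + 1)) if is_r_proper(pi, r))

if __name__ == "__main__":
    # For fixed r the counts grow far slower than k!,
    # consistent with the 2^((r + log r)k) upper bound.
    for k in range(1, 8):
        print(k, [count_r_proper(k, r) for r in (2, 3, 4)])
```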
## 3 Proof of Theorem 2

Before presenting the main proof we give a lemma which is purely computational and follows the lines of a similar claim by Grosz, Methuku and Tompkins [9].

**Lemma 10**.: _Let \(n\in\mathbb{N}\), and let \(c\in\mathbb{R}\) be a positive constant. Let \(k=\frac{(2+\epsilon)n}{\log n}\) where \(\epsilon=3(\log\log n+\log e+c+2)(\log n)^{-1}\). Then for sufficiently large \(n\),_

\[k!>2^{ck}\cdot 2^{2(n+k)}.\]

Proof.: By Stirling's formula \(k!>\big{(}\frac{k}{e}\big{)}^{k}=2^{k(\log k-\log e)}\). We shall show that \(k(\log k-\log e)>ck+2(n+k)\). Using the fact that \(k=\frac{(2+\epsilon)n}{\log n}\), we obtain

\[k\big{(}\log k-\log e-c-2\big{)}-2n\] \[\geq\frac{(2+\epsilon)n}{\log n}\big{(}\log(2+\epsilon)+\log n-\log\log n-\log e-c-2\big{)}-2n\] \[\geq\epsilon n-\frac{n}{\log n}(2+\epsilon)\big{(}\log\log n+\log e+c+2\big{)}>0,\]

where the last inequality holds for sufficiently large \(n\).

Proof of Theorem 2.: For any \(s\leq t\) note that \(\mathcal{SD}_{s,t}\) is an induced subposet of \(\mathcal{SD}_{t,t}\), so it suffices to show the Ramsey bound for \(s=t\). Let \(k=\frac{(2+\epsilon)n}{\log n}\) where \(\epsilon=3(\log\log n+\log e+c+2)(\log n)^{-1}\) and \(c=2t+2+\log(2t+2)\). Let \(\mathcal{X}\) and \(\mathcal{Y}\) be disjoint sets with \(|\mathcal{X}|=n\), \(|\mathcal{Y}|=k\). Consider an arbitrary blue/red coloring of \(\mathcal{Q}(\mathcal{X}\cup\mathcal{Y})\) with no red copy of \(Q_{n}\). We shall show that there is a blue copy of \(\mathcal{SD}_{t,t}\) in this coloring.

There are \(k!\) linear orderings of \(\mathcal{Y}\). For every linear ordering \(\tau\) of \(\mathcal{Y}\), Lemma 7 provides a blue \(\mathcal{Y}\)-chain \(C^{\tau}\) in \(\mathcal{Q}(\mathcal{X}\cup\mathcal{Y})\) corresponding to \(\tau\), say on vertices \(Z_{0}^{\tau}\subset Z_{1}^{\tau}\subset\ldots\subset Z_{k}^{\tau}\). Consider the smallest vertex \(Z_{0}^{\tau}\) as well as the largest vertex \(Z_{k}^{\tau}\). Both vertices are subsets of \(\mathcal{X}\cup\mathcal{Y}\), so there are \(2^{2(n+k)}\) distinct pairs \((Z_{0}^{\tau},Z_{k}^{\tau})\). By the pigeonhole principle there is a collection \(\tau_{1},\ldots,\tau_{m}\) of \(m=\frac{k!}{2^{2(n+k)}}\) distinct linear orderings of \(\mathcal{Y}\) such that all of the corresponding \(\mathcal{Y}\)-chains \(C^{\tau_{i}}\) have both \(Z_{0}^{\tau_{i}}\) and \(Z_{k}^{\tau_{i}}\) in common. Lemma 10 shows that \(m>2^{ck}\).

Fix an arbitrary \(\sigma\in\{\tau_{1},\ldots,\tau_{m}\}\). By relabelling \(\mathcal{Y}\) we can suppose that \(\sigma=(1,\ldots,k)\), i.e. \(1<_{\sigma}\cdots<_{\sigma}k\). Consider a linear ordering \(\tau_{j}\), \(j\in[m]\), (allowing that \(\tau_{j}=\sigma\)) and let \(\tau_{j}=(y_{1},\ldots,y_{k})\). Then we say that \(\tau_{j}\) is _\(t\)-close_ to \(\sigma\) for some \(t\in\mathbb{N}\) if for every \(i\in[k-t]\), either \([i]\subseteq\{y_{1},\ldots,y_{i+t}\}\) or \(\{y_{1},\ldots,y_{i}\}\subseteq[i+t]\). For example the linear ordering \((4,5,\ldots,k,1,2,3)\) is \(3\)-close to \(\sigma\) since the first \(i\) elements of this linear ordering are contained in \([i+3]\), for any \(i\in[k-3]\). However, our example is not \(2\)-close to \(\sigma\), because neither \(\{1\}\subseteq\{4,5,6\}\) nor \(\{4\}\subseteq[3]\).

In the remaining proof we distinguish two cases. If there is a linear ordering \(\tau_{j}\) which is not \(t\)-close to \(\sigma\), we build a copy of \(\mathcal{SD}_{t,t}\) from the \(\mathcal{Y}\)-chains corresponding to \(\sigma\) and \(\tau_{j}\).
If every linear ordering \(\tau_{1},\ldots,\tau_{m}\) is \(t\)-close to \(\sigma\), we find \(m>2^{ck}\) distinct permutations fulfilling the property of Lemma 9, and thus we arrive at a contradiction.

**Case 1:** There is a linear ordering \(\tau\in\{\tau_{1},\ldots,\tau_{m}\}\) which is not \(t\)-close to \(\sigma\).

Suppose that the \(\mathcal{Y}\)-chains corresponding to \(\sigma\) and \(\tau\) are given by \(Z_{0}^{\sigma},\ldots,Z_{k}^{\sigma}\) and \(Z_{0}^{\tau},\ldots,Z_{k}^{\tau}\), respectively. Recall that \(Z_{0}^{\sigma}=Z_{0}^{\tau}\) and \(Z_{k}^{\sigma}=Z_{k}^{\tau}\). Since \(\tau\) is not \(t\)-close to \(\sigma\) there is an index \(i\in[k-t]\) such that neither \([i]\subseteq\{y_{1},\ldots,y_{i+t}\}\) nor \(\{y_{1},\ldots,y_{i}\}\subseteq[i+t]\). Note that \(i<k-t\) because for \(i=k-t\) we have \([k-t]\subseteq[k]=\mathcal{Y}=\{y_{1},\ldots,y_{k}\}\). The definition of \(\mathcal{Y}\)-chains provides that \(Z_{i}^{\sigma}\cap\mathcal{Y}=[i]\) and \(Z_{i+t}^{\tau}\cap\mathcal{Y}=\{y_{1},\ldots,y_{i+t}\}\), thus \(Z_{i}^{\sigma}\not\subseteq Z_{i+t}^{\tau}\). By transitivity, \(Z_{j}^{\sigma}\not\subseteq Z_{j^{\prime}}^{\tau}\) for any two \(j,j^{\prime}\in\{i,\ldots,i+t\}\). Similarly, \(Z_{i}^{\tau}\not\subseteq Z_{i+t}^{\sigma}\) and so \(Z_{j}^{\tau}\not\subseteq Z_{j^{\prime}}^{\sigma}\). This implies that the set \[\mathcal{P}=\left\{Z_{j}^{\sigma},Z_{j}^{\tau}:\ j\in\{0,k\}\cup\{i,\ldots,i+t \}\right\}\] forms a copy of \(\mathcal{SD}_{t,t}\). Furthermore, every vertex of \(\mathcal{P}\) is included in a blue \(\mathcal{Y}\)-chain and thus colored blue. This completes the proof for Case 1.

**Case 2:** Every linear ordering \(\tau\in\{\tau_{1},\ldots,\tau_{m}\}\) is \(t\)-close to \(\sigma\).

Here we use the fact that every linear ordering \(\tau_{j}\), \(j\in[m]\), is obtained by permuting the linear ordering \(\sigma\). Fix an arbitrary \(\tau\in\{\tau_{1},\ldots,\tau_{m}\}\), and let \(\tau=(y_{1},\ldots,y_{k})\). We say that the permutation _corresponding_ to \(\tau\) is \(\pi\colon[k]\to[k]\) with \(\pi(\ell)=y_{\ell}\). We show that \(\pi\) has the following property.

**Claim:** For every \(j\in[k]\), \(|\{\ell\leq j:\ \pi(\ell)>j+t\}|\leq t\).

The statement is trivially true if \(j+t>k\). Now fix some \(j\in[k-t]\). By \(t\)-closeness of \(\tau\) either \(\{\pi(1),\ldots,\pi(j)\}=\{y_{1},\ldots,y_{j}\}\subseteq[j+t]\) or \([j]\subseteq\{y_{1},\ldots,y_{j+t}\}=\{\pi(1),\ldots,\pi(j+t)\}\). If \(\{\pi(1),\ldots,\pi(j)\}\subseteq[j+t]\), then for every \(\ell\leq j\) we have \(\pi(\ell)\leq j+t\). Therefore \(\{\ell\leq j:\pi(\ell)>j+t\}=\varnothing\) and the statement holds. If \([j]\subseteq\{\pi(1),\ldots,\pi(j+t)\}\), then let \(I=\{\pi(1),\ldots,\pi(j+t)\}\setminus[j]\); note that \(|I|=t\). Observe that for every \(\ell\leq j\) with \(\pi(\ell)>j+t\), we know in particular that \(\pi(\ell)\notin[j]\), thus \(\pi(\ell)\in I\). Since \(\pi\) is bijective, \[\big{|}\{\ell\leq j:\ \pi(\ell)>j+t\}\big{|}=\big{|}\{\pi(\ell):\ \ell\leq j,\ \pi(\ell)>j+t\}\big{|}\leq|I|=t.\] This proves the claim.

In particular, \(\pi\) has the property that \(|\{\ell\leq j:\pi(\ell)\geq j-1\}|\leq 2t+2\) for every \(j\in[k]\), i.e. \(\pi\) is \((2t+2)\)-proper. Note that distinct linear orderings \(\tau_{i}\), \(i\in[m]\), correspond to distinct permutations \(\pi_{i}\colon[k]\to[k]\). Lemma 9 provides that the number of \((2t+2)\)-proper permutations is at most \(2^{(2t+2+\log(2t+2))k}=2^{ck}\), hence \[m\leq 2^{(2t+2+\log(2t+2))k}=2^{ck}.\] Recall that by Lemma 10, \(m>2^{ck}\), so we arrive at a contradiction.
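Both notions driving Case 2 are easy to test computationally. The following Python sketch is our own illustration, with the definitions transcribed from the proof above: it checks \(t\)-closeness to \(\sigma=(1,\ldots,k)\), reproduces the worked example, and verifies on a small instance that every \(t\)-close ordering yields a \((2t+2)\)-proper permutation.

```python
from itertools import permutations

def is_t_close(tau, t):
    # tau = (y_1, ..., y_k): t-close to sigma = (1, ..., k) if for every
    # i in [k - t], [i] is contained in {y_1, ..., y_{i+t}} or
    # {y_1, ..., y_i} is contained in [i + t]
    k = len(tau)
    return all(set(range(1, i + 1)) <= set(tau[:i + t])
               or set(tau[:i]) <= set(range(1, i + t + 1))
               for i in range(1, k - t + 1))

def is_r_proper(pi, r):
    # |{l <= j : pi(l) >= j - 1}| <= r for every j in [k]
    k = len(pi)
    return all(sum(1 for ell in range(1, j + 1) if pi[ell - 1] >= j - 1) <= r
               for j in range(1, k + 1))

k, t = 7, 1
example = tuple(range(4, k + 1)) + (1, 2, 3)           # (4, 5, ..., k, 1, 2, 3)
print(is_t_close(example, 3), is_t_close(example, 2))  # True False

# every t-close ordering, read as the permutation pi(l) = y_l,
# is (2t + 2)-proper, as argued in Case 2
for tau in permutations(range(1, k + 1)):
    if is_t_close(tau, t):
        assert is_r_proper(tau, 2 * t + 2)
```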
## 4 Proofs of Theorems 4 and 5 Proof of Theorem 4.: First we determine \(R(C_{t_{1}},Q_{n})\). The lower bound follows from Theorem 1, so we only need to show that \(R(C_{t_{1}},Q_{n})\leq n+t_{1}-1\). Suppose that in a blue/red colored Boolean lattice \(\mathcal{Q}^{1}:=\mathcal{Q}([n+t_{1}-1])\) there is no blue copy of \(C_{t_{1}}\). Then by Corollary 8 there is a red copy of \(Q_{n}\) in \(\mathcal{Q}^{1}\). Next we consider the poset Ramsey number \(R(\mathcal{C}_{t_{1},t_{2}},Q_{n})\). Let \(N=n+t_{1}+1\). In order to prove that \(R(\mathcal{C}_{t_{1},t_{2}},Q_{n})\leq N\), we consider an arbitrarily blue/red colored Boolean lattice \(\mathcal{Q}^{2}:=\mathcal{Q}([N])\). We show that there is either a red copy of \(Q_{n}\) or a blue copy of \(\mathcal{C}_{t_{1},t_{2}}\) in this coloring. Corollary 8 shows that there is either a red copy of \(Q_{n}\) or a blue chain of length \(t_{1}+2\). If the former happens, the proof is already complete, so suppose that there is a blue chain \(C\) on vertices \(Z_{0}\subset Z_{1}\subset\cdots\subset Z_{t_{1}+1}\). Consider the subposet \(C^{\prime}\) of \(C\) on vertices \(Z_{1},\ldots,Z_{t_{1}}\), i.e. obtained by discarding the minimum and maximum vertex of \(C\). The poset \(C^{\prime}\) is a chain of length \(t_{1}\). Note that \(Z_{1}\neq\varnothing\) since it has a proper subset \(Z_{0}\subset Z_{1}\), thus there is an element \(a\in Z_{1}\). Similarly, \(Z_{t_{1}}\neq[N]\), so we find a \(b\in[N]\setminus Z_{t_{1}}\). We obtain that \(\{a\}\subseteq Z_{1}\subseteq\cdots\subseteq Z_{t_{1}}\subseteq[N]\setminus\{b\}\). We consider the subposet \(\mathcal{Q}^{3}:=\{Z\in\mathcal{Q}^{2}\colon b\in Z,\ a\notin Z\}\), see Figure 3. Note that \(\mathcal{Q}^{3}\) is a Boolean lattice of dimension \(N-2=n+t_{1}-1\) with a blue/red coloring induced by the one of \(\mathcal{Q}^{2}\). The posets \(\mathcal{Q}^{3}\) and \(C^{\prime}\) are parallel, i.e. for every two \(Z\in\mathcal{Q}^{3}\) and \(U\in C^{\prime}\) we have \(Z||U\), because \(a\in U\), \(a\notin Z\) and \(b\in Z\), \(b\notin U\). Applying the first bound shown in this proof we obtain that \(R(C_{t_{2}},Q_{n})=n+t_{2}-1\leq n+t_{1}-1\). Since \(\mathcal{Q}^{3}\) has dimension \(n+t_{1}-1\) there is either a red copy of \(Q_{n}\) or a blue copy \(D\) of \(C_{t_{2}}\) in \(\mathcal{Q}^{3}\). In the first case the proof is complete, and in the second case the vertices of \(C^{\prime}\) and \(D\) form a blue copy of \(\mathcal{C}_{t_{1},t_{2}}\), which also completes the proof. Thus, \(R(\mathcal{C}_{t_{1},t_{2}},Q_{n})\leq N\). It remains to show that \(R(\mathcal{C}_{t_{1},t_{2}},Q_{n})\geq N=n+t_{1}+1\). We verify this lower bound by introducing a layered coloring of \(Q_{N-1}\) which neither contains a blue copy of \(\mathcal{C}_{t_{1},t_{2}}\) nor a red copy of \(Q_{n}\). Consider the Boolean lattice \(\mathcal{Q}^{4}:=\mathcal{Q}([N-1])\) which is colored such that layer \(0\) and layer \(N-1\) (i.e. both one-element layers) are monochromatically blue, \(t_{1}-1\) arbitrary additional layers are blue, and all remaining \(N-(t_{1}+1)=n\) layers are red. Since \(Q_{n}\) has height \(n+1\), there is no monochromatic red copy of \(Q_{n}\) in this coloring. Assume that there is a blue copy \(\mathcal{P}\) of \(\mathcal{C}_{t_{1},t_{2}}\) in \(\mathcal{Q}^{4}\). 
Note that \(\varnothing\notin\mathcal{P}\) and \([N-1]\notin\mathcal{P}\), because each of these two vertices is comparable to all other vertices of \(\mathcal{Q}^{4}\), while in \(\mathcal{P}\) no vertex is comparable to all other vertices of \(\mathcal{P}\). Other than \(\varnothing\) and \([N-1]\), \(\mathcal{Q}^{4}\) has only \(t_{1}-1\) layers containing blue vertices, but \(h(\mathcal{P})=h(\mathcal{C}_{t_{1},t_{2}})=t_{1}\), so there cannot be a blue copy \(\mathcal{P}\) of \(\mathcal{C}_{t_{1},t_{2}}\).

Proof of Theorem 5.: Let \(N=n+t+2\). In order to show the upper bound on \(R(\mathcal{C}_{t,t-1,t^{\prime}},Q_{n})\), fix an arbitrary blue/red coloring of \(\mathcal{Q}^{1}:=\mathcal{Q}([N])\). We shall find either a red copy of \(Q_{n}\) or a blue copy of \(\mathcal{C}_{t,t-1,t^{\prime}}\) in \(\mathcal{Q}^{1}\). By Corollary 8 we find either a red copy of \(Q_{n}\) or a blue chain \(C\) of length \(t+2\). In the first case the proof is complete, so assume the second case. Let \(C\) have vertices \(Z_{0}\subset\cdots\subset Z_{t+1}\). Consider the subposet \(C^{\prime}\) of \(C\) on vertices \(Z_{1},\ldots,Z_{t}\), which is a chain of length \(t\). Note that \(Z_{1}\neq\varnothing\) and \(Z_{t}\neq[N]\), thus we find some \(a,b\in[N]\) such that \(\{a\}\subseteq Z_{1}\subseteq\cdots\subseteq Z_{t}\subseteq[N]\setminus\{b\}\). Now consider the subposet \(\mathcal{Q}^{2}:=\{Z\in\mathcal{Q}^{1}\colon b\in Z,\ a\notin Z\}\), which is a Boolean lattice of dimension \(N-2=n+t\) with a blue/red coloring induced by \(\mathcal{Q}^{1}\). Theorem 4 yields that \(\mathcal{Q}^{2}\) contains either a red copy of \(Q_{n}\) or a blue copy \(\mathcal{P}\) of \(\mathcal{C}_{t-1,t^{\prime}}\). In the first case we found a red copy of \(Q_{n}\) in \(\mathcal{Q}^{1}\). In the second case we know for every two \(Z\in\mathcal{P}\) and \(U\in C^{\prime}\) that \(a\in U\), \(a\notin Z\), \(b\notin U\) and \(b\in Z\), so \(Z||U\). Consequently, the vertices of \(\mathcal{P}\) and \(C^{\prime}\) induce a blue copy of \(\mathcal{C}_{t,t-1,t^{\prime}}\).

It remains to verify the lower bound. For this purpose we shall find a blue/red coloring of \(\mathcal{Q}^{3}:=\mathcal{Q}([N-1])\) which neither contains a red copy of \(Q_{n}\) nor a blue copy of \(\mathcal{C}_{t,t-1,t^{\prime}}\). Note that \(t\geq t^{\prime}+1\geq 2\). Color all vertices in the four layers \(\ell\in\{0,1,N-2,N-1\}\) monochromatically in blue. Color \(t-2\) arbitrary additional layers blue and all remaining \(N-(4+t-2)=n\) layers red. Clearly, this coloring contains no red copy of \(Q_{n}\) since \(h(Q_{n})=n+1\). Assume for a contradiction that there is a blue copy \(\mathcal{P}\) of \(\mathcal{C}_{t,t-1,t^{\prime}}\) in \(\mathcal{Q}^{3}\). In \(\mathcal{P}\) denote the blue chain of length \(t\) by \(C\), see Figure 4. Furthermore there is a chain \(D\) of length \(t-1\) in \(\mathcal{P}\) which is parallel to \(C\). Let \(F\) be a vertex of \(\mathcal{P}\) which is neither in \(C\) nor in \(D\), i.e. \(F\) is incomparable to every vertex in \(C\) or \(D\).

Figure 4: Chains \(C\) and \(D\) and vertex \(F\) in a copy of \(\mathcal{C}_{t,t-1,t^{\prime}}\)

It is easy to see that neither \(\varnothing\) nor \([N-1]\) are in \(\mathcal{P}\), because both of these vertices are comparable to every other vertex in \(\mathcal{Q}^{3}\), whereas no vertex of \(\mathcal{P}\) is comparable to all other vertices of \(\mathcal{P}\). Excluding the two vertices \(\varnothing\) and \([N-1]\), there are exactly \(t\) layers containing blue vertices, including layer 1 and layer \(N-2\).
Recall that \(C\) is a blue chain of length \(t\); therefore, the minimum vertex \(Z_{1}\) of \(C\) is in layer 1, while the maximum vertex \(Z_{t}\) of \(C\) is in layer \(N-2\). Thus we find \(a,b\in[N-1]\) such that \(Z_{1}=\{a\}\) and \(Z_{t}=[N-1]\backslash\{b\}\). Let \(\mathcal{Q}^{4}:=\{Z\in\mathcal{Q}^{3}:b\in Z,\ a\notin Z\}\); this poset is a Boolean lattice. Since \(F\) is incomparable to every vertex in \(C\), we obtain that \(a\notin F\) and \(b\in F\). Thus \(F\in\mathcal{Q}^{4}\) and in particular \(F\supseteq\{b\}\) and \(F\subseteq[N-1]\backslash\{a\}\). Similarly, \(D\subseteq\mathcal{Q}^{4}\). The coloring of \(\mathcal{Q}^{3}\) induces a layered coloring of \(\mathcal{Q}^{4}\) where exactly \(t\) of the layers are blue. Two of these blue layers in \(\mathcal{Q}^{4}\) are the one-element layers given by \(\{b\}\) and \([N-1]\setminus\{a\}\). Since the chain \(D\) has height \(h(D)=t-1\), while only \(t-2\) blue layers of \(\mathcal{Q}^{4}\) are distinct from these two one-element layers, either \(\{b\}\in D\) or \([N-1]\backslash\{a\}\in D\). This is a contradiction, because \(F\) is incomparable to every vertex in \(D\) but both \(\{b\}\) and \([N-1]\backslash\{a\}\) are comparable to \(F\).

## 5 Summary of known approaches

### Proof techniques

In order to give insight into research on off-diagonal poset Ramsey numbers, we briefly survey known proof methods for bounding \(R(P,Q_{n})\) when \(P\) is fixed. For some easy-to-analyze posets, e.g. several trivial posets, no advanced tools are required to get an exact bound. An example of this is our proof of Theorem 4 in Section 4, which only relies on the basic observation Corollary 8. Many other posets require more involved proof techniques, and there are three methods which provide **upper bounds** on \(R(P,Q_{n})\).

A _blob approach_ is used in particular for Boolean lattices \(P=Q_{m}\). The idea behind this method is to consider a blue/red colored hosting lattice in which we define many _blobs_, i.e. small sublattices that are pairwise disjoint, arranged in a product structure. If any blob is monochromatically blue, we obtain a blue copy of \(P\). Otherwise, we find a red copy of \(Q_{n}\) by choosing one red vertex in each blob. This proof technique was introduced by Axenovich and Walzer [1]; for a more refined version see Lu and Thompson [13].

Grosz, Methuku and Tompkins [9] introduced another proof technique, the _chain approach_. Here we consider the large number of blue chains obtained by the Chain Lemma, Lemma 7, and use counting arguments to force the existence of a monochromatically blue copy of \(P\). Examples of this method are the proof of Theorem 2 in Section 3 and the proof of Theorem 1 in Winter [19].

A more technical proof method is the _blocker approach_. It tries to strengthen the Chain Lemma in order to get a precise picture of how the blue chains in the coloring are located relative to each other; in fact, they are grouped into structures called _blockers_. This proof technique is described in another part of this series; see Axenovich and the author [3].

A general **lower bound** on \(R(P,Q_{n})\) for every \(P\) is obtained from a trivial layered coloring; see Theorem 1. For bounding \(R(P,Q_{n})\) from below, most constructions slightly refine a layered coloring according to the fixed \(P\), see e.g. the proofs of Theorems 4 and 5 or Grosz, Methuku and Tompkins [9]. Most known proofs for the lower bound strengthen the trivial lower bound just by a constant.
The only non-marginal improvement of the trivial lower bound is given by Axenovich and the author [2], building on the structural insights obtained from the _blocker approach_.

### Known reductions

To the best of the author's knowledge, there are three known tools for transferring Ramsey bounds to other posets. The first of them is the basic observation that if \(P_{1}\) is a subposet of \(P_{2}\), then \(R(P_{1},Q_{n})\leq R(P_{2},Q_{n})\).

Another useful result was established by Walzer [18] considering the poset Ramsey number \(R(\mathcal{P},Q_{n})\) where \(\mathcal{P}\) is a parallel composition of posets. His finding is based on a result of Habib, Nourine, Raynaud, and Thierry [10] analysing the Sperner number \(\alpha(\ell)\), which is the smallest integer \(N\) such that \(\binom{N}{\lfloor N/2\rfloor}\geq\ell\).

**Theorem 11** (Walzer [18]).: _Let \(\ell\geq 2\) and let \(P_{1},P_{2},\ldots,P_{\ell}\), and \(Q\) be arbitrary posets. Let \(\mathcal{P}=P_{1}+P_{2}+\cdots+P_{\ell}\) be the parallel composition of \(P_{1},\ldots,P_{\ell}\). Then_ \[R(\mathcal{P},Q)\leq\max_{j\in[\ell]}\left\{R(P_{j},Q)\right\}+\alpha(\ell)< \max_{j\in[\ell]}\left\{R(P_{j},Q)\right\}+\log(\ell)+\tfrac{1}{2}\log\log( \ell)+2.\]

A third tool was established by the author [19] and is based on a _gluing_ operation: Let \(P_{1}\) and \(P_{2}\) be two disjoint posets such that \(P_{1}\) has a minimum \(Z_{1}\) and \(P_{2}\) has a maximum \(Z_{2}\). The poset \(P_{1}\!\backslash P_{2}\) is the poset obtained by identifying \(Z_{1}\) and \(Z_{2}\).

**Lemma 12** ([19]).: _Let \(P_{1}\) be a poset with a minimum vertex and let \(P_{2}\) be a poset with a maximum vertex. If \(R(P_{1},Q_{n})\leq n+\frac{c_{1}n}{\log n}\) as well as \(R(P_{2},Q_{n})\leq n+\frac{c_{2}n}{\log n}\), then_ \[R(P_{1}\!\backslash P_{2},Q_{n})\leq n+\left(c_{1}+c_{2}+o(1)\right)\tfrac{n} {\log n}.\]

To give an example of utilizing these reductions, Lemma 12 implies a generalization of our Ramsey bound for subdivided diamonds (note that the argument is somewhat technical, so we omit it here). We can show that for every poset \(P\) with width \(w(P)=2\) which contains no copy of \(\mathcal{N}\), \[R(P,Q_{n})=n+O\left(\frac{n}{\log n}\right).\] The result follows from Theorem 1 of [19], Lemma 12, and Theorem 2 of the present article, as well as a characterization by Valdes [17] stating that a poset is \(\mathcal{N}\)-free if and only if it is series-parallel.

### Open problems

In our series of papers we discussed the asymptotic behaviour in the off-diagonal poset Ramsey setting \(R(P,Q_{n})\) for \(P\) fixed and \(n\) large. We close this study by collecting some open problems. For trivial \(P\), i.e. posets containing neither \(\mathcal{V}\) nor \(\mathcal{h}\), Theorem 1 gives \(R(P,Q_{n})=n+\Theta(1)\), i.e. a bound tight up to an additive constant. We improved the bounds on this constant in Corollary 6, but a general exact bound remains to be determined.

For non-trivial \(P\) the picture is less clear. Theorem 1 provides that \(n+\frac{1}{15}\frac{n}{\log n}\leq R(P,Q_{n})\leq c_{P}\cdot n\). Axenovich and the author [2] conjectured that the true value of \(R(P,Q_{n})\) is closer to the lower bound.

**Conjecture 13** ([2]).: _For every fixed poset \(P\), \(R(P,Q_{n})=n+o(n)\)._

The lower bound \(R(P,Q_{n})=n+\Omega(\frac{n}{\log n})\) is known to be asymptotically tight in the two leading additive terms for some non-trivial \(P\), i.e. \(R(P,Q_{n})=n+\Theta(\frac{n}{\log n})\). We say that such a poset \(P\) is _modest_.
Note that \(\mathcal{V}\) and \(\mathcal{h}\) are modest, and every non-trivial poset contains either \(\mathcal{V}\) or \(\mathcal{h}\) as a subposet. The class of modest posets includes, for example, subdivided diamonds (see Theorem 2), complete multipartite posets [19], and the N-shaped poset \(\mathcal{N}\) [3]. Notably, it remains open whether there exists a non-trivial poset which is _not_ modest.

**Conjecture 14**.: _There is a fixed poset \(P\) with \(R(P,Q_{n})=n+\omega\left(\frac{n}{\log n}\right).\)_

Known modest posets differ in various poset parameters; for example, \(\mathcal{SD}_{t,t}\) has large height and \(K_{1,t}\) has large width. However, every known modest poset has order dimension 2. The _order dimension_ of \(P\) is the minimal number of linear orderings such that \(P\) is the intersection of these linear orderings. Natural candidates for proving Conjecture 14 are either \(Q_{3}\) or the standard example \(S_{3}\), the 6-element poset induced by layers 1 and 2 of \(Q_{3}\). Both posets have order dimension 3.

Determining \(R(Q_{m},Q_{n})\) for \(m\in\mathbb{N}\) is one of the most interesting open problems regarding poset Ramsey numbers. While well-understood for \(m\in\{1,2\}\), the asymptotic behaviour of \(R(Q_{m},Q_{n})\) for fixed \(m\geq 3\) is only bounded up to a constant linear factor; see Lu and Thompson [13]. Conjectures 13 and 14 are equivalent to the following.

**Conjecture 15**.: _For every fixed \(m\in\mathbb{N}\), \(R(Q_{m},Q_{n})=n+o(n)\). Furthermore there is a fixed integer \(m\in\mathbb{N}\) with \(R(Q_{m},Q_{n})=n+\omega\left(\frac{n}{\log n}\right).\)_

Note that Conjecture 15 is a strengthening of an open conjecture raised by Lu and Thompson [13].

**Acknowledgments:** Research was partially supported by DFG grant FKZ AX 93/2-1. The author would like to thank Maria Axenovich for many helpful comments, discussions, and her contribution to other parts of this series. The author thanks Torsten Ueckerdt for his comments leading to a cleaner argument for Lemma 9 as well as Felix Clemen for valuable comments on the manuscript.
2301.12479
Magnetic field effect on tunneling through triple barrier in AB bilayer graphene
We investigate electron tunneling in AB bilayer graphene through a triple electrostatic barrier of heights $U_i (i=2,3,4)$ subjected to a perpendicular magnetic field. By way of the transfer matrix method and using the continuity conditions at the different interfaces, the transmission probability is determined. Additional resonances appear for two-band tunneling at normal incidence, and their number is proportional to the value of $U_4$ in the case of $U_2<U_4$. However, when $U_2>U_4$, anti-Klein tunneling increases with $U_2$. The transmission probability exhibits an interesting oscillatory behavior when $U_3>U_2=U_4$ and $U_3 <U_2=U_4$. For fixed energy $E=0.39\gamma_1$, increasing barrier widths increases the number of oscillations and decreases Klein tunneling. The interlayer bias creates a gap for $U_2<U_3<U_4$ and $U_3>U_2=U_4$. In the four-band tunneling case, the transmission decreases in $T^+_+$, $T^-_+$ and $T^-_-$ channels in comparison with the single barrier case. It does, however, increase for $T^+_-$ when compared to the single barrier case. Transmission is suppressed in the gap region when an interlayer bias is introduced. This is reflected in the total conductance $G_{\text{tot}}$ in the region of zero conductance. Our results are relevant for electron confinement in AB bilayer graphene and for the development of graphene-based transistors.
Mouhamadou Hassane Saley, Ahmed Jellal
2023-01-29T16:14:57Z
http://arxiv.org/abs/2301.12479v2
# Magnetic field effect on tunneling through triple barrier in AB bilayer graphene

###### Abstract

We investigate electron tunneling in AB bilayer graphene through a triple electrostatic barrier in the presence of a perpendicular magnetic field. By way of the transfer matrix method and using the continuity conditions at the different interfaces, the transmission probability is determined. Additional resonances appear for two-band tunneling at normal incidence, and their number is proportional to the value of \(U_{4}\) in the case \(U_{2}<U_{4}\). However, when \(U_{2}>U_{4}\), anti-Klein tunneling increases with \(U_{2}\). The transmission probability exhibits an interesting oscillatory behavior when \(U_{3}>U_{2}=U_{4}\) and \(U_{3}<U_{2}=U_{4}\). For fixed energy \(E=0.39\gamma_{1}\), increasing the barrier widths increases the number of oscillations and decreases Klein tunneling. An interlayer bias creates a gap for \(U_{2}<U_{3}<U_{4}\) and \(U_{3}>U_{2}=U_{4}\). In the four-band tunneling case, the transmission decreases in the \(T_{+}^{+}\), \(T_{+}^{-}\) and \(T_{-}^{-}\) channels in comparison with the single barrier case. It does, however, increase for \(T_{-}^{+}\) when compared to the single barrier case. Transmission is suppressed in the gap region when an interlayer bias is introduced. Our results are relevant for electron confinement in AB bilayer graphene and for the development of graphene-based transistors.

**Keywords:** bilayer graphene, AB-stacking, triple barrier, magnetic field, energy spectrum, transmission channels, Klein tunneling

PACS: 72.80.Vp, 73.21.Ac, 73.23.Ad

## I Introduction

Since its discovery in 2004, graphene [1] has been the focus of multiple theoretical and experimental investigations. This is due to its remarkable features on the one hand, and its broad range of potential applications in both fundamental and technological sciences on the other. In fact, unlike charge carriers in ordinary materials, those in graphene behave like relativistic massless particles and possess a linear energy dispersion at the Dirac points [2; 3; 4; 5]. As a result, graphene exhibits unusual electronic properties such as an unconventional quantum Hall effect [5; 6; 7], the Klein paradox [2; 8; 9], and a minimum conductivity [6; 10]. Graphene is a promising material for applications such as carbon-based transistors [11; 12; 13; 14], optoelectronic devices [15; 16; 17], and strain sensors [18; 19; 20; 21] due to its high electronic mobility [6; 22; 23], high thermal conductivity [24; 25], and optical and mechanical qualities [26; 27]. However, since the graphene conduction and valence bands touch each other, an energy gap must be opened to confine electrons in graphene and enable transistors to switch off [28; 29]. One of the techniques to open and control a gap is the application of an external electric field on AB bilayer graphene [30; 31; 32].

AB bilayer graphene results from the stacking of two layers of graphene in accordance with Bernal's method [33]. Similar to monolayer graphene, AB bilayer graphene has several intriguing characteristics. It does, in fact, display a peculiar quantum Hall effect [34; 35] that is distinct from the Hall effect seen in monolayer graphene. In contrast to monolayer graphene, electron tunneling in bilayer graphene is characterized by anti-Klein tunneling [2; 36; 37], i.e., a perfect reflection. Its energy spectrum shows four parabolic bands [30; 38].
Two of them touch each other at zero energy, and two others are separated by an energy equal to the interlayer coupling \(\gamma_{1}=0.4\) eV [30; 39]. The ability to open and control a gap makes AB bilayer graphene a major asset for nanoelectronic applications [39; 40].

Many studies on the transport properties of bilayer graphene for two-band (\(E<\gamma_{1}\)) and four-band (\(E>\gamma_{1}\)) tunneling have been published recently [41; 42; 43; 44; 45; 46]. Motivated by ref. [43], we intend to study the tunneling of electrons in bilayer graphene through a triple potential barrier system in the presence of a perpendicular magnetic field. First, we assessed the case where the energy is less than \(\gamma_{1}\). It was discovered that in the triple barrier case and for \(U_{2}<U_{4}\), a resonance appears at normal incidence for energies less than \(U_{4}\), which is not the case for the single barrier [43]. The number of resonances is found to increase as the value of \(U_{4}\) increases. Transmission is zero in the same region when \(U_{2}>U_{4}\), and anti-Klein tunneling increases with \(U_{2}\) but does not change when \(U_{3}\) changes. An interesting number of oscillations appears in the triple barrier cases in the energy region greater than \(U_{4}\), particularly when \(U_{3}>U_{2}=U_{4}\) and when \(U_{3}<U_{2}=U_{4}\). These oscillation numbers are higher than those obtained in refs. [42; 43; 44; 45; 46]. When we fix the value of energy at \(E=0.39\gamma_{1}\) and increase the barrier widths, Klein tunneling decreases and the number of oscillations increases further, even at non-normal incidence. When \(U_{2}<U_{3}<U_{4}\) and \(U_{3}>U_{2}=U_{4}\) are used, an interlayer bias opens a gap, whereas it has no effect when \(U_{2}>U_{4}\), \(U_{3}<U_{2}<U_{4}\), and \(U_{3}<U_{2}=U_{4}\) are used. Subsequently, we have considered the case of energy greater than \(\gamma_{1}\). The transmission decreased in the channels \(T_{+}^{+}\), \(T_{+}^{-}\) and \(T_{-}^{-}\) compared to the result in a single barrier [43], while in the \(T_{-}^{+}\) channel it increased. Transmission is zero in the gap region for a non-zero interlayer bias (\(\delta_{3}=0.3\gamma_{1}\)), contrary to the findings in refs. [47; 48; 46].

This work is organized as follows. In Sec. II, we present the theoretical model and determine the eigenvectors and eigenvalues of the system. Sec. III deals with the determination of the transmission probability using the continuity conditions and the transfer matrix method. In Sec. IV, we numerically present and discuss the obtained results. Finally, we provide a summary of the results in Sec. V.

## II Theory and methods

We consider a system of triple electrostatic barriers through which electrons in bilayer graphene can tunnel. As depicted in Fig. 1, our system is made up of five regions, each denoted by \(j\). In addition to the electric field, the central region (\(j=3\)) is subjected to a magnetic field. The following Hamiltonian can describe our system [4; 42]: \[H=\begin{pmatrix}V^{+}&v_{F}\pi^{\dagger}&-v_{4}\pi^{\dagger}&v_{3}\pi\\ v_{F}\pi&V^{+}&\gamma_{1}&-v_{4}\pi^{\dagger}\\ -v_{4}\pi&\gamma_{1}&V^{-}&v_{F}\pi^{\dagger}\\ v_{3}\pi^{\dagger}&-v_{4}\pi&v_{F}\pi&V^{-}\end{pmatrix} \tag{1}\] where \(\pi=p_{x}+i(p_{y}+eA_{y}(x))\) is the in-plane momentum and \(v_{F}=10^{6}\) m/s is the Fermi velocity.
The potentials \(V^{+}\) and \(V^{-}\) on the first and second layers, respectively, are defined by \[V_{j}^{\pm}=\begin{cases}0,&j=1,5\\ U_{j}+\xi\delta_{j},&j=2,3,4\end{cases} \tag{2}\] such that \(\xi=+1\) and \(\xi=-1\) correspond to the first and second layers. The barrier strength is represented by the parameter \(U_{j}\), and the interlayer bias is represented by the parameter \(\delta_{j}\). The magnetic field is applied perpendicularly to the graphene layers, and it can be written in terms of the Heaviside step function \(\Theta\) as \[B(x,y)=B\Theta\left[(x-b)\left(c-x\right)\right] \tag{3}\] where \(B\) is a constant. The component of the vector potential \(A_{y}(x)\) in the Landau gauge is defined as \[A_{y}(x)=\frac{\hbar}{el_{B}^{2}}\begin{cases}b,&\text{if}\quad x<b\\ x,&\text{if}\quad b<x<c\\ c,&\text{if}\quad x>c\end{cases} \tag{4}\] where \(l_{B}=\sqrt{\hbar/eB}\) is the magnetic length.

Figure 1: Schematic representation of five regions, including the triple barrier and magnetic field.

The parameters \(v_{3}\) and \(v_{4}\) have been shown to have no effect on the band structure at high energy or on the transmission at low energy [30; 42; 49]. Hence, we neglect them and write the Hamiltonian (1) as \[H_{j}=\begin{pmatrix}U_{j}+\delta_{j}&v_{F}\pi^{\dagger}&0&0\\ v_{F}\pi&U_{j}+\delta_{j}&\gamma_{1}&0\\ 0&\gamma_{1}&U_{j}-\delta_{j}&v_{F}\pi^{\dagger}\\ 0&0&v_{F}\pi&U_{j}-\delta_{j}\end{pmatrix} \tag{5}\] in the basis of the four-component spinor \(\psi^{j}(x,y)=\left[\psi^{j}_{A_{1}},\psi^{j}_{B_{1}},\psi^{j}_{A_{2}},\psi^{j }_{B_{2}}\right]^{\dagger}\), where the symbol \(\dagger\) denotes the transpose of the row vector. As a consequence of the conservation of the momentum \(k_{y}\) along the \(y\)-direction, the spinor can be written as \[\psi^{j}(x,y)=e^{ik_{y}y}\left[\phi^{j}_{A_{1}},\phi^{j}_{B_{1}},\phi^{j}_{A_{ 2}},\phi^{j}_{B_{2}}\right]^{\dagger} \tag{6}\] In order to determine the eigenvalues and the eigenvectors in each region, we use \(H_{j}\psi^{j}=E_{j}\psi^{j}\) to obtain four coupled differential equations \[-i\sqrt{2}\vartheta_{0}a\phi^{j}_{B_{1}} =(\varepsilon_{j}-\delta_{j})\phi^{j}_{A_{1}} \tag{7a}\] \[i\sqrt{2}\vartheta_{0}a^{\dagger}\phi^{j}_{A_{1}} =(\varepsilon_{j}-\delta_{j})\phi^{j}_{B_{1}}-\gamma_{1}\phi^{j} _{A_{2}} \tag{7b}\] \[-i\sqrt{2}\vartheta_{0}a\phi^{j}_{B_{2}} =(\varepsilon_{j}+\delta_{j})\phi^{j}_{A_{2}}-\gamma_{1}\phi^{j} _{B_{1}} \tag{7c}\] \[i\sqrt{2}\vartheta_{0}a^{\dagger}\phi^{j}_{A_{2}} =(\varepsilon_{j}+\delta_{j})\phi^{j}_{B_{2}} \tag{7d}\] where we have set \(\varepsilon_{j}=E_{j}-U_{j}\), \(\vartheta_{0}=\frac{\hbar v_{F}}{l_{B}}\) is the energy scale, and \(a\) and \(a^{\dagger}\) are respectively the annihilation and creation operators \[a=\frac{l_{B}}{\sqrt{2}}\left(\partial_{x}+k_{y}+\frac{eA_{y}(x) }{\hbar}\right) \tag{8a}\] \[a^{\dagger}=\frac{l_{B}}{\sqrt{2}}\left(-\partial_{x}+k_{y}+\frac {eA_{y}(x)}{\hbar}\right) \tag{8b}\] which satisfy the commutation relation \(\left[a,a^{\dagger}\right]=\mathbb{I}\).
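As a quick numerical consistency check of Eqs. (3) and (4) (our own illustration, not code from the paper), the following Python sketch evaluates the Landau-gauge vector potential piecewise and recovers the magnetic field from \(B=\partial_{x}A_{y}(x)\) by finite differences, confirming a uniform field inside \(b<x<c\) and a vanishing field outside; units with \(\hbar/(el_{B}^{2})=1\) are assumed.

```python
import numpy as np

def A_y(x, b, c):
    # piecewise vector potential of Eq. (4), in units hbar/(e l_B^2) = 1
    return np.where(x < b, b, np.where(x > c, c, x))

b, c = 1.0, 3.0
x = np.linspace(0.0, 4.0, 4001)
B = np.gradient(A_y(x, b, c), x)   # B = dA_y/dx

inside = (x > b + 0.01) & (x < c - 0.01)
outside = (x < b - 0.01) | (x > c + 0.01)
print(np.allclose(B[inside], 1.0), np.allclose(B[outside], 0.0))  # True True
```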
We eliminate the unknowns from (7a-7d) step by step and obtain for \(\phi_{B_{1}}\) \[\left[2\vartheta_{0}^{2}aa^{\dagger}-(\varepsilon_{j}+\delta_{j})^{2}\right] \left[2\vartheta_{0}^{2}a^{\dagger}a-(\varepsilon_{j}-\delta_{j})^{2}\right] \phi^{j}_{B_{1}}=\gamma_{1}^{2}(\varepsilon_{j}^{2}-\delta_{j}^{2})\phi^{j}_{B _{1}} \tag{9}\]

### Eigenvalues and eigenvectors in region 3

In region 3, where the vector potential is \(A_{y}(x)=\frac{\hbar x}{el_{B}^{2}}\), we introduce the variable \(z=\sqrt{2}(\frac{x}{l_{B}}+k_{y}l_{B})\) and solve (9) to obtain \(\phi_{B_{1}}\). Then, we substitute the result into (7a-7d) to determine the rest of the spinor components. For more simplicity, we write the solution in matrix form \[\psi^{3}(x,y)=G_{3}M_{3}(x)C_{3}e^{ik_{y}y} \tag{10}\] where \(G_{3}=\mathbb{I}_{4}\) is the \(4\times 4\) identity matrix, \(C_{3}=\left(c_{+},c_{-},d_{+},d_{-}\right)^{\dagger}\) is a constant vector, and the matrix \(M_{3}(x)\) is given by \[M_{3}(x)=\begin{pmatrix}\eta_{-}\lambda_{+}\chi^{+}_{+,-1}&\eta_{-}^{*}\lambda _{+}\chi^{+}_{-,-1}&\eta_{-}\lambda_{-}\chi^{-}_{+,-1}&\eta_{-}^{*}\lambda_{-} \chi^{-}_{-,-1}\\ \chi^{+}_{+,0}&\chi^{+}_{-,0}&\chi^{-}_{+,0}&\chi^{-}_{-,0}\\ \zeta^{+}\chi^{+}_{+,0}&\zeta^{+}\chi^{+}_{-,0}&\zeta^{-}\chi^{-}_{+,0}&\zeta^ {-}\chi^{-}_{-,0}\\ \eta_{+}^{*}\zeta^{+}\chi^{+}_{+,1}&\eta_{+}\zeta^{+}\chi^{+}_{-,1}&\eta_{+}^{* }\zeta^{-}\chi^{-}_{+,1}&\eta_{+}\zeta^{-}\chi^{-}_{-,1}\end{pmatrix} \tag{11}\] where \(\chi^{\tau}_{\pm,l}=D\left[\lambda_{\tau}\pm l,\pm z\right]\) are parabolic cylinder functions and \(l=-1,0,1\). We have set the quantities \[\lambda_{\tau} =-\frac{1}{2}+\frac{\varepsilon_{3}^{2}+\delta_{3}^{2}}{2\vartheta _{0}^{2}}+\tau\frac{\sqrt{(\vartheta_{0}^{2}-2\varepsilon_{3}\delta_{3})^{2}+ \gamma_{1}^{2}(\varepsilon_{3}^{2}-\delta_{3}^{2})}}{2\vartheta_{0}^{2}} \tag{12}\] \[\eta_{\pm} =\frac{-i\sqrt{2}\vartheta_{0}}{\varepsilon_{3}\pm\delta_{3}} \tag{13}\] \[\zeta^{\pm} =\frac{-2\vartheta_{0}^{2}\lambda_{\pm}+(\varepsilon_{3}-\delta_ {3})^{2}}{\gamma_{1}(\varepsilon_{3}-\delta_{3})} \tag{14}\]

We solve (12) to obtain the energy spectrum in this region. It is given by \[\varepsilon_{\pm,3}^{\tau}=\pm\frac{1}{\sqrt{6}}\left[\mu^{\frac{1}{3}}+\nu \mu^{\frac{-1}{3}}+2A\right]^{\frac{1}{2}}+\tau\frac{1}{\sqrt{6}}\left[-6B \sqrt{6}\left(\mu^{\frac{1}{3}}+\nu\mu^{\frac{-1}{3}}+2A\right)^{\frac{-1}{2} }-\left(\mu^{\frac{1}{3}}+\nu\mu^{\frac{-1}{3}}-4A\right)\right]^{\frac{1}{2}} \tag{15}\] where we have defined the parameters \[\mu =-A^{3}+27B^{2}+9AC+\sqrt{\left(-A^{3}+27B^{2}+9AC\right)^{2}-\nu ^{3}} \tag{16}\] \[\nu =(A^{2}+3C) \tag{17}\] \[A =\delta_{3}^{2}+(2n+1)\vartheta_{0}^{2}+\frac{\gamma_{1}^{2}}{2} \tag{18}\] \[B =\vartheta_{0}^{2}\delta_{3} \tag{19}\] \[C =\left((2n+1)\vartheta_{0}^{2}-\delta_{3}^{2}\right)^{2}-\vartheta _{0}^{4}+\gamma_{1}^{2}\delta_{3}^{2} \tag{20}\] and \(n=\lambda_{\tau}\) is an integer number.
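A short computation shows that squaring the relation (12) with \(\lambda_{\tau}=n\) turns the spectrum condition into the quartic \(\varepsilon^{4}-2A\varepsilon^{2}+4B\varepsilon+C=0\), with \(A\), \(B\), \(C\) exactly as in (18)-(20); its real roots reproduce the branches \(\varepsilon_{\pm,3}^{\tau}\) of (15). The following Python sketch is our own numerical illustration, not code from the paper, and the parameter values are arbitrary assumptions (energies in units of \(\gamma_{1}\)).

```python
import numpy as np

gamma1 = 1.0               # energies measured in units of gamma_1
theta0 = 0.35 * gamma1     # magnetic energy scale hbar*v_F/l_B (assumed value)
delta3 = 0.10 * gamma1     # interlayer bias in region 3 (assumed value)

def landau_levels(n, theta0, delta, gamma1):
    """Real roots of eps^4 - 2A eps^2 + 4B eps + C = 0, obtained by squaring
    Eq. (12) with lambda_tau = n; A, B, C as in Eqs. (18)-(20)."""
    A = delta**2 + (2*n + 1) * theta0**2 + gamma1**2 / 2
    B = theta0**2 * delta
    C = ((2*n + 1) * theta0**2 - delta**2)**2 - theta0**4 + gamma1**2 * delta**2
    eps = np.roots([1.0, 0.0, -2*A, 4*B, C])
    return np.sort(eps.real[np.abs(eps.imag) < 1e-8])

for n in range(4):
    print(n, landau_levels(n, theta0, delta3, gamma1))
```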
### Eigenvalues and eigenvectors in regions 1, 2, 4, 5

In regions \(j=1,2,4,5\), the vector potential is constant and set to be \(A_{y}(x)=\frac{\hbar}{el_{B}^{2}}d_{j}\) where \[d_{j}=\left\{\begin{array}{ll}b,&\text{if}\quad x<b\\ c,&\text{if}\quad x>c\end{array}\right. \tag{21}\] By solving (9) for \(\phi_{B_{1}}\), and substituting the result into (7a-7d), we obtain a general solution in the matrix form \[\psi^{j}(x,y)=G_{j}M_{j}(x)C_{j}e^{ik_{y}y} \tag{22}\] where \(G_{j}\) and \(M_{j}(x)\) are given by \[G_{j}=\begin{pmatrix}f_{-}^{+}&-f_{+}^{+}&f_{-}^{-}&-f_{+}^{-}\\ 1&1&1&1\\ h^{+}&h^{+}&h^{-}&h^{-}\\ h^{+}g_{+}^{+}&-h^{+}g_{-}^{+}&h^{-}g_{+}^{-}&-h^{-}g_{-}^{-}\end{pmatrix} \tag{23}\] \[M_{j}(x)=\begin{pmatrix}e^{ik_{j}^{+}x}&0&0&0\\ 0&e^{-ik_{j}^{+}x}&0&0\\ 0&0&e^{ik_{j}^{-}x}&0\\ 0&0&0&e^{-ik_{j}^{-}x}\end{pmatrix} \tag{24}\] with the parameters \[f_{\pm}^{\tau}=\hbar v_{F}\frac{k_{j}^{\tau}\pm i\left(k_{y}+\frac{d _{j}}{l_{B}^{2}}\right)}{\varepsilon_{j}-\delta_{j}} \tag{25}\] \[h^{\tau}=\frac{(\varepsilon_{j}-\delta_{j})^{2}-(\hbar v_{F})^{2} \left[\left(k_{j}^{\tau}\right)^{2}+\left(k_{y}+\frac{d_{j}}{l_{B}^{2}}\right) ^{2}\right]}{\left(\varepsilon_{j}-\delta_{j}\right)\gamma_{1}} \tag{26}\] \[g_{\pm}^{\tau}=\hbar v_{F}\frac{k_{j}^{\tau}\pm i\left(k_{y}+ \frac{d_{j}}{l_{B}^{2}}\right)}{\varepsilon_{j}+\delta_{j}} \tag{27}\] The wave vector \(k_{j}^{\tau}\) along the \(x\)-direction is expressed as \[k_{j}^{\tau}=\sqrt{\frac{\varepsilon_{j}^{2}+\delta_{j}^{2}+\tau\sqrt{4 \varepsilon_{j}^{2}\delta_{j}^{2}+\gamma_{1}^{2}\left(\varepsilon_{j}^{2}- \delta_{j}^{2}\right)}}{\left(\hbar v_{F}\right)^{2}}-\left(k_{y}+\frac{d_{j }}{l_{B}^{2}}\right)^{2}} \tag{28}\] giving rise to the eigenenergies \[\varepsilon_{\pm,j}^{\tau}=\pm\sqrt{\delta_{j}^{2}+(\hbar v_{F}k)^{2}+\frac{ \gamma_{1}^{2}}{2}+\tau\sqrt{(\hbar v_{F}k)^{2}\left(4\delta_{j}^{2}+\gamma_{ 1}^{2}\right)+\frac{\gamma_{1}^{4}}{4}}} \tag{29}\] where \(k=\left[\left(k_{j}^{\tau}\right)^{2}+k_{y}^{2}\right]^{\frac{1}{2}}\) is the wave vector. It is worth noting that in the incident and transmission regions \(j=1,5\), solutions are obtained by requiring \(U_{j}=\delta_{j}=0\).

## III Transport properties

We will calculate the transmission probability corresponding to the present system. We do this by imposing continuity conditions at each interface of the triple barrier structure. Thereafter, we can use the transfer matrix method to make a connection between the coefficients of the incident region and those of the transmitted one. These coefficients are given by \[C_{1}^{\tau}=\begin{pmatrix}\delta_{\tau,1}\\ r_{+}^{\tau}\\ \delta_{\tau,-1}\\ r_{-}^{\tau}\end{pmatrix},\quad C_{5}^{\tau}=\begin{pmatrix}t_{+}^{\tau}\\ 0\\ t_{-}^{\tau}\\ 0\end{pmatrix} \tag{30}\] where \(\delta_{\tau,\pm 1}\) is the Kronecker delta symbol. The continuity at the interfaces \(x=a,b,c,d\) gives rise to \[G_{1}M_{1}(a)C_{1} =G_{2}M_{2}(a)C_{2} \tag{31}\] \[G_{2}M_{2}(b)C_{2} =G_{3}M_{3}(b)C_{3} \tag{32}\] \[G_{3}M_{3}(c)C_{3} =G_{4}M_{4}(c)C_{4} \tag{33}\] \[G_{4}M_{4}(d)C_{4} =G_{5}M_{5}(d)C_{5}. \tag{34}\] We can now connect the coefficients \(C_{1}^{\tau}\) to \(C_{5}^{\tau}\) via \[C_{1}^{\tau}=NC_{5}^{\tau} \tag{35}\] where the transfer matrix \(N\) takes the form \[N=\prod_{j=1}^{4}M_{j}^{-1}(x_{j})G_{j}^{-1}G_{j+1}M_{j+1}(x_{j}) \tag{36}\] After manipulation, we can write (35) as \[\begin{pmatrix}t_{+}^{\tau}\\ r_{+}^{\tau}\\ t_{-}^{\tau}\\ r_{-}^{\tau}\end{pmatrix}=\begin{pmatrix}N_{11}&0&N_{13}&0\\ N_{21}&-1&N_{23}&0\\ N_{31}&0&N_{33}&0\\ N_{41}&0&N_{43}&-1\end{pmatrix}^{-1}\begin{pmatrix}\delta_{\tau,1}\\ 0\\ \delta_{\tau,-1}\\ 0\end{pmatrix} \tag{37}\] where \(N_{ij}\) are the matrix elements of \(N\) (36). Then, we can easily derive the transmission coefficients from (37).
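Before stating these coefficients explicitly, we remark that the inversion in (37) is easy to check numerically. The following Python sketch is our own illustration; the matrix \(N\) used here is a random placeholder rather than the physical product (36). It solves the linear system (37) for the four amplitudes and compares the result with the closed forms stated next.

```python
import numpy as np

def amplitudes(N, tau):
    """Given the 4x4 transfer matrix N of Eq. (36) and the incident mode
    tau = +1 or -1, solve Eq. (37) for (t_+, r_+, t_-, r_-)."""
    d1, dm1 = (1.0, 0.0) if tau == +1 else (0.0, 1.0)
    M = np.array([[N[0, 0], 0, N[0, 2], 0],
                  [N[1, 0], -1, N[1, 2], 0],
                  [N[2, 0], 0, N[2, 2], 0],
                  [N[3, 0], 0, N[3, 2], -1]], dtype=complex)
    return np.linalg.solve(M, np.array([d1, 0, dm1, 0], dtype=complex))

# consistency check against Eqs. (38)-(39) for a random (invertible) N
rng = np.random.default_rng(0)
N = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
t_p, r_p, t_m, r_m = amplitudes(N, +1)
det = N[0, 0] * N[2, 2] - N[0, 2] * N[2, 0]   # N_11 N_33 - N_13 N_31
assert np.isclose(t_p, N[2, 2] / det)          # t_+^tau for tau = +1
assert np.isclose(t_m, -N[2, 0] / det)         # t_-^tau for tau = +1
```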
They are given by \[t_{+}^{\tau} = \frac{N_{33}\delta_{\tau,1}-N_{13}\delta_{\tau,-1}}{N_{11}N_{33}-N_ {13}N_{31}} \tag{38}\] \[t_{-}^{\tau} = \frac{N_{11}\delta_{\tau,-1}-N_{31}\delta_{\tau,1}}{N_{11}N_{33}- N_{13}N_{31}}. \tag{39}\] At this stage, we have to introduce the density of current to find all transmission channels, which is \[\mathbf{j}=v_{F}\Psi^{\dagger}\vec{\alpha}\Psi \tag{40}\] where \(\vec{\alpha}\) is a \(4\times 4\) matrix with two Pauli matrices \(\sigma_{x}\) on the diagonal, and the rest is zero. Then, using (40), we can calculate the incident \(\mathbf{j}_{\text{inc}}\) and transmitted \(\mathbf{j}_{\text{tra}}\) current densities. As a result, we obtain the four transmission channels \[T_{\pm}^{\tau}=\frac{\left|\mathbf{j}_{\text{tra}}\right|}{\left|\mathbf{j}_{ \text{inc}}\right|}=\frac{k_{5}^{\pm}}{k_{1}^{\tau}}\left|t_{\pm}^{\tau} \right|^{2} \tag{41}\] ## IV Results and discussions In the following, we will compute and discuss our numerical results. To begin with, we will look at the case of energy less than the interlayer coupling \(\gamma_{1}\) (two-band tunneling). Here, only one mode of propagation is possible, so we have just one transmission channel. Next, we will consider energy greater than the interlayer coupling \(\gamma_{1}\) (four-band tunneling). In this case, two propagation modes are available, which result in four transmission channels. ### Two-band Tunneling Figure 2: (Color online): Transmission as a function of energy \(E\), at normal incidence (\(k_{y}=0\)), for barrier widths \(b_{1}=b_{2}=b_{3}=25\) nm, \(l_{B}=13.5\) nm, and \(\delta_{2}=\delta_{3}=\delta_{4}=0\). (a): \(U_{2}<U_{3}<U_{4}\) (green line) and \(U_{2}>U_{3}>U_{4}\) (red line). (b): As in (a) except that \(U_{4}=0.5\gamma_{1}\) (green line) and \(U_{2}=0.5\gamma_{1}\) (red line). (c): \(U_{3}<U_{2}<U_{4}\) (green line) and \(U_{2}>U_{4}>U_{3}\) (red line). (d): \(U_{3}<U_{2}=U_{4}\) (green line) and \(U_{3}>U_{2}=U_{4}\) (red line). The blue line corresponds to the result obtained in [43] for a single barrier. The transmission probability is shown in Fig. 2 as a function of incident energy \(E\) at normal incidence (\(k_{y}=0\)) for barrier widths of \(b_{1}=b_{2}=b_{3}=25\) nm. The magnetic length is set to \(l_{B}=13.5\) nm, as in the single barrier case discussed in [43], and the interlayer bias is set to \(\delta_{2}=\delta_{3}=\delta_{4}=0\). We plot the transmission for two configurations of the triple barrier system in Fig. 2(a), and compare it to the result found in [43] (blue line). In contrast to the single barrier case (blue line), a transmission resonance occurs for energy lower than \(U_{4}\) when \(U_{2}<U_{3}<U_{4}\) (green line). In the same energy range, however, transmission is zero (anti-Klein tunneling [2; 36]) when \(U_{2}>U_{3}>U_{4}\) (red line). Transmission oscillates faster in the triple barrier cases (green and red lines) than in the single barrier (blue line [43]) and double barrier for zero magnetic field [46] cases for energies greater than \(U_{4}\). Fig. 2(b) depicts the transmission with the same parameters as Fig. 2(a), but with \(U_{4}\) (green line) and \(U_{2}\) (red line) increased. The number of resonances increases as \(U_{4}\) increases for configuration \(U_{2}<U_{3}<U_{4}\) (green line). In the opposite configuration, we see that anti-Klein tunneling increases as \(U_{2}\) increases. In Fig. 2(c), we reduce \(U_{3}\) so that \(U_{3}<U_{2}<U_{4}\) (green line) and \(U_{2}>U_{4}>U_{3}\) (red line) are obtained. 
Transmission occurs for energies less than \(U_{2}\) for \(U_{3}<U_{2}<U_{4}\) (green line), but not for \(U_{3}\) (blue line, see [43]). For \(U_{2}>U_{4}>U_{3}\) (red line), the transmission is nearly identical to that shown in Fig. 2(a). We conclude that when \(U_{2}>U_{4}\), the transmission decreases regardless of whether \(U_{3}\) is large or small. The transmission for \(U_{3}<U_{2}=U_{4}\) (green line) and \(U_{3}>U_{2}=U_{4}\) (red line) is shown in Fig. 2(d). In both cases, the number of transmission oscillations increases significantly when compared to the single barrier's result [43]. However, when \(U_{3}>U_{2}=U_{4}\), it is less than when \(U_{3}<U_{2}=U_{4}\). It is also worth noting that such a large number of oscillations is not observed in [44; 45; 46; 42].

To investigate the interlayer bias effect, we plot the transmission as a function of incident energy \(E\) in Fig. 3, with the same parameters as in Fig. 2 but with \(\delta_{3}=0.1\gamma_{1}\). The presence of interlayer bias in Fig. 3(a,b) opens a gap for a triple barrier system when \(U_{2}<U_{3}<U_{4}\) (green line), as it does for a single barrier system (blue line [43]). In contrast, there is no gap when \(U_{2}>U_{3}>U_{4}\) (red line), and the transmission behaves similarly to the result in Fig. 2 with a minor difference. The interlayer bias has no meaningful effect on transmission in Fig. 3(c) when \(U_{3}<U_{2}<U_{4}\) or when \(U_{2}>U_{4}>U_{3}\); no gap is found. However, in Fig. 3(d), there is a gap when \(U_{3}>U_{2}=U_{4}\), but not when \(U_{3}<U_{2}=U_{4}\). Note that, in the triple barrier case, the transmission does not behave like the results in [43; 44; 45]. For instance, the transmission is not zero in the region preceding the gap in [43; 44; 45], while it is zero in the triple barrier case. Moreover, in the double barrier [46] case, the presence of bound states in the well results in the presence of resonances within the gap. As can be seen, there is no resonance in the induced gap in the triple barrier case. Let us also note that in the case of a triple barrier with zero magnetic field [44], it is necessary to apply an interlayer bias in at least two regions to obtain a gap, whereas in this case \(\delta_{3}\) can open a gap on its own.

In Fig. 4 we show the transmission as a function of barrier width (\(b_{1}=b_{2}=b_{3}\)) and the transverse wave vector \(k_{y}\). The magnetic length is set at \(l_{B}=18.5\) nm, the interlayer bias at \(\delta_{2}=\delta_{3}=\delta_{4}=0\), and the energy at \(E=0.39\gamma_{1}\). When we compare Fig. 4(a) (single barrier case [43]) to Fig. 4(b) (\(U_{2}<U_{3}<U_{4}\)), we see that the number of oscillations increases significantly in the triple barrier case as the barrier widths increase. In Fig. 4(c) additional transmission resonances appear where \(U_{3}<U_{2}<U_{4}\), resulting in more oscillations than in Fig. 4(b). The number of resonances, on the other hand, decreases when \(U_{3}<U_{2}=U_{4}\), as shown in Fig. 4(d). As a result, the oscillations are less frequent than in Fig. 4(b). However, it is important to note that the number of oscillations is greater than that of the single barrier [43] and double-barrier [46] cases. Klein tunneling is found to be reduced in the triple barrier cases when compared to the single barrier [43]. In fact, at \(|k_{y}|=0.3\) nm\({}^{-1}\), anti-Klein tunneling begins at \(b_{1}=b_{2}=b_{3}=3\) nm in the triple barrier cases, and at 8 nm in the single barrier case.
In addition, for barrier widths greater than 20 nm, the transmission narrows more in the triple barrier cases than it does in the single barrier case. As a consequence, Klein tunneling decreases while anti-Klein tunneling increases.

### Four-band Tunneling

We plot the transmission as a function of incident energy \(E\) and the transverse wave vector \(k_{y}\) in Fig. 5 for the single barrier case [43] with \(U_{3}=2.5\gamma_{1}\) (left) and the triple barrier case with \(U_{2}<U_{3}<U_{4}\) (right). The barrier widths are set to \(b_{1}=b_{2}=b_{3}=15\) nm, the magnetic length is set to \(l_{B}=13.5\) nm, and the interlayer bias is set to \(\delta_{2}=\delta_{3}=\delta_{4}=0\). The cloak effect [42; 36] occurs in the \(T_{+}^{+}\) channel and at normal incidence in the energy region \(U_{3}-\gamma_{1}<E<U_{3}\) in the single barrier case, and for a wide range of energy \(U_{3}-2\gamma_{1}<E<U_{3}+0.5\gamma_{1}\) in the triple barrier case. As a result, the transmission resonances decrease for energies less than \(U_{3}\) when compared to the single barrier case. There are, however, more thin resonances than those obtained in [44; 46; 47]. Furthermore, the transmission exhibits symmetrical behavior with respect to normal incidence (\(k_{y}=0\)) in [42; 43; 44; 45; 46], whereas in the triple barrier it is more pronounced in the region \(U_{3}-2\gamma_{1}<E<U_{3}+0.5\gamma_{1}\) for \(k_{y}<0\).

Figure 5: (Color online): Density plot of transmission as a function of incident energy \(E\) and the transverse wave vector \(k_{y}\). (Left): Single barrier case with \(U_{3}=2.5\gamma_{1}\). (Right): Triple barrier case with \(U_{2}=1.5\gamma_{1}\), \(U_{3}=2.5\gamma_{1}\) and \(U_{4}=3\gamma_{1}\). The barrier widths are set at \(b_{1}=b_{2}=b_{3}=15\) nm, \(l_{B}=13.5\) nm and \(\delta_{2}=\delta_{3}=\delta_{4}=0\).

The transmission probability increases in the \(T_{-}^{+}\) channel compared to the single barrier case and approaches unity near \(E=U_{3}\) in the case of a triple barrier (right). In addition, the transmission oscillates, unlike [42; 43; 46]. In the \(T_{+}^{-}\) channel, the transmission probability decreases for the triple barrier case and gets close to zero, as found in [44]. In contrast, in the case of a single barrier [43], it is different from zero. The transmission in the \(T_{-}^{-}\) channel decreases from \(E=U_{3}\) in the single barrier case to \(E=U_{3}-\gamma_{1}\) in the triple barrier case. Therefore, the number of resonances diminishes in the triple barrier case, but the transmission probability still remains higher than in [45; 46]. Resonances appear in the case of a single barrier for energies greater than \(U_{3}+\gamma_{1}\), whereas transmission is zero in the case of a triple barrier.

Fig. 6 shows the transmission channels for the same parameters as in Fig. 5, but with \(\delta_{3}=0.3\gamma_{1}\) and \(\delta_{2}=\delta_{4}=0\). We observe that the presence of the interlayer bias results in a gap region in all transmission channels, as found in the single barrier case [43]. In addition, no transmission is found in the gap region, in contrast to [46; 47; 48]. The cloak effect in the \(T_{+}^{+}\) channel is larger in the triple barrier case than in the single barrier case, as shown in Fig. 5. However, in the \(T_{-}^{+}\) channel, it occurs for the same energy region in both the triple barrier and single barrier cases.
Furthermore, we observe two thin resonances in the \(T_{-}^{+}\) channel of the triple barrier case that are not present in the single barrier case [43]. The transmission probability in the \(T_{+}^{-}\) channel of the triple barrier case becomes non-zero in the region of energies greater than \(U_{4}\), in contrast to the result in Fig. 5, where it is close to zero. However, it still remains lower than in the case of the single barrier. In the \(T_{-}^{-}\) channel, the transmission in both the single barrier and triple barrier cases is similar to the results obtained in Fig. 5, except that the number of resonances decreases in the single barrier case.

## V Conclusion

In this paper, we have investigated the transmission of charge carriers in AB bilayer graphene through a triple potential barrier and in the presence of a perpendicular magnetic field. The eigenvalues and eigenvectors for each region were first determined using the four-band Hamiltonian. Afterwards, we calculated the transmission probability by applying the continuity conditions at the different edges of the system and using the transfer matrix method. The number of resonances increases with the value of \(U_{4}\) (when \(U_{2}<U_{3}<U_{4}\)) in the two-band tunneling case at normal incidence and for \(E<U_{4}\), whereas none are present in the single barrier case. However, when \(U_{2}>U_{3}>U_{4}\), we discovered that anti-Klein tunneling increased as the value of \(U_{2}\) increased. Furthermore, we discovered that anti-Klein tunneling does not depend on the value of \(U_{3}\) when \(U_{2}>U_{4}\), but decreases when \(U_{3}<U_{2}<U_{4}\). For energies \(E>U_{4}\), the transmission probability oscillates rapidly in the triple barrier cases compared to the single-barrier ones. A large number of oscillations were observed when \(U_{3}<U_{2}=U_{4}\) and when \(U_{3}>U_{2}=U_{4}\). We discovered that Klein tunneling decreases as the barrier widths and the number of oscillations increase at energy \(E=0.39\gamma_{1}\). When an interlayer bias is introduced, as in the single barrier case, a gap region is found in the cases \(U_{2}<U_{3}<U_{4}\) and \(U_{3}>U_{2}=U_{4}\). In the four-band tunneling, it was seen that the transmission decreased in the channels \(T_{+}^{+}\), \(T_{+}^{-}\) and \(T_{-}^{-}\) when compared to the case of a single barrier. By contrast, in the \(T_{-}^{+}\) channel, there is an increase in the transmission probability in comparison with the single barrier case. Contrary to [46; 47; 48], we observed total suppression of transmission in the gap region in the presence of an interlayer bias (\(\delta_{3}=0.3\gamma_{1}\)). To sum up, our findings revealed interesting oscillatory features that provide good regulation of charge carrier transmission in AB bilayer graphene. Moreover, our results demonstrated that in the presence of a magnetic field, the triple barrier system is capable of confining electrons in AB bilayer graphene as well as providing an on/off logic state for graphene-based transistors.
2305.17790
Numerical explorations of solvent borne adhesives: A lattice-based approach to morphology formation
The internal structure of adhesive tapes determines the effective mechanical properties. This holds true especially for blended systems, here consisting of acrylate and rubber phases. In this note, we propose a lattice-based model to study numerically the formation of internal morphologies within a four-component mixture (of discrete particles) where the solvent components evaporate. Mimicking numerically the interaction between rubber, acrylate, and two different types of solvents, relevant for the technology of adhesive tapes, we aim to obtain realistic distributions of rubber ball-shaped morphologies -- they play a key role in the overall functionality of those special adhesives. Our model incorporates the evaporation of both solvents and allows for tuning the strength of two essentially different solvent-solute interactions and of the temperature of the system.
Vi Cecilia Erik Kronberg, Stela Andrea Muntean, Nils Hendrik Kröger, Adrian Muntean
2023-05-28T18:33:30Z
http://arxiv.org/abs/2305.17790v1
# Numerical explorations of solvent borne adhesives: A lattice-based approach to morphology formation

###### Abstract

The internal structure of adhesive tapes determines the effective mechanical properties. This holds true especially for blended systems, here consisting of acrylate and rubber phases. In this note, we propose a lattice-based model to study numerically the formation of internal morphologies within a four-component mixture (of discrete particles) where the solvent components evaporate. Mimicking numerically the interaction between rubber, acrylate, and two different types of solvents, relevant for the technology of adhesive tapes, we aim to obtain realistic distributions of rubber ball-shaped morphologies -- they play a key role in the overall functionality of those special adhesives. Our model incorporates the evaporation of both solvents and allows for tuning the strength of two essentially different solvent-solute interactions and of the temperature of the system.

**Keywords:** Phase separation, adhesive tapes, rubber morphologies, lattice-based simulations

## 1 Introduction

An adhesive is usually a thin flexible layer that is applied to the boundary of objects, aiming to join them together via an adhesive bonding process. Adhesive bonding is a complex process, and its efficiency is directly influenced by the internal structure (here referred to as "morphology") of the layer. In this note, we focus our discussion on acrylic pressure sensitive adhesives (PSA), as they are one of the most important and widely used classes of adhesives. They have applications ranging from standard tapes and labels to special protective films (sealing, e.g., against oxidation or solar radiation). We refer the reader to [13] for a recent review of the classification of adhesive tapes and their applications. One of many possibilities of enhancing the properties of PSAs is blending the acrylic base formulation with rubber.

Here, our main interest lies in understanding the phase separation properties of a highly interacting multi-component mixture that is prepared in the production phase of solvent-borne adhesives. Specifically, and as a simplification, we investigate a combination of four species (acrylate, rubber, and two distinct solvents), allowing for the possibility of evaporating the two solvents.1 A typical experimental picture we have in mind for this setting is shown in Figure 1. For this precise case, the two solvents are ethylacetate (solvent 1) and benzine (solvent 2), but the overall picture should be perceived in a generic way.

Footnote 1: In industrial formulations additional ingredients are added like tackifying resins, wetting agents, anti-aging agents, plasticizers and more.

The main question we pose here is twofold: what influence does the temperature during the evaporation process have on the distribution of the rubber phase, and what influence does it have on the dispersion of the two solvents? To address this question, we propose a study based on a stochastic lattice-type model that describes the initiation phase of the mixture, the actual dynamics leading to morphology formation (multi-component diffusion, interaction, and evaporation), and finally, a calibration phase, referred to here as the migration phase. In this last phase, the obtained morphologies stabilize their terminal shape and any remaining solvent is allowed to disperse more evenly throughout the system, with the evaporation process switched off.
Within this frame, we apply the methodology that we developed for a numerical investigation of morphology formation as it occurs in the case of organic solar cells (OSC); see our recent work reported in [1, 2, 3]. The main common feature of both OSC and PSA systems is that the phase separation arises during the interplay of the diffusive transport of a highly-interacting mixture of polymers and solvent, with the solvent evaporating until a certain mass fraction is reached. Obviously, the type of polymers and choices of solvents are very different in PSA compared to OSC, but the essential conceptual difference lies in the fact that, for PSA systems, the temperature (and possibly also temperature gradients) plays a prominent role in the formation of the final film and, moreover, more solvents can be present in the mixture. Our description is stochastic; it holds at a discrete level (the lattice) and handles the time evolution and spatial localization of all the mass fractions in the mixture. On the other hand, as we are tracing only the evolution of mass fractions, no information about the transfer of momentum can be captured at this level. Consequently, neither fluid dynamics effects nor macroscopic mechanical responses of the film can be explored within our framework. To this end, conceptually different approaches need to be taken; we refer the reader to [1] (adhesive materials and friction), [2, 3] (dynamical adhesive contact in visco-elastic materials) and [1] (conservation of historical heritage) for remotely related situations where linear momentum information is handled computationally without aiming to capture insights in the dynamics of the phase separation.

The paper is organized as follows. In Section 2, we describe the lattice model, and in Section 3, we show our simulation results and then discuss their relevance. The conclusion of this study as well as an outlook on further possible questions to be investigated in this setting are the subject of Section 4.

Figure 1: Left panel: Side view of the thin film. Right panel: Top view of the thin film. The ball-type structures are rubber parts within an acrylate matrix. The two solvents, ethylacetate and benzine, will evaporate fully if the process is not stopped, and are indistinguishable in the experimental pictures. We see that multiple rubber disks coalesce to form larger ones, much like in our simulation.

## 2 The lattice-based model

In this section, we give the details of our lattice-based model and describe the algorithm used to perform the simulations. The implementation of the algorithm is done in MATLAB and is publicly available at github.com/vcekron/solventAdhesive.

### The lattice

Consider a rectangular two-dimensional lattice \(\Lambda=\{1,\ldots,L_{1}\}\times\{1,\ldots,L_{2}\}\), where \(L_{1}\geq L_{2}\). An element of \(\Lambda\) is called a _site_. Associated with each site are two _bonds_ -- one horizontal and one vertical -- connecting each site to two neighbouring sites. The sites are populated by a species variable \(\sigma\in\{1,2,3,4\}\); the meaning of each species is explained in Table 1, where the "color" property refers to the colors used in the figures of the results section; see Section 3.

| Species | Component    | Color  |
|---------|--------------|--------|
| 1       | Acrylate     | Blue   |
| 2       | Rubber       | Yellow |
| 3       | Ethylacetate | Red    |
| 4       | Benzine      | Green  |

Table 1: List of the mixture components with their coloring.
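To fix ideas, here is a minimal Python sketch of the lattice initialization; this is our own illustration (the reference implementation is the authors' MATLAB code linked above), and the species fractions used here are arbitrary assumptions rather than values from the paper.

```python
import numpy as np

ACRYLATE, RUBBER, SOLVENT1, SOLVENT2 = 1, 2, 3, 4  # species of Table 1

def init_lattice(L1, L2, fractions, seed=0):
    """Populate the L1 x L2 lattice with species 1..4 drawn i.i.d.
    according to the given fractions (which must sum to 1)."""
    rng = np.random.default_rng(seed)
    return rng.choice([ACRYLATE, RUBBER, SOLVENT1, SOLVENT2],
                      size=(L1, L2), p=fractions)

# illustrative composition: 25% acrylate, 15% rubber, 30% of each solvent
lattice = init_lattice(200, 100, [0.25, 0.15, 0.30, 0.30])
```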
### The interaction matrix The interaction between two neighbouring sites depends on the two species -- captured in the interaction matrix, denoted \(J\), \[J:=\begin{pmatrix}J_{11}&J_{12}&J_{13}&J_{14}\\ J_{21}&J_{22}&J_{23}&J_{24}\\ J_{31}&J_{32}&J_{33}&J_{34}\\ J_{41}&J_{42}&J_{43}&J_{44}\end{pmatrix}, \tag{1}\] which incorporates a few specific features regarding the way the components of the mixture interact with each other. Besides the symmetry of the interaction matrix (compare [1]), we impose the following additional constraints on the entries of \(J\), namely \(J_{11}=J_{33}=J_{44}=0\) (no self interaction for the acrylate and the two solvents), \(J_{22}\ll 0\) (strong self-attraction for the rubber), \(J_{34}>0\) (the two solvents repel each other), \(J_{24}>0\) (rubber slightly repels solvent \(2\)), \(J_{23}>0\) (rubber strongly repels solvent \(1\), which instead is less repelled by acrylate), and \(J_{12}\gg 0\) (rubber and acrylate strongly repel each other). Furthermore, we only consider the case \(J_{13}<J_{23}\) (solvent \(1\) is repelled less by acrylate than by rubber). In this study, we have fixed the interaction matrix to be \[J:=\begin{pmatrix}0&6&0.5&1.5\\ 6&-4&1&0.5\\ 0.5&1&0&0.75\\ 1.5&0.5&0.75&0\end{pmatrix}, \tag{2}\] in accordance with the description above. In order to simulate the dynamics of the lattice system, let \(\mathcal{H}\) represent the Hamiltonian \[\mathcal{H}:=\sum_{\langle i,j\rangle}J_{\sigma_{i},\sigma_{j}}, \tag{3}\] where the sum is carried out over nearest-neighbours, and \(J_{\sigma_{i},\sigma_{j}}\) represents the components of \(J\) in Eq. (2). The energy difference between two configurations is then \[\Delta E:=\mathcal{H}^{\text{proposed}}-\mathcal{H}^{\text{current}}. \tag{4}\] Here, the superscript "current" of \(\mathcal{H}\) refers to the evaluation of the Hamiltonian on the current configuration, while the superscript "proposed" refers to the evaluation of the Hamiltonian for the proposed configuration, in which a particle has been moved to another site. Furthermore, let \(\beta>0\) be the inverse temperature, i.e., smaller \(\beta\) is associated with a higher temperature, and _vice versa_. The precise choice of \(\beta\) is essential in the fifth step of the Metropolis algorithm explained next. \begin{table} \begin{tabular}{c l l} \hline Species & Component & Color \\ \hline 1 & Acrylate & Blue \\ 2 & Rubber & Yellow \\ 3 & Ethylacetate & Red \\ 4 & Benzine & Green \\ \hline \end{tabular} \end{table} Table 1: List of the mixture components with their coloring. ### The Metropolis algorithm The Metropolis algorithm used in the simulations presented in the next section can be explained in five steps: 1. pseudo-randomly select a site on \(\Lambda\), 2. pseudo-randomly select a bond associated with the site (vertical or horizontal), 3. propose to switch the spins occupying the bonded sites, 4. evaluate the energy difference, \(\Delta E\) -- Eq. (4), associated with the switch, 5. accept the proposed move with probability \(1\) if \(\Delta E<0\) and \(\exp(-\beta\Delta E)\) otherwise. It is not difficult to see that a higher temperature, i.e., a smaller \(\beta\), will lead to more proposed moves being accepted in the Metropolis algorithm. It is beyond the scope of this work to enter into the technical details behind the Monte Carlo method and the Metropolis algorithm. Instead, we refer the reader, for instance, to [1, 1] or to the textbook [1].
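To make the five steps concrete, here is a minimal Python sketch of a single Kawasaki-type (particle-conserving) Metropolis update of the kind described above. It is a simplified illustration, not the published MATLAB implementation; the lattice size, the random initial filling, and the use of periodic boundaries in both directions are assumptions made for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Interaction matrix J of Eq. (2); species 0..3 stand for species 1..4 in the text
J = np.array([[0.0,  6.0, 0.5,  1.5],
              [6.0, -4.0, 1.0,  0.5],
              [0.5,  1.0, 0.0,  0.75],
              [1.5,  0.5, 0.75, 0.0]])

L1, L2 = 128, 64            # lattice dimensions (illustrative values)
beta = 0.6                  # inverse temperature
sigma = rng.integers(0, 4, size=(L1, L2))   # random initial configuration

def site_energy(s, i, j):
    """Sum of the bond energies between site (i, j) and its four neighbours
    (periodic boundaries are assumed here for simplicity)."""
    e = 0.0
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        e += J[s[i, j], s[(i + di) % L1, (j + dj) % L2]]
    return e

def metropolis_step(s):
    i, j = rng.integers(L1), rng.integers(L2)          # step 1: random site
    di, dj = ((0, 1), (1, 0))[rng.integers(2)]         # step 2: random bond
    k, l = (i + di) % L1, (j + dj) % L2
    e_old = site_energy(s, i, j) + site_energy(s, k, l)
    s[i, j], s[k, l] = s[k, l], s[i, j]                # step 3: propose the switch
    dE = site_energy(s, i, j) + site_energy(s, k, l) - e_old   # step 4: Eq. (4)
    if dE > 0 and rng.random() >= np.exp(-beta * dE):  # step 5: Metropolis rule
        s[i, j], s[k, l] = s[k, l], s[i, j]            # reject: swap back

for _ in range(100_000):
    metropolis_step(sigma)
```

In the actual simulations, the evaporation of the solvents (removal of solvent particles through the top boundary) and the stage-dependent boundary conditions described in Section 3 act on top of such swap moves; these details are omitted here for brevity.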
## 3 Simulation results This section shows the time evolution of the systems with a focus on two effects: the addition of the so-called disks formation stage and varying the system temperature. For further insight into the dynamics of the system, please see the movies of these simulations publicly available at github.com/vcekron/solventAdhesive. ### The disks formation stage In order to study the effect of varying lengths of disk formation at fixed temperature, consider Fig. 2. Here, the same initial system was used for all three cases (top row). In the second row, the disks formation stage was carried out for zero iterations (first column), \(10^{8}\) iterations (second column; approximately \(3\cdot 10^{3}\) Monte Carlo Steps (MCS)) and \(10^{9}\) iterations (third column; approximately \(3\cdot 10^{4}\) MCS). As was mentioned before, during the disks formation stage, only the rubber-rubber (yellow-yellow) interaction is switched on, and periodic boundary conditions are enforced everywhere without evaporation of the solvents. That is, \[J_{\text{disks}}=\begin{pmatrix}0&0&0&0\\ 0&-4&0&0\\ 0&0&0&0\\ 0&0&0&0\end{pmatrix}. \tag{5}\] Once the disks formation stage is over, the full dynamics, including solvent evaporation from the top row, are switched on and the system is allowed to evolve. This evolution can be seen in consecutive rows after the second one, with snapshots at each \(10\%\) reduction in the total amount of solvent present in the system. Once the amount of solvent had reduced to \(10\%\) of the lattice sites (second to last row), the full dynamics were switched off and the so-called migration stage started. This stage lasted for \(10^{8}\) iterations. During this stage, periodic boundary conditions are once again enforced such that the solvent can readily migrate throughout the domain. The results of this stage are shown in the last row. Focusing our attention on the last row, we can see that the disks formation stage has a drastic effect on the sizes of the yellow domains and their dispersity. In particular, when the disks formation stage lasted for \(10^{9}\) iterations (third column), the final domains were significantly larger compared to the shorter disks formation stage (second column). Comparing the run without the disks formation stage to the middle one reveals less drastic changes in the end stage of the evolution, showing however clear differences in the initial stage, when there is a significant amount of solvent in the system. The disks formation stage was introduced as a way to generate the rubber balls added to the mixture in the experimental setup2; compare Figure 1. With this in mind, and comparing the results obtained herein, we have opted to run the remaining simulations with \(10^{8}\) disks formation iterations as a compromise between omitting this stage and initialising with large yellow regions. Footnote 2: This stage can potentially be developed further by embedding an optimization step to search for optimal rubber distributions. This is, though, outside the scope of the current work. Figure 2: The effect of the disks formation stage at fixed temperature \(\beta=0.6\). Note that the disks formation stage has a drastic effect on the sizes of the yellow domains and their dispersity. Figure 3: The effect of changing the temperature. Note that increasing temperature is related to more domain growth and less condensation of the solvents.
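For orientation, the three-stage protocol described above can be summarized in a short sketch; the restricted matrix \(J_{\text{disks}}\) of Eq. (5) and the stopping criterion are taken from the text, while the bookkeeping function below is only an illustrative assumption.

```python
import numpy as np

# Eq. (5): during the disks formation stage only the rubber-rubber
# attraction is active (species 0..3 stand for species 1..4 in the text)
J_disks = np.zeros((4, 4))
J_disks[1, 1] = -4.0

def solvent_fraction(sigma):
    """Fraction of lattice sites occupied by either solvent."""
    return np.mean((sigma == 2) | (sigma == 3))

# Stage scheduling used in this section (the update routine itself is the
# Metropolis move of Section 2, extended with evaporation where indicated):
#   1) disks formation: 1e8 iterations with J_disks, periodic BCs, no evaporation
#   2) full dynamics:   full J of Eq. (2), run until solvent_fraction(sigma) <= 0.10
#   3) migration:       1e8 iterations with full J, evaporation switched off
```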
### The effect of varying temperature The next effect we have explored is that of varying the temperature in the system, i.e., changing the parameter \(\beta\), which is proportional to the inverse temperature. Since we study a small section of a larger sample, the temperature within our domain will always be held constant. In the experimental setup, however, one often observes a temperature gradient; see [12] for a related setting. Hence, altering the temperature of our system could be viewed as observing different slices along the temperature gradient in the experimental setup. The simulations are shown in Figure 3, where the common initial configuration is shown in the top row. As before, successive rows show the systems at 10% reduction of solvent in the system, and the dynamics are once more switched off at 10% remaining solvent (second to last row). This time, each column represents a different temperature, going from cold (first column; \(\beta=0.9\)) to warm (third column; \(\beta=0.3\)) with an intermediate temperature of \(\beta=0.6\) in the second column. Increasing the temperature does not seem to have a drastic effect on the domain growth, but it is related to less condensation of the solvents. For example, in the first column (coldest system), the green solvent condenses around the rubber structures (yellow), and it stays condensed throughout the evolution of the system. Returning to the interaction matrix Eq. (2), this behaviour is easily understandable, since this solvent is less repelled by the yellow sites compared to the blue ones. The low temperature means that non-energetically favourable movements are significantly less likely to occur, so that the condensate is stable. In the second column, the green solvent still condenses around the yellow phase, but it does not produce "solvent bridges" between isolated yellow regions, as in the coldest case. Instead, the yellow regions previously connected via such solvent bridges now often merge into one larger yellow structure. Finally, for the warmest system in the third column, the green solvent no longer readily condenses around the yellow phase and instead evaporates together with the red solvent. Note that the red solvent evaporates more easily in general, since it is less repelled by the blue phase, and hence is able to migrate to the top boundary, even at lower temperatures. ## 4 Conclusion and outlook We conclude our work with a few thoughts concerning further possible investigations in the same direction, viz. * The model captures the diffusion of the many components of the mixture, their interactions, as well as the evaporation mechanism of the solvents. The obtained morphologies are in the expected physical range. * The interplay between the temperature and the two solvents is very complex. Interestingly, looking at Figure 3, we can see that solvent 1 (the red solvent) can diffuse to the top boundary more easily, hence it can evaporate more rapidly. On the other hand, solvent 2 (the green solvent) tends to stay surrounding the rubber morphologies. If enough energy is available (i.e., at low values of \(\beta\), associated with high temperatures), then this solvent, too, can diffuse to the top boundary and then evaporate. * The disks formation phase and the migration (terminal) phase can potentially be used for the purpose of morphology design. This would naturally involve not only a rather involved optimization step but also more information on the physics and chemistry of the involved components in the mixture.
* As a further study, we think it is worth estimating numerically the coarsening rates of the rubber balls and seeing how they depend on the solvent-solute interaction parameters. Such a study would need to involve a careful quantitative analysis of the obtained morphologies in terms of correlation and structure factor calculations; the procedure set up in [13] can be adapted to be applicable to the scenario presented here. Acknowledgments An initial formulation of the problem was posed as a scientific challenge at MiMM Day® 2021 (Mathematics with Industry Day) by researchers from the company tesa SE (Germany). We thank our collaborators E. N. M. Cirillo (La Sapienza Univ., Rome, IT) and M. Colangeli (Univ. of L'Aquila, IT) for what we have learnt from them over the years concerning lattice-based modeling and simulation.
2303.04304
Phase Structure of Quantum Improved Schwarzschild-(Anti)de Sitter Black Holes
We study the phase structure of quantum improved Schwarzschild-(A)dS black holes in asymptotically safe gravity. Our results confirm some of the well-known properties of quantum black holes. For example, the quantum effect provides a repulsive force in the core region near singularity which stabilizes the thermodynamically unstable small black holes, and also creates a zero temperature state with finite size. We suggest that this could be a candidate for dark matter. We find a new second order phase transition between small and large black holes for quantum improved Schwarzschild-Anti de Sitter black holes. We also discuss the black holes with different spatial topologies and find a notable duality.
Chiang-Mei Chen, Yi Chen, Akihiro Ishibashi, Nobuyoshi Ohta
2023-03-08T00:54:43Z
http://arxiv.org/abs/2303.04304v2
# Phase Structure of Quantum Improved Schwarzschild-(Anti)de Sitter Black Holes ###### Abstract We study the phase structure of quantum improved Schwarzschild-(A)dS black holes in asymptotically safe gravity. The quantum effect provides a repulsive force in the core region near the singularity which stabilizes the thermodynamically unstable small black holes. It also creates a zero temperature state with finite size which could be a candidate for dark matter. Moreover, there is a new second order phase transition between small and large black holes for quantum improved Schwarzschild-Anti de Sitter black holes. We also discuss the black holes with different spatial topologies and find a notable duality. ## I Introduction Black holes are known to have thermodynamic properties including a characteristic temperature and an intrinsic entropy [1]. The thermodynamics of Schwarzschild black holes features an unstable phase with negative heat capacity. The phase structure becomes more fascinating when we take into account a negative cosmological constant. For such Schwarzschild-Anti de Sitter (SAdS) black holes, the temperature has a minimal value which corresponds to the divergent point of the heat capacity, marking the border between unstable small black holes with negative heat capacity and stable large black holes with positive heat capacity. In particular, there is a first order Hawking-Page (HP) transition [2] between large (stable) SAdS black holes and thermal Anti de Sitter (AdS) space at the critical temperature where both the black hole and the thermal AdS have the same free energy. Above this critical temperature, the free energy of a large black hole is lower than the free energy of thermal AdS of equal temperature, causing the decay of thermal AdS into black holes. These thermodynamic aspects of black holes were originally discovered by using quantum field theory on a classical background spacetime, or by applying the euclidean quantum gravity approach initiated in [3]. No quantum effects of gravity are considered in these results. It is thus interesting to revisit these issues in black hole thermodynamics, in particular the phase structure, taking into account recent advances in the quantum theory of gravity. Quantum effects of gravity are, in general, expected to play a significant role in the core region near the black hole singularity. This is indeed the case for asymptotically safe quantum gravity [4; 5; 6]; in the presence of the quantum improvement inspired by asymptotically safe gravity, it turns out that quantum effects provide a "repulsive" force at the core of the black hole which stabilizes the unstable phase of small black holes [7; 8]. The main purpose of this paper is to study the thermodynamic phase structure of quantum improved static black holes in asymptotically safe quantum gravity. As the mass decreases, the temperature of small AdS black holes reduces to zero, instead of rapidly increasing to infinity. Consequently, as we will show, the quantum effects create a local maximum of the temperature in all asymptotically flat and (A)dS black holes. The small black holes become stable, and therefore there exists a finite-size zero-temperature remnant which could be a candidate for dark matter.
This raises an interesting question: _Is there a new phase transition, analogous to the HP transition, between stable quantum improved small black holes and thermal AdS?_ We will address this question and find that for the quantum improved SAdS black holes, when the quantum effects are not significantly large, there is an intermediate unstable phase between the stable small and large black holes. Such a sandwich structure indicates the possibility of a second order phase transition from small to large black holes, or vice versa. The evidence for such a phase transition is the presence of a swallow-tail pattern in the free energy as a function of temperature. The phase structure of a thermodynamical system is encoded in the associated free energy. Unfortunately, due to the lack of a quantum action, we are not able to compute the free energy via the partition function. However, by assuming that the thermodynamic free energy is still valid for quantum improved black holes, we can study the phase structure of quantum improved SAdS black holes. We find that the analogous HP transition can indeed occur only at unphysical negative temperature. Moreover, a new interesting swallow-tail pattern does appear when the strength of the quantum effects is smaller than the critical value, and the associated second order phase transition can happen. For the quantum improved asymptotically flat black holes, the heat capacity is independent of their spatial topologies, and thus they all share similar phase structures. When the cosmological constant is non-vanishing, we find that the heat capacity is invariant under the sign change of the cosmological constant, \(\Lambda\rightarrow-\Lambda\), and of the spatial topology, \(\sigma\rightarrow-\sigma\), but both the mass and temperature flip their signs. This means, for example, that the phase structure of topological Schwarzschild-de Sitter (SdS) "black holes" is similar to that of SAdS black holes. However, for the topological SdS black holes the mass is negative [9]. The solutions for \(\sigma\neq 1\) and \(\Lambda\geq 0\) correspond to cosmological solutions with positive mass parameter. This paper is organized as follows. In the next section, we will briefly describe how to quantum-improve black hole geometries in the asymptotic safety scenario, and discuss the first law of a quantum improved Kerr-(A)dS black hole as an example. Then, in Sec. III, we will analyze quantum effects on the thermodynamic phase structure of asymptotically AdS static black holes with different horizon topologies. We compute the Hawking temperature, heat capacity, and free energy for quantum improved AdS black holes, and for each type of horizon topology, we study in detail whether quantum improved AdS black holes exhibit the HP type phase transition. We find that for the spherical horizon case, a new type of second order phase transition can occur, due to the effects of quantum improvement, whereas for the hyperbolic and the planar horizon cases, there is no phase transition analogous to the HP one. We perform similar analyses of quantum improved black holes with positive cosmological constant in Sec. IV and with vanishing cosmological constant in Sec. V. Section VI is devoted to a summary and discussions. In Appendix A, we briefly summarize how to compute the free energy for SAdS black holes in general relativity.
## II Running Newton coupling in asymptotically safe gravity Let us first summarize some relevant results in the asymptotically safe scenario for the quantum generalization of general relativity with a cosmological constant, in the units \(c=\hbar=1,G_{0}=l_{P}^{2}=1/M_{P}^{2}\). The action is given by \[S=-\frac{1}{16\pi G}\int d^{4}x\sqrt{-g}\left(R-2\Lambda\right). \tag{1}\] The asymptotically safe scenario proposes that the Newton coupling \(G(k)\) and the cosmological "constant" \(\Lambda(k)\) are energy dependent [4]. Assuming the cosmological constant is already at a fixed point which is sufficiently small, the associated renormalization group equations lead to the solution for the Newton coupling [7; 10; 11; 12; 13] \[G(k)=\frac{G_{0}}{1+\omega G_{0}k^{2}}, \tag{2}\] where \(\omega>0\) is the inverse of the fixed point value. How to identify the energy scale \(k\) with a spacetime distance scale is an essential issue for constructing quantum improved solutions [10]. See also related work [14; 15; 16; 17; 18; 19]. The first law of black hole thermodynamics, particularly for Kerr-(A)dS black holes, requires that a consistent identification near the horizon should be a function of the horizon area \(A_{h}\)[20]. The same consequence can be derived with the relation of entropy variation \(\delta S=\delta A_{h}/4G\)[21], in which it is implicitly presumed from the outset that the running coupling \(G\) is a function of the horizon area \(A_{h}\). Therefore, a physically admissible identification for the energy scale is a function of the horizon area at the given radius. According to dimensional counting, a naturally suggested scale identification for Kerr-(A)dS black holes at the horizon is \[k(r_{h})=\xi\sqrt{\frac{1+\Lambda a^{2}/3}{r_{h}^{2}+a^{2}}}, \tag{3}\] with a dimensionless parameter \(\xi\). This identification is the simplest and most natural one without introducing additional dimensional parameters. The resulting running Newton coupling is \[G(r_{h})=\frac{G_{0}(r_{h}^{2}+a^{2})}{r_{h}^{2}+a^{2}+\tilde{\omega}G_{0}(1+ \Lambda a^{2}/3)},\qquad\tilde{\omega}=\xi^{2}\omega. \tag{4}\] The black hole entropy can be computed by integrating the first law of thermodynamics which, up to an integration constant \(S_{0}\), gives \[S=\frac{\pi(r_{h}^{2}+a^{2})}{G_{0}(1+\Lambda a^{2}/3)}+\pi\tilde{\omega}\ln \frac{\pi(r_{h}^{2}+a^{2})}{G_{0}(1+\Lambda a^{2}/3)}+S_{0}. \tag{5}\] The temperature determined by the surface gravity \(\kappa\) is \[T_{\rm H}=\frac{\kappa}{2\pi}=\frac{-\Lambda r_{h}^{4}+\left[1-\Lambda a^{2}/3- \tilde{\omega}G_{0}(1+\Lambda a^{2}/3)\Lambda/3\right]r_{h}^{2}-a^{2}-\tilde{ \omega}G_{0}(1+\Lambda a^{2}/3)}{4\pi r_{h}\left[r_{h}^{2}+a^{2}+\tilde{\omega} G_{0}(1+\Lambda a^{2}/3)\right]}. \tag{6}\] The quantum effects enlarge the size of the extremal (zero temperature) state, which can exist even in static black holes with finite mass. The stable extremal black holes could be a candidate for dark matter. For non-rotating black holes (\(a=0\)), the vanishing condition of the temperature (6) gives the extremal limit with \[r_{\rm zero}^{2}=\frac{1-\tilde{\omega}G_{0}\Lambda/3\pm\sqrt{(1-\tilde{\omega }G_{0}\Lambda/3)^{2}-4\tilde{\omega}G_{0}\Lambda}}{2\Lambda}, \tag{7}\] where \(\pm\) corresponds to dS (\(\Lambda>0\))/AdS (\(\Lambda<0\)) and \(r_{\rm zero}^{2}=\tilde{\omega}G_{0}\) for \(\Lambda=0\).
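For the non-rotating case (\(a=0\)), the temperature (6) and the extremal radius (7) are elementary to evaluate numerically. The following short Python sketch (with illustrative parameter values) checks that the temperature indeed vanishes at \(r_{\rm zero}\):

```python
import numpy as np

G0, omega = 1.0, 1.0        # illustrative values; omega stands for tilde-omega

def T_hawking(rh, Lam):
    """Temperature of Eq. (6) for the non-rotating case a = 0."""
    w = omega * G0
    num = -Lam * rh**4 + (1.0 - w * Lam / 3.0) * rh**2 - w
    return num / (4.0 * np.pi * rh * (rh**2 + w))

def r_zero(Lam):
    """Extremal radius of Eq. (7); the upper sign is for dS, the lower for AdS."""
    w = omega * G0
    if Lam == 0.0:
        return np.sqrt(w)
    b = 1.0 - w * Lam / 3.0
    sign = 1.0 if Lam > 0.0 else -1.0
    return np.sqrt((b + sign * np.sqrt(b**2 - 4.0 * w * Lam)) / (2.0 * Lam))

for Lam in (-0.03, 0.03):   # corresponding to L^2 = 100, AdS and dS
    print(Lam, r_zero(Lam), T_hawking(r_zero(Lam), Lam))   # temperature ~ 0
```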
## III Asymptotically anti-de Sitter black holes In this section we are going to analyze the quantum effects on the thermodynamical phase structure of the asymptotically AdS black holes with different horizon topologies, characterized by the unit sectional curvature \(\sigma\) of the horizon manifold, i.e. sphere (\(\sigma=1\)), flat (\(\sigma=0\)) and hyperbola (\(\sigma=-1\)): \[ds^{2}=-f(r)dt^{2}+\frac{dr^{2}}{f(r)}+r^{2}d\Omega_{\sigma}^{2},\qquad f(r)= \sigma-\frac{2G(r)M}{r}+\frac{r^{2}}{L^{2}}. \tag{8}\] (We also refer to the hyperbola (\(\sigma=-1\)) case as the _topological_ black hole.) The AdS radius-squared is \(L^{2}=-3/\Lambda\). The identification (4) for non-rotating black holes (\(a=0\)) reduces to \[G(r_{h})=\frac{G_{0}r_{h}^{2}}{r_{h}^{2}+\tilde{\omega}G_{0}}. \tag{9}\] Note that for the static black hole metric (8), the horizon area is given merely in terms of the radial coordinate \(r\), different from the Kerr-(A)dS case, and therefore any scale identification of the type \(k=\tilde{\xi}/r^{p}\) with \(p>0\) examined in [13] (with suitable dimension for \(\tilde{\xi}\)) is consistent with the black hole first law discussed in [20]. Then, depending on the power \(p\), the quantum improved geometry near the center \(r=0\) will differ. In particular, for the identification (4) with \(a=0\), for which \(k\sim\xi/r\), the metric function behaves as \(f(r)\sim\sigma-(2M/\tilde{\omega})r+O(r^{2})\) near the center, and therefore the resultant quantum improved geometry admits a weak singularity at the center. However, since the temperature and heat capacity are computed at the horizon and also the euclidean path-integral for the free energy is performed outside the horizon as shown in Appendix A, the behavior of the quantum improved geometry (i.e., whether it is regular or not) near the center does not appear to affect much the evaluation of these thermodynamic quantities. For this reason, we will stick to the scale identification (4), which is the simplest and most natural choice without any additional dimensional parameters, in the rest of this paper, despite the possibility of a more general dependence. The location of the horizon \(f(r_{h})=0\) gives the mass as \[M=\frac{(\sigma+r_{h}^{2}/L^{2})(r_{h}^{2}+\tilde{\omega}G_{0})}{2G_{0}r_{h}}. \tag{10}\] The mass parameter \(M\) is always positive for both \(\sigma=1\) and \(\sigma=0\). However, for \(\sigma=-1\) and the "horizon" radius smaller than the AdS radius, i.e. \(r_{h}<L\), the mass parameter is negative. It is straightforward to compute the Hawking temperature \[T_{\rm H}=\frac{3r_{h}^{4}/L^{2}+(\sigma+\tilde{\omega}G_{0}/L^{2})r_{h}^{2}- \sigma\tilde{\omega}G_{0}}{4\pi r_{h}(r_{h}^{2}+\tilde{\omega}G_{0})}=\frac{ \sigma+3r_{h}^{2}/L^{2}}{4\pi r_{h}}-\frac{\tilde{\omega}G_{0}(\sigma+r_{h}^{2 }/L^{2})}{2\pi r_{h}(r_{h}^{2}+\tilde{\omega}G_{0})}, \tag{11}\] and the heat capacity \[C=\frac{2\pi(r_{h}^{2}+\tilde{\omega}G_{0})^{2}\left[3r_{h}^{4}+(\sigma L^{2} +\tilde{\omega}G_{0})r_{h}^{2}-\sigma L^{2}\tilde{\omega}G_{0}\right]}{G_{0} \left[3r_{h}^{6}-(\sigma L^{2}-8\tilde{\omega}G_{0})r_{h}^{4}+\tilde{\omega}G_ {0}(4\sigma L^{2}+\tilde{\omega}G_{0})r_{h}^{2}+\sigma L^{2}\tilde{\omega}^{2} G_{0}^{2}\right]}. \tag{12}\] The heat capacity, temperature and mass parameter have an interesting duality under the transformation \(L^{2}\rightarrow-L^{2}\) and \(\sigma\rightarrow-\sigma\): \[C\to C,\qquad T\rightarrow-T,\qquad M\rightarrow-M.
\tag{13}\] For studying the black hole thermodynamics, it is convenient to use the dimensionless variable \(\rho\) \[\rho\equiv\frac{r_{\rm h}^{2}}{L^{2}}\geq 0, \tag{14}\] and the dimensionless parameter \(\zeta\) \[\zeta\equiv\frac{\tilde{\omega}G_{0}}{L^{2}}\geq 0. \tag{15}\] The divergent points of the heat capacity play an important role in studying the thermodynamic phase structure. They occur when the denominator of Eq. (12) vanishes. This leads to a cubic equation \[d_{\sigma}(\rho):=3\rho^{3}-(\sigma-8\zeta)\rho^{2}+\zeta(4\sigma+\zeta)\rho+ \sigma\zeta^{2}=0, \tag{16}\] and its discriminant is \[D_{\sigma}:=4\zeta^{2}(\zeta-\sigma)(13\zeta^{3}-303\sigma\zeta^{2}+327\sigma^ {2}\zeta-5\sigma^{3}). \tag{17}\] In the following we will discuss the phase structure for black holes with different spatial topology. ### \(\sigma=1\) For the typical black holes with \(\sigma=1\), it is straightforward to plot the mass (10) with respect to the horizon radius, and the temperature (11) and the heat capacity (12) with respect to the mass parameter. These are given in Figs. 1 (a)-(c) for different choices of \(\tilde{\omega}\). We see from Fig. 1 (a) that the quantum improved black holes have two horizons for a mass parameter greater than a lower bound which corresponds to the extremal zero temperature state \(T_{\rm H}=0\). The radius of the zero temperature state is, from (7), \[r_{\rm zero}^{2}=\frac{\sqrt{L^{4}+14\tilde{\omega}G_{0}L^{2}+\tilde{\omega}^ {2}G_{0}^{2}}-L^{2}-\tilde{\omega}G_{0}}{6}, \tag{18}\] and the associated entropy is \[S_{\rm BH}=\frac{\pi r_{\rm zero}^{2}}{G_{0}}+\pi\tilde{\omega}\ln\frac{\pi r _{\rm zero}^{2}}{G_{0}}+\pi\tilde{\omega}S_{0}=\frac{\pi r_{\rm zero}^{2}}{G_ {0}}\left(1+\frac{\tilde{\omega}G_{0}}{r_{\rm zero}^{2}}\ln\frac{\pi r_{\rm zero }^{2}}{G_{0}}+\frac{\tilde{\omega}G_{0}}{r_{\rm zero}^{2}}S_{0}\right). \tag{19}\] In Fig. 1 (c), one can observe that the existence of \(\tilde{\omega}\) creates a thermodynamically "stable" (not runaway) phase with positive heat capacity in the small \(M\) region. Consequently there are two interesting results: (i) The quantum improved black holes can have a zero temperature state with finite mass which could give a candidate for dark matter. (ii) Instead of the HP transition between large black holes and AdS thermal states, quantum effects may create a new kind of phase transition, either first order or second order. In GR, there is a first order HP phase transition in the SAdS black holes. It is interesting to check how the quantum effects modify the "original" phase transition or even generate new kinds of phase transitions. The number of divergent points of the heat capacity is closely related to the phase structure of thermodynamics. The thermodynamic phase structure is determined by the divergent points of the heat capacity (16), i.e. the roots of the following cubic equation for the dimensionless variable \(\rho\): \[d_{+}(\rho):=3\rho^{3}-(1-8\zeta)\rho^{2}+\zeta(4+\zeta)\rho+\zeta^{2}=0, \tag{20}\] and its discriminant is \[D_{+}:=4\zeta^{2}(\zeta-1)(13\zeta^{3}-303\zeta^{2}+327\zeta-5). \tag{21}\] There are 5 real roots of \(D_{+}=0\), numerically given as \[\zeta_{0}=0,\quad\zeta_{1}=0.01551337272,\quad\zeta_{2}=1,\quad\zeta_{3}=1.118 084244,\quad\zeta_{4}=22.17409469. \tag{22}\] From the sign of the last term of Eq. (20), we see that the product of the 3 roots is negative, so Eq. (20) always has at least one negative root.
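The numerical values in (22) can be cross-checked in a few lines; the snippet below (a quick sanity check, not part of the original analysis) finds the roots of the cubic factor of \(D_{+}\) and the positive roots of \(d_{+}(\rho)\) for a given \(\zeta\):

```python
import numpy as np

# Roots of the cubic factor of D_+ in Eq. (21): 13 z^3 - 303 z^2 + 327 z - 5 = 0
print(np.sort(np.roots([13.0, -303.0, 327.0, -5.0])))
# -> [ 0.01551337  1.11808424 22.17409469], i.e. zeta_1, zeta_3 and zeta_4;
#    zeta_0 = 0 and zeta_2 = 1 come from the prefactor 4 zeta^2 (zeta - 1).

def positive_roots_dplus(zeta):
    """Positive roots of d_+(rho) in Eq. (20) for a given zeta."""
    r = np.roots([3.0, -(1.0 - 8.0 * zeta), zeta * (4.0 + zeta), zeta**2])
    return np.sort(r[np.isreal(r) & (r.real > 0.0)].real)

print(positive_roots_dplus(0.01))   # two positive roots: an unstable window exists
print(positive_roots_dplus(0.10))   # no positive roots: the heat capacity is smooth
```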
For \(\zeta\) in the range \(\zeta_{0}<\zeta<\zeta_{1},\zeta_{2}<\zeta<\zeta_{3}\) and \(\zeta_{4}<\zeta\), the discriminant (21) is positive, and we have 3 real roots of Eq. (20). From the derivative of Eq. (20), \[d_{+}(\rho)^{\prime}=9\rho^{2}-2(1-8\zeta)\rho+\zeta(4+\zeta), \tag{23}\] we see that this is positive for \(\zeta>\frac{1}{8}\) and positive \(\rho\). This means that \(d_{+}(\rho)\) monotonically increases for positive \(\rho\), and together with \(d_{+}(0)>0\), there is no positive root for \(\zeta>\frac{1}{8}\). For \(\zeta_{1}<\zeta\leq\frac{1}{8}\), where the discriminant (21) is negative, there are two complex roots and one negative root. We then find that there are two positive roots for \(\rho\) only when the value of the parameter \(\zeta\) is in the region \(0<\zeta<\zeta_{1}\) (the two roots bound the region where the heat capacity is negative), and a degenerate positive root at \(\zeta=\zeta_{1}\) (the region of negative heat capacity shrinks to a point). When the quantum effects are small (\(\zeta<\zeta_{1}\)), the two positive roots of \(\rho\), corresponding to two real and positive roots of \(r_{h}\), are the locations of the local maximum and minimum of the temperature, in accord with the divergent points of the heat capacity. In between the two divergent points, the thermal black hole system is unstable with negative heat capacity. This case is depicted by blue solid lines, together with other values of \(\zeta\), in Fig. 1. The physical picture of SAdS black holes is the following: In GR, the black hole temperature has a minimum corresponding to the divergent point separating negative (unstable small black hole states) and positive (stable large black hole states) regions of heat capacity. Moreover, there is a first order HP phase transition between a stable large black hole phase and the thermal AdS state. The quantum effects provide a "repulsive" force and generate stable phases in small mass regions. This gives two possible new phase transitions: (i) a first order phase transition (an analogue of the HP transition in classical solutions) between small black holes and the AdS thermal state, (ii) a second order phase transition between stable small and large black holes. However, as the quantum effects become stronger, the two divergent points become closer and degenerate at the critical value \(\zeta=\zeta_{1}\), and beyond it the heat capacity becomes smooth and the corresponding black hole thermal system is always stable (green solid lines in Fig. 1). Figure 1: (a) The horizon radius, (b) Hawking temperature, (c) heat capacity and (d) free energy of quantum improved SAdS black holes with \(G_{0}=1,L^{2}=100\). The red dashed line represents the classical result without quantum improvement. The blue solid line corresponds to \(\tilde{\omega}=1\,(\zeta=0.01)\), in which case there is an unstable phase with negative heat capacity. The green and purple solid lines correspond to \(\tilde{\omega}=2,10\,(\zeta=0.02,0.1)\), for which all phases are stable. In order to study the details of the phase transition, we have to compute the free energy. In GR, as discussed in Appendix A, the difference in the free energies of SAdS black holes and the thermal AdS state can be computed either by the saddle point approximation of the partition function or directly by the thermodynamic relation \(F=E-TS\).
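Anticipating the procedure used in the next paragraph, the free energy curve of Fig. 1 (d) can be sketched parametrically in \(r_{h}\) from \(F=M-T_{\rm H}S\), with \(M\), \(T_{\rm H}\) and \(S\) given by (10), (11) and (5) (here with \(a=0\), \(\sigma=1\) and the integration constant \(S_{0}\) set to zero; the parameter values are illustrative):

```python
import numpy as np

G0, L2, omega = 1.0, 100.0, 1.0     # zeta = 0.01 < zeta_1: swallow-tail regime

def M_T_S(rh):
    """Mass (10), temperature (11) and entropy (5) for sigma = 1, a = 0, S_0 = 0."""
    w = omega * G0
    M = (1.0 + rh**2 / L2) * (rh**2 + w) / (2.0 * G0 * rh)
    T = (3.0 * rh**4 / L2 + (1.0 + w / L2) * rh**2 - w) \
        / (4.0 * np.pi * rh * (rh**2 + w))
    S = np.pi * rh**2 / G0 + np.pi * omega * np.log(np.pi * rh**2 / G0)
    return M, T, S

rh = np.linspace(0.5, 30.0, 4000)
M, T, S = M_T_S(rh)
F = M - T * S        # free energy relative to thermal AdS
# plotting F against T for the branch with T > 0 reproduces the
# swallow-tail pattern of Fig. 1 (d)
```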
Since we do not have a quantum action for asymptotically safe gravity to compute the associated partition function, we simply use the second relation, \(F=E-TS\), to calculate the difference in the free energies of quantum improved SAdS black holes and the thermal AdS state. The result is depicted in Fig. 1 (d) as a function of the temperature. We find a swallow-tail pattern. Such a pattern exists within the range of the dimensionless parameter \(0<\zeta<\zeta_{1}\) in which the temperature can have both a local maximum and a local minimum. A second order phase transition between the small and large black holes can occur at the intersecting point. However, when \(\zeta\) is larger than the critical value \(\zeta_{1}\), both the local maximal and minimal temperatures disappear and all phases are stable. A similar swallow-tail pattern was observed in the asymptotically flat black holes [22] by using the identification proposed in [8]. If the first order transition between the quantum improved small black holes and the thermal AdS state were to occur, it could occur only at unphysical negative temperature, because the free energy is positive when the temperature is smaller than the HP temperature and is still positive at \(T=0\), so the free energy can have only one root (the "original" HP transition) in the positive \(T\) region. Indeed, if such a phase transition occurred at \(T>0\), then the free energy should be negative at the zero temperature corresponding to \(r_{h}=r_{\rm zero}\). However, from (10) and (18), it is obvious that \(F(r_{h}=r_{\rm zero})=M(r_{h}=r_{\rm zero})\) is positive. Thus, no HP-like first order phase transitions (in addition to the original HP transition) could be generated for \(T>0\). Mathematically, the free energy can be negative at both small and large \(r_{h}\). For small \(r_{h}\) the free energy becomes negative only in the negative temperature region, i.e. \(r_{h}<r_{\rm zero}\), which is unphysical. For large \(r_{h}\) the quantum effects "reduce" the critical temperature of the "classical" HP transition, see also Fig. 1 (d). However, it is hard to find the explicit expression for the critical temperature due to the logarithmic term in the entropy. ### \(\sigma=-1\) For topological black holes (\(\sigma=-1\)) in GR without quantum effects (\(\tilde{\omega}=0\)), we depict the mass (10) with respect to the horizon radius and the temperature (11) and heat capacity (12) with respect to the mass parameter in Fig. 2 by red dashed lines. We find that there is only one horizon for positive \(M\), and two horizons for negative \(M\) larger than the lower bound, as can be seen from Fig. 2 (a). The extremal limit can be found by imposing the degenerate condition on \(f(r)\) in (8) with constant \(G(r)=G_{0}\): \[r_{h}=\frac{L}{\sqrt{3}},\qquad M=-\frac{L}{3\sqrt{3}G_{0}}, \tag{24}\] which gives the lower bound of \(M\) for black hole solutions. Here the temperature is zero. Including the quantum effects, the allowed mass parameter ranges from \(M=-\infty\) (\(r_{h}=0\)) to \(M=\infty\) (\(r_{h}=\infty\)). The temperature in Fig. 2 (b) is plotted for the full range of the horizon radius \(r_{h}\) in order to illustrate the duality (13) by comparing with the plots of quantum improved de Sitter black holes. Thus the dotted parts in Fig. 2 (b) and Fig. 2 (c) are the results for the "inner" horizon. In GR, the topological AdS black holes are always stable. For the quantum improved black holes, the present discriminant \(D_{-}\) [obtained by setting \(\sigma=-1\) in Eq.
(17)] is always positive, so there are three real roots of \(\rho\) for the equation \(d_{-}(\rho)=0\). Moreover, the derivative of \(d_{-}(\rho)\) has two roots, and the smaller one is always negative, corresponding to a local maximum. Together with the fact that \(d_{-}(-\infty)<0\), \(d_{-}(0)<0\) and \(d_{-}(\infty)>0\), this means that two of the roots must be negative and one positive. The heat capacity has only one divergent point, which separates the stable and unstable phases. The temperature has only a local minimum corresponding to the divergent point of the heat capacity, as shown in Fig. 2 (b). If the quantum effect is small, the unstable phase is located inside the horizon, but if the quantum effect is large enough, the small quantum improved black holes, with more negative mass parameter, can be unstable. For the topological AdS black holes, the hyperbolic surface with unit radius, defined by \(x^{2}+y^{2}-z^{2}=-1\), has infinite area given by \(\Omega_{-}=2\pi\int_{0}^{\infty}\sinh\nu d\nu\). The total energy and entropy are infinitely large quantities. Thus we should consider their "densities" as (the integration constant term in the entropy vanishes because it is divided by the infinite area) \[\tilde{E}=\frac{4\pi}{\Omega_{-}}E=M=\frac{(-1+r_{h}^{2}/L^{2})(r_{h}^{2}+ \tilde{\omega}G_{0})}{2G_{0}r_{h}},\qquad\tilde{S}=\frac{4\pi}{\Omega_{-}}S= \frac{\pi r_{h}^{2}}{G_{0}}+\pi\tilde{\omega}\ln\frac{r_{h}^{2}}{4G_{0}}, \tag{25}\] and then the free energy density is \[\tilde{F}=\frac{4\pi}{\Omega_{-}}F=\tilde{E}-T\tilde{S}. \tag{26}\] From the free energy density plotted in Fig. 2 (d), we find that there is no analogous HP transition. ### \(\sigma=0\) For the planar black holes (\(\sigma=0\)), we plot the mass, temperature and heat capacity in Fig. 3. They are all positive, and the heat capacity is never divergent. Thus the quantum improved planar AdS black holes do not change their stable nature in GR. The phase structure does not show an obvious change due to the quantum effects. Figure 2: (a) The horizon radius, (b) Hawking temperature, (c) heat capacity and (d) free energy density for quantum improved “topological SAdS black holes” (\(\sigma=-1\)) with \(G_{0}=1,L^{2}=100\). The red dashed line represents the classical result without quantum improvement. The blue, green and purple solid lines correspond to \(\tilde{\omega}=1,2,10\,(\zeta=0.01,0.02,0.1)\). Figure 3: (a) The Hawking temperature and (b) heat capacity for quantum improved “planar SAdS black holes” (\(\sigma=0\)) with \(G_{0}=1,L^{2}=100\). The red dashed line represents the classical result without quantum improvement. The blue, green and purple solid lines correspond to \(\tilde{\omega}=1,2,10\,(\zeta=0.01,0.02,0.1)\). ## IV Asymptotically de Sitter black holes The thermodynamic quantities for de Sitter black holes can be obtained simply from the corresponding results in the Anti-de Sitter cases by replacing \(L^{2}\rightarrow-L^{2}\), namely \[M=\frac{(\sigma-r_{h}^{2}/L^{2})(r_{h}^{2}+\tilde{\omega}G_{0})}{2 G_{0}r_{h}},\qquad T_{\rm H}=\frac{-3r_{h}^{4}/L^{2}+(\sigma-\tilde{\omega}G_{0}/ L^{2})r_{h}^{2}-\sigma\tilde{\omega}G_{0}}{4\pi r_{h}(r_{h}^{2}+\tilde{\omega}G_{0})},\] \[C=\frac{2\pi(r_{h}^{2}+\tilde{\omega}G_{0})^{2}\left[3r_{h}^{4}-( \sigma L^{2}-\tilde{\omega}G_{0})r_{h}^{2}+\sigma L^{2}\tilde{\omega}G_{0} \right]}{G_{0}\left[3r_{h}^{6}+(\sigma L^{2}+8\tilde{\omega}G_{0})r_{h}^{4}- \tilde{\omega}G_{0}(4\sigma L^{2}-\tilde{\omega}G_{0})r_{h}^{2}-\sigma L^{2} \tilde{\omega}^{2}G_{0}^{2}\right]}. \tag{27}\] Let us discuss the thermodynamic properties for the three different \(\sigma\) separately. ### \(\sigma=1\) According to the duality, it is not a surprise that the figures of the thermodynamic quantities in Fig. 4 are a kind of “reflection” of the related quantities in Fig. 2 for topological AdS black holes.
In GR, the mass of the SdS black holes has an upper bound corresponding to the degeneracy of the black hole and cosmological horizons. Only when the black hole horizon radius is not larger than the radius of the cosmological horizon is the Hawking temperature positive and physical. With quantum effects, there is another zero temperature state whose radius is given by (18) with \(L^{2}\) replaced by \(-L^{2}\). Only when the black hole horizon radius is in between this zero temperature radius and the cosmological radius is the Hawking temperature positive, as shown by the solid lines in Fig. 4 (b). If quantum effects are large enough, there are no solutions with positive Hawking temperature. In GR, the de Sitter black holes are unstable with negative heat capacity. With the quantum effects, the small black holes can be stable, see Fig. 4 (c). There are no phase transitions in de Sitter black holes, as can be seen from Fig. 4 (d). Figure 4: (a) The horizon radius, (b) Hawking temperature, (c) heat capacity and (d) free energy for quantum improved SdS black holes with \(G_{0}=1,L^{2}=100\). The red dashed line represents the classical result without quantum improvement. The blue, green and purple solid lines correspond to \(\tilde{\omega}=1,2,10\,(\zeta=0.01,0.02,0.1)\). ### \(\sigma=-1\) and \(\sigma=0\) For the cases \(\sigma=-1\) and \(\sigma=0\) of asymptotically dS, there is no “horizon” for positive values of \(M\). Moreover, the lapse function \[f(r)=\sigma-\frac{2G(r)M}{r}-\frac{r^{2}}{L^{2}}, \tag{28}\] is always negative. This means that \(r\) is actually the time coordinate in these cases, which would give cosmological solutions. Therefore, the black holes can exist if the mass parameter is negative. However, the \(\sigma=-1\) solution is "dual" to the anti-de Sitter black holes with \(\sigma=1\). From Fig. 4, according to the duality (13), we see that the solutions with negative mass have negative Hawking temperature. ## V Asymptotically flat black holes For this case, the heat capacity reduces to \[C=-\frac{2\pi(r_{h}^{2}+\tilde{\omega}G_{0})^{2}(r_{h}^{2}-\tilde{\omega}G_{0} )}{G_{0}(r_{h}^{4}-4\tilde{\omega}G_{0}r_{h}^{2}-\tilde{\omega}^{2}G_{0}^{2})}, \tag{29}\] which does not depend on the sign of \(\sigma\). The divergent point of the heat capacity is then determined by the equation: \[r_{h}^{4}-4\tilde{\omega}G_{0}r_{h}^{2}-\tilde{\omega}^{2}G_{0}^{2}=0. \tag{30}\] The only positive root is \(r_{h}^{2}=(2+\sqrt{5})\tilde{\omega}G_{0}\). The Schwarzschild black holes (\(\sigma=1\)) are always unstable in GR, and Fig. 5 shows that the quantum effects stabilize the small black holes. For \(\sigma=-1\), the black hole solutions may exist for a negative mass parameter.
Quantum improvement, with small quantum-effect parameter \(\omega\), increases one more divergent point and creates a new stable phase in the small mass region (consequently generates a new zero temperature state with finite size). If \(\omega\) increases, the "distance" (in mass coordinate) between two divergent points reduces, and when \(\omega\) is larger than the critical value, both divergent points and unstable phase disappear, see Fig. 1 (b). If \(\omega\) is not too big, the black holes can have a second order phase transition. * For the topological AdS black holes, the mass parameter can be negative (black holes with negative parameter had been discussed in [9; 23]). The system is thermodynamically stable if the quantum effects are small. Otherwise, the small black holes become unstable when the quantum effects are big enough. * For the planar AdS black holes (\(\sigma=0\)), since the black holes is always stable in GR, the "repulsive" effects does not give an obvious change. * For the SdS black holes, there are no divergent point in GR (only unstable phase exists). The quantum improvement again creates a new stable phase in small mass region which then also generates a divergent point. If the quantum improvement is big enough, there are no black hole solution with positive Hawking temperature, see Fig. 4 (b). * There is a duality in black hole thermodynamics. The heat capacity is invariant under the transformation \(L^{2}\rightarrow-L^{2}\) and \(\sigma\rightarrow-\sigma\), but the mass and temperature change sign. The topological SdS black holes share various properties with SAdS black holes. As listed above, we found that quantum improved Schwarzschild black holes with negative cosmological constant have a richer phase structure than their classical counter-parts, as well as those with positive and vanishing cosmological constant. For this, we have however assumed that the thermodynamic free energy is valid for the black holes quantum improved at the solution level. It would be interesting to investigate whether one can also compute the free energy by considering quantum improvement of the reduced action for euclidean black holes. Figure 5: (a) The Hawking temperature and (b) heat capacity for quantum improved Schwarzschild black holes with \(G_{0}=1\). The red dash line represents classic result without quantum improvement. The blue, green and purple solid lines correspond to \(\tilde{\omega}=1,2,10\,(\zeta=0.01,0.02,0.1)\). In the cosmological context, we are more concerned with quantum improved Schwarzschild black holes with positive or vanishing cosmological constant. In general, primordial black holes of mass \(\sim 10^{14}\)g produced in the early universe are expected to be undergoing now their final stages of the Hawking evaporation. However, once the quantum effects are turned on, the evaporating process would stop at some point where their horizon size is still finite, but the Hawking temperature is vanishing as can be seen in Fig. 4 and 5. Thus, such primordial black holes eventually settle down to thermodynamically stable, zero-temperature remnants, rather than being completely evaporated away. Although they may typically be Planck-size objects, such remnants might possibly be a candidate of dark matter. It is worth pursuing such an intriguing possibility. In this paper, we have chosen the simplest identification (3) to study the phase structure of the black holes. In this case, we find that the singularity at the center of the black holes may not be resolved. 
Unfortunately, in the Kerr black holes, the consistency of the thermodynamics requires that the scale \(k\) should be a function of the horizon area, and naive extension of this away from the horizon does not allow such singularity resolution [20]. However, it is also possible to consider the possibility to include the mass parameter of the black hole in the identification. Indeed, with such dependence, it is possible that the singularity may be resolved [7]. It would be interesting to investigate what restriction emerges when we allow the mass parameter in the identification, and whether we may get the possibility that the singularity may be resolved. ###### Acknowledgements. The work of C.M.C. was supported by the National Science and Technology Council of the R.O.C. (Taiwan) under the grants NSTC 111-2112-M-008-012. The work of A.I. was supported in part by JSPS KAKENHI Grants No. 21H05182, 21H05186, 20K03938, 20K03975, 17K05451, and 15K05092. The work of N.O. was supported in part by the Grant-in-Aid for Scientific Research Fund of the JSPS (C) No. 16K05331, 20K03980, and Taiwan NSTC 111-2811-M-008-024. ## Appendix A Free Energy and Hawking-Page Transition For the classical SAdS black holes, the free energy can be computed from the action. In \((d+1)\)-dimensional spacetimes, the action is (\(\Lambda=-d(d-1)/2L^{2}\)) \[S=-\frac{1}{16\pi G}\int_{\mathcal{M}}d^{d+1}x\sqrt{-g}\left(R+\frac{d(d-1)}{L^ {2}}\right)+\frac{1}{8\pi G}\int_{\partial\mathcal{M}}d^{d}x\sqrt{-\gamma} \left(-K+\frac{d-1}{L}+\cdots\right), \tag{10}\] where \(K\) is the trace of the extrinsic curvature of the boundary \(\partial\mathcal{M}\). For the SAdS black holes \[ds^{2}=-f(r)dt^{2}+\frac{dr^{2}}{f(r)}+r^{2}d\Omega_{d-1}^{2},\qquad f=1-\frac {2G_{0}M}{r^{d-2}}+\frac{r^{2}}{L^{2}}, \tag{11}\] the horizon of black holes is located at \(2GM=r_{h}^{d-2}(1+r_{h}^{2}/L^{2})\) and the associated temperature is \[T=\frac{f^{\prime}(r_{h})}{4\pi}=\frac{d-2}{4\pi r_{h}}+\frac{dr_{h}}{4\pi L^ {2}}=\frac{dr_{h}^{2}+(d-2)L^{2}}{4\pi r_{h}L^{2}}. \tag{12}\] There is a minimal value of temperature when \(r_{h}=r_{\rm min}=\sqrt{\frac{d-2}{d}}\,L\): \[T_{\rm min}=\frac{\sqrt{d(d-2)}}{2\pi L}. \tag{13}\] The free energy in the saddle point approximation is \(F=-T\ln\mathcal{Z}\approx T\,S_{\rm on\ shell}^{E}\). The Euclidean on-shell action, \(S_{\rm on\ shell}^{E}\) for SAdS black hole and thermal AdS are obtained by the integration outside the horizon in the full spacetime with \(R=\frac{2(d+1)}{d-1}\Lambda\): \[S_{\rm SAdS} = \frac{d}{8\pi GL^{2}}\int_{0}^{\beta}d\tau\int d\Omega_{d-1}\int_ {r_{h}}^{r_{\infty}}r^{d-1}dr=\frac{\beta\Omega_{d-1}(r_{\infty}^{d}-r_{h}^{d} )}{8\pi GL^{2}}, \tag{14}\] \[S_{\rm AdS} = \frac{d}{8\pi GL^{2}}\int_{0}^{\beta_{0}}d\tau\int d\Omega_{d-1} \int_{0}^{r_{\infty}}r^{d-1}dr=\frac{\beta_{0}\Omega_{d-1}r_{\infty}^{d}}{8\pi GL ^{2}}. \tag{15}\] By matching the proper temperature \(T(r)=T/\sqrt{-g_{tt}}\) on the horizon, we have \[\beta\sqrt{f}=\beta_{0}\sqrt{f_{0}}\quad\rightarrow\quad\beta_{0}=\beta\sqrt{f/ f_{0}}. 
\tag{100}\] Using this relation, we can obtain the difference in the free energies: \[\Delta F = \frac{S_{\rm SAdS}-S_{\rm AdS}}{\beta}=\frac{\Omega_{d-1}}{8\pi GL^{2 }}\left(r_{\infty}^{d}-r_{h}^{d}-r_{\infty}^{d}\sqrt{\frac{f(r_{\infty})}{f_{0} (r_{\infty})}}\right) \tag{101}\] \[\approx \frac{\Omega_{d-1}}{8\pi GL^{2}}\left[r_{\infty}^{d}-r_{h}^{d}-r_ {\infty}^{d}\left(1-\frac{GML^{2}}{r_{\infty}^{d}}\right)\right]=-\frac{\Omega _{d-1}r_{h}^{d-2}(r_{h}^{2}-L^{2})}{16\pi GL^{2}}.\] The critical temperature for the HP transition, corresponding to the point with zero free energy difference, \(r_{h}=r_{\rm HP}=L>r_{\rm min}\), is \[T_{\rm HP}=\frac{d-1}{2\pi L}. \tag{102}\] There is a first order phase transition, namely the HP transition, indicating a phase change from the AdS thermal spacetime (higher free energy) to a large SAdS black hole (lower free energy) for \(T>T_{\rm HP}\). Let us consider another approach to deriving the free energy, via black hole thermodynamics. The related thermodynamic quantities, namely the energy and entropy, are given by \[E=\frac{(d-1)\Omega_{d-1}}{8\pi}M,\qquad S=\frac{\Omega_{d-1}r_{h}^{d-1}}{4G}. \tag{103}\] According to the first law \[\delta E=T\delta S, \tag{104}\] the "enthalpy" [24]\(E\) is identical with the internal energy \(U=E\). Therefore the Helmholtz free energy (identical with the Gibbs function) for SAdS black holes is \[F=U-TS=E-TS=-\frac{\Omega_{d-1}r_{h}^{d-2}}{16\pi GL^{2}}(r_{h}^{2}-L^{2}). \tag{105}\] This is exactly the \(\Delta F\) in (101).
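As a quick numerical sanity check of the appendix formulas (not part of the original derivation; units and values are illustrative), one can verify that the temperature (12) attains the minimum (13) at \(r_{\rm min}\), and that \(\Delta F\) of (101) vanishes at \(r_{h}=L\), where the temperature equals \(T_{\rm HP}\) of (102):

```python
from math import gamma, pi, sqrt

d, G, L = 3, 1.0, 1.0                              # illustrative units
Omega = 2.0 * pi**(d / 2) / gamma(d / 2)           # area of the unit S^(d-1)

def T(rh):          # temperature, Eq. (12)
    return (d * rh**2 + (d - 2) * L**2) / (4.0 * pi * rh * L**2)

def dF(rh):         # free energy difference, Eq. (101)
    return -Omega * rh**(d - 2) * (rh**2 - L**2) / (16.0 * pi * G * L**2)

r_min = sqrt((d - 2) / d) * L
print(T(r_min), sqrt(d * (d - 2)) / (2.0 * pi * L))   # both give T_min, Eq. (13)
print(dF(L), T(L), (d - 1) / (2.0 * pi * L))          # dF(L) = 0 at T_HP, Eq. (102)
```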
2308.08894
Dynamic metastable vortex states in interacting vortex lines
The electron transport in current-biased superconducting nano-bridges is determined by the motion of the quantum vortex confined in the internal disorder landscape. Here we consider a simple case of a single or two neighbouring linear defects crossing a nano-bridge. The strong anharmonicity of the vortex motion along the defect leads, upon RF-excitation, to fractional Shapiro steps. In the case of two defects, the vortex motion becomes correlated, characterized by metastable states that can be locked to a resonant RF-drive. The lock-unlock process causes sudden voltage jumps and drops in the voltage-current characteristics observed in experiments. We analyze the parameters promoting these metastable dynamic states and discuss their potential applications in quantum devices.
Sergei Kozlov, Jérôme Lesueur, Dimitri Roditchev, Cheryl Feuillet-Palma
2023-08-17T10:01:17Z
http://arxiv.org/abs/2308.08894v1
# Dynamic metastable vortex states in interacting vortex lines ###### Abstract The electron transport in current-biased superconducting nano-bridges is determined by the motion of the quantum vortex confined in the internal disorder landscape. Here we consider a simple case of a single or two neighbouring linear defects crossing a nano-bridge. The strong anharmonicity of the vortex motion along the defect leads, upon RF-excitation, to fractional Shapiro steps. In the case of two defects, the vortex motion becomes correlated, characterized by metastable states that can be locked to a resonant RF-drive. The lock-unlock process causes sudden voltage jumps and drops in the voltage-current characteristics observed in experiments. We analyze the parameters promoting these metastable dynamic states and discuss their potential applications in quantum devices. Superconductivity, Abrikosov vortex, Time-Dependent Ginzburg-Landau ## I Introduction Quantum vortices are famous topological objects - lines of \(2\pi\)-phase singularities in the many-body wave function of coherent quantum condensates. In superconductors, where the condensed particles are electrically charged Cooper pairs, the phase gradients generate vortex currents circulating around singularities and producing a magnetic flux. The vortices strongly influence the characteristics of superconductors, limiting their critical currents and fields. In fact, externally applied currents and fields interact with the vortices, forcing them to move. In the vortex centers - cores - the superconductivity is suppressed, and the normal state is recovered. The vortex motion is therefore dissipative, often triggering the transition to the normal state of the entire system. In their motion inside superconductors, vortices interact with various local and extended defects, as well as with other vortices and obstacles. The collection of defects, along with other moving and pinned vortices, forms a potential landscape in which a given vortex evolves. This landscape is generally dynamic and intricate, comprising local minima and saddle points. Consequently, the formation of various metastable states can occur, with their characteristic energies and stability subject to perturbation by external magnetic fields, DC or AC currents. In this work, we focus on the over-critical behaviour of current-biased superconducting nano-bridges (see Fig.1a as an example), aiming to explain the spectacular results of recent experimental findings. In most cases, a nano-bridge behaves like a Josephson junction: when the critical current \(I_{c}\) is reached, the bridge transits to the normal state. However, this transition is not always abrupt: the voltage \(V(I>I_{c})\) across the bridge increases progressively, often over several orders of magnitude, before the device reaches a fully normal state. In this progressive transition, the differential resistance \(dV/dI(I)\) can exhibit drops, telegraphic "noise" behaviour, and even become negative [1; 2; 3]. When a microwave excitation is added, the nano-bridges display, similarly to Josephson junctions, the famous Shapiro steps in DC \(V(I)\) characteristics; both integer and fractional plateaus are observed [4; 5; 6; 7; 8]. To gain a microscopic insight into the physical origin of the observed phenomena, we provide Time-Dependent Ginzburg-Landau (TDGL) calculations in a superconducting nano-bridge in which only a few typical vortex pinning centers - grain boundaries - are present [9].
We show that such a minimalistic disorder landscape is enough to explain several experimental results related to correlated vortex motion in disordered nano-bridges. ## II Results ### Model The model system is a rectangular superconducting sample representing the central part of a typical nano-bridge, as shown in Fig. 1. The model bridge has a length of \(L=60\xi\) and a width of \(W=40\xi\), where \(\xi\) represents the Ginzburg-Landau (GL) coherence length. Within the TDGL framework, the temporal and spatial evolution of the complex superconducting order parameter \(\psi(t,\mathbf{r})\) can be expressed as [11] (see Methods): \[\begin{split}&\partial_{t}\psi=\epsilon(\mathbf{r})\psi-|\psi|^{2} \psi+(\nabla-\imath\mathbf{A})^{2}\psi\\ &\varkappa^{2}\nabla\times(\nabla\times\mathbf{A})=\mathbf{J}_{ \mathrm{S}}+\mathbf{J}_{\mathrm{N}},\end{split} \tag{1}\] where \(\mathbf{A}\) is the vector potential due to the magnetic field, and \(\mathbf{J}_{S}\) and \(\mathbf{J}_{N}\) are the superconducting and normal components of the electric current. The GL parameter \(\varkappa=\lambda/\xi\) (\(\lambda\) is the field penetration depth) is taken equal to \(\varkappa=4\), meaning that the bridge is in the type-II regime (see Methods). The defects, as depicted in Fig.1b,c, are introduced by spatially varying the parameter \(\epsilon(\mathbf{r})\), which is associated with the local critical temperature \(T_{c}(\mathbf{r})\) and the global sample temperature \(T\): \[\epsilon(\mathbf{r})=\frac{T_{c}(\mathbf{r})-T}{T} \tag{2}\] The superconducting part of the bridge is described by \(\epsilon\)=1, while the defects are characterized by a locally reduced critical temperature \(T_{c}(\mathbf{r})\) and are described by a lower \(\epsilon(\mathbf{r})\). For instance, the linear (grey) defect of width \(1\times\xi\) crossing the bridge is characterized by \(\epsilon=0.5\). It represents an extended structural defect - a grain boundary crossing the real sample or an artificial weak-link (possible experimental realizations are discussed in Sec.III). At a given temperature \(T\), this defect is superconducting, but its local critical temperature is \(3/4\) of the critical temperature \(T_{c}\) in the rest of the sample. The two point defects at the edges, represented by black rectangles \(2\xi\times 5\xi\), are characterized by \(\epsilon=0\), which corresponds to fully suppressed superconductivity. These edge defects appear at the ends of the grain boundaries as a result of damage caused during the nano-bridge fabrication processes. When simulating the Shapiro step experiments, the microwave illumination is added as an AC-current of amplitude \(I_{{}_{AC}}\) and frequency \(f_{{}_{AC}}\). The total transport current through the bridge is therefore: \[I_{tr}=I_{{}_{DC}}+I_{{}_{AC}}\sin(2\pi f_{{}_{AC}}t), \tag{3}\] The state of the bridge is determined by calculating the voltage \(V\) between the \(y=0\) and \(y=L\) boundaries for each value of the transport current (see Methods). By averaging this voltage over the sample width and time, one gets the DC-voltage \(\langle V\rangle\) measured in experiments. ### Single linear defect As a starting point, we consider a single linear defect, as shown in Fig.1b, that simulates a grain boundary crossing the bridge. Additionally, two point defects are introduced at the ends of the linear defect, representing suppressed superconductivity in the locations where the grain boundary reaches the sample edges. The results of the calculations are presented in Fig.2.
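To illustrate the ingredients entering Eqs. (1)-(2), the following is a deliberately simplified Python sketch of the \(\epsilon(\mathbf{r})\) landscape and an explicit Euler step of the first TDGL equation in the zero-field limit (\(\mathbf{A}=0\)). The grid spacing, time step, boundary handling, and the omission of the current bias and electrostatic potential are all simplifying assumptions; this is not the solver used for the results below.

```python
import numpy as np

dx, dt = 0.5, 0.05            # hypothetical grid spacing (units of xi) and time step
W, L = 40, 60                 # bridge width and length in units of xi
nx, ny = int(W / dx), int(L / dx)

# epsilon(r) landscape of Eq. (2): bulk = 1, linear defect = 0.5, edge defects = 0
eps = np.ones((nx, ny))
jc = ny // 2
eps[:, jc:jc + 2] = 0.5            # linear defect of width ~ xi across the bridge
eps[:4, jc - 5:jc + 5] = 0.0       # edge defect (2 xi x 5 xi) at one end
eps[-4:, jc - 5:jc + 5] = 0.0      # edge defect at the other end

rng = np.random.default_rng(1)
psi = 1.0 + 0.01 * (rng.standard_normal((nx, ny))
                    + 1j * rng.standard_normal((nx, ny)))

def laplacian(f):
    """5-point Laplacian with zero-gradient boundaries."""
    fp = np.pad(f, 1, mode="edge")
    return (fp[2:, 1:-1] + fp[:-2, 1:-1] + fp[1:-1, 2:] + fp[1:-1, :-2]
            - 4.0 * f) / dx**2

for _ in range(2000):          # explicit Euler stepping of the first line of Eq. (1)
    psi += dt * (eps * psi - np.abs(psi)**2 * psi + laplacian(psi))
```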
When the transport current \(I_{tr}\) is well below the critical value \(I_{c}\), the order parameter in the bridge is steady. It is depleted at the two local edge defects and at the linear defect, Fig.2a, following the imposed \(\epsilon(\mathbf{r})\). At \(I_{tr}\lesssim I_{c}\), one vortex and one anti-vortex are already pinned at the two edge defects, Fig.2b. The entire bridge remains in the superconducting state, with \(V\)=0, as expected. Figs.2c-e are snapshots of the temporal evolution of the order parameter amplitude when a constant \(I_{tr}=0.10>I_{c}\) is applied. Under this condition, one vortex and one anti-vortex simultaneously enter the bridge, Fig.2c. They accelerate towards each other under the action of the Lorentz force, experience mutual attraction, Fig.2d, and annihilate, Fig.2e. The process is periodic, with the period and details of the vortex-antivortex dynamics depending on the TDGL parameters. Moving vortices dissipate energy and generate an instantaneous voltage \(V(t)\) proportional to the relative vortex velocity. Fig.2f illustrates the evolution of \(V(t)\), with the points c-e corresponding to the snapshots in Figs.2c-e. At the moment (c), the voltage rapidly rises as the vortices accelerate due to their interaction with the edges and the transport current. In (d), \(V\) crosses a local minimum as the vortex velocity drops in a region where the interaction with the edge is already sufficiently small and the transport current is reduced on the scale of \(\sim\lambda\). A sharp increase in the vortex velocity due to the vortex-antivortex attraction just before annihilation produces a peak in \(V(t)\) at moment (e). The Fourier spectrum of \(V(t)\) is presented in Fig.2g. It contains the fundamental frequency with an amplitude \(V_{1}\) and several harmonics with comparable amplitudes. Both \(V(t)\) and its spectrum indicate the strong anharmonicity of the vortex motion. It is important to note that the fundamental frequency is not fixed but grows with \(I\)[12], as the increasing Lorentz force pushes the vortices to move and annihilate faster. Figure 1: **Defects in superconducting nano-bridges.****a** Scanning Electron Microscope image of a typical nano-bridge studied in experiments [10]. **b** and **c** Two sample geometries studied theoretically, representing the central (narrowest) part of the real device. They contain one (**b**) or two (**c**) linear defects (grey regions). Edge defects situated at the ends of the linear defects are indicated by black rectangles. The direction of the transport current is shown by arrows. The voltage is calculated between the two blue dashed lines. Further details are provided in the text. Figure 2: **Vortex dynamics in a single linear defect with no AC current applied (\(I_{{}_{AC}}=0\)).****a** Static map of the order parameter amplitude \(|\psi(\mathbf{r})|\) at low transport currents \(I_{{}_{DC}}\ll I_{c}\). **b** Static \(|\psi(\mathbf{r})|\) map at \(I_{{}_{DC}}\lesssim I_{c}\), indicating the presence of one vortex and one anti-vortex at the edge defects, ready to enter. **c-e** Snapshots of \(|\psi(\mathbf{r})|\) at different moments of vortex propagation for a fixed \(I_{{}_{DC}}=0.10>I_{c}\). **f** Periodic temporal evolution of the instantaneous voltage \(V(t)\) under the same conditions. The dots c, d and e on the graph correspond to snapshots **c**, **d** and **e**. The period of the \(V(t)\) oscillations provides the fundamental frequency \(f_{1}\) of the process. **g** Fourier spectrum of \(V(t)\).
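The extraction of \(f_{1}\) and of the harmonic amplitudes \(V_{k}\), as in Fig.2g, amounts to a discrete Fourier transform of the \(V(t)\) trace. A minimal sketch follows; the sawtooth-like test signal and all parameter values are illustrative stand-ins for actual simulation output.

```python
import numpy as np

def voltage_spectrum(v, dt):
    """One-sided amplitude spectrum of a voltage trace v(t) sampled at step dt."""
    v = v - v.mean()                              # remove the DC level <V>
    amp = 2 * np.abs(np.fft.rfft(v)) / len(v)     # harmonic amplitudes V_k
    freq = np.fft.rfftfreq(len(v), d=dt)
    return freq, amp

# Illustrative anharmonic test trace: its harmonics V_k decay only slowly,
# mimicking the comparable-amplitude harmonics of the simulated V(t).
dt = 0.05
t = np.arange(0.0, 500.0, dt)
v = sum(np.sin(2 * np.pi * k * 0.02 * t) / k for k in range(1, 6))
freq, amp = voltage_spectrum(v, dt)
f1 = freq[np.argmax(amp)]                         # fundamental frequency f_1
print(f"f_1 ~ {f1:.3f} (0.02 was used to build the test signal)")
```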
By repeating the calculations for different \(I_{{}_{DC}}\) and time-averaging \(\langle V(t)\rangle\), we retrieve the \(\langle V\rangle(I_{{}_{DC}})\) dependencies measured in experiments. The dark green curve in Fig.3 is the result of these calculations for the range of \(I_{{}_{DC}}\) around the transition from the non-dissipative to the dissipative state. The plots are presented in reduced coordinates \(\langle V\rangle/\mu_{0}\) vs \(I_{{}_{DC}}/(J_{0}W)\) (see Methods). The shape of the curve resembles the \(V(I)\) characteristics of an ordinary Superconductor - Normal metal - Superconductor (SNS) Josephson junction. The latter, represented by the dashed line in Fig.3, was calculated using the Resistively Shunted Junction (RSJ) model. Both curves exhibit a non-dissipative branch at low currents, a rise at some critical current, and a smooth increase at higher currents. However, the resemblance is limited. First, in the present case, there is no actual SNS junction, and the intrinsic critical (depairing) current of the bridge \(\sim J_{0}W\) is much higher than the calculated value \(I_{c}\simeq 0.072J_{0}W\). Second, in SNS junctions, the \(\langle V\rangle(I_{{}_{DC}})\) curve asymptotically approaches the normal branch \(\langle V\rangle=R_{N}I_{{}_{DC}}\) as \(I_{{}_{DC}}\) increases, while the bridge "resistance" \(\langle V\rangle/I_{DC}\times(J_{0}W/\mu_{0})\) remains much lower than its normal state resistance \(R_{N}\) (this is why the vertical scale of the RSJ curve was reduced to fit it in the plot window). Third, the RSJ model fails to reproduce the almost linear rise of \(\langle V\rangle(I_{{}_{DC}})\) for \(I>I_{c}\). These three deviations originate from the fact that, contrary to SNS junctions, in which the voltage appears as a result of the suppression of the proximity-induced superconducting correlations in the N-part, in nano-bridges the voltage is due to individual vortex motion inside a still superconducting device. This difference is essential, leading to the unique transport properties on which we focus in this work. The SNS-like behaviour of the bridge is further evidenced by simulating its response to microwave illumination. When an AC-current is added, the oscillating voltage \(V(t)\) can be locked to the frequency \(f_{{}_{AC}}\) of this external drive, resulting in plateaus of constant voltage on the \(\langle V\rangle(I_{{}_{DC}})\) curve, as shown in Fig.3. This effect resembles the well-known Shapiro steps observed in ordinary Josephson junctions under microwave illumination, where the \(n^{th}\) voltage plateau is defined by the locking condition \(f_{1}=n\cdot f_{{}_{AC}}\) (where \(n\) is an integer) and the second Josephson relation \(\langle V(t)\rangle=hf_{1}/2e\), where \(f_{1}\) is the fundamental (Josephson) frequency. In addition to the integer Shapiro plateaus, fractional ones are also revealed. These plateaus appear in Fig.3 at voltages satisfying the condition \(\frac{n}{k}\cdot hf_{{}_{AC}}=2e\langle V(t)\rangle\), where \(n\) and \(k\) are integers. The presence of fractional plateaus is directly linked to the high anharmonicity of the \(V(t)\) oscillations in Fig.2f. The Fourier spectrum of \(V(t)\), presented in Fig.2g, is indeed characterized by high amplitudes \(V_{k}\) of the \(k^{th}\) voltage harmonics (at frequencies \(f_{k}\)), comparable to the amplitude \(V_{1}\) at the fundamental frequency \(f_{1}\). This enables an efficient locking of these harmonics to the AC-drive when \(f_{k}=kf_{1}=nf_{{}_{AC}}\).
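For reference, the RSJ comparison curve follows from a single first-order equation for the junction phase; the sketch below integrates it in reduced units (\(I_{c}=R=1\)) and, with the AC term included, reproduces integer Shapiro plateaus at \(\langle V\rangle=n\,\omega\). The numerical parameter values here are illustrative, not those used for the dashed line in Fig.3.

```python
import numpy as np

def rsj_iv(i_dc, i_ac=0.8, omega=1.5, dt=0.005, n_steps=400_000):
    """<V>(I_DC) of an overdamped RSJ junction in reduced units (I_c = R = 1):
    dphi/dt = i_dc + i_ac*sin(omega*t) - sin(phi), with <V> = <dphi/dt>.
    Vectorized over an array of DC bias points; forward-Euler integration."""
    phi = np.zeros_like(i_dc, dtype=float)
    k0 = n_steps // 4                       # steps discarded as the transient
    phi0 = None
    for k in range(n_steps):
        phi += dt * (i_dc + i_ac * np.sin(omega * k * dt) - np.sin(phi))
        if k == k0:
            phi0 = phi.copy()
    return (phi - phi0) / (dt * (n_steps - 1 - k0))

bias = np.linspace(0.0, 2.0, 201)
v_dc = rsj_iv(bias)      # with the AC drive, plateaus appear at <V> = n * omega
```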
In Fig.4, the instantaneous voltage \(V(t)\) and its spectrum for the Shapiro plateau \(1/3\) are shown. When the frequency \(f_{{}_{AC}}\) is slightly detuned from \(f_{k}/n\), a low-frequency envelope with a beat frequency \(|f_{{}_{AC}}-f_{k}/n|\) is observed. This effect allows for the experimental detection of higher harmonics, even when their magnitude is small and imperceptible in the \(\langle V\rangle(I_{{}_{DC}})\) curves of the Shapiro step experiment. Consequently, by tuning the amplitude and frequency of the AC excitation, it becomes possible to induce fractional Shapiro steps when \(f_{{}_{AC}}\) is a multiple not only of \(f_{1}\) but also of the higher harmonics \(f_{k}\). All these features reveal the rich spectral characteristics of the considered system. It should be mentioned that when the amplitude \(I_{{}_{AC}}\) becomes comparable to \(I_{{}_{DC}}\), the AC-excitation cannot be considered as a perturbation anymore. Instead, one should think of a complex dynamical system whose spectrum (amplitudes \(V_{k}\) and frequencies \(f_{k}\)) depends on both components of \(I_{tr}\). Figure 3: **Transport properties of the nano-bridge with one linear defect.** Solid lines - normalized \(\langle V\rangle(I_{{}_{DC}})\) characteristics calculated for different values of the AC component \(I_{{}_{AC}}\) of the total transport current. The right vertical axis displays the numbers of the Shapiro plateaus. Dashed line - \(V(I)\) characteristic of a SNS Josephson junction calculated within the RSJ model [13; 14]. To fit the curve into the plot window, its vertical scale was divided by a factor of 10. Concluding this section, it is important to recall the pioneering works [15; 16] that predicted similarities between the transport properties of Josephson junctions and those of nano-bridges crossed by vortices (or phase-slips). These predictions were later confirmed in several experiments, where both integer [5; 6; 8; 17] and fractional [4] Shapiro steps were observed. This analogy was also explored, both experimentally and theoretically, in the case of vortices jumping between pinning sites [18; 19; 20; 21; 12]. ### Two neighbouring linear defects In experimentally studied nano-bridges, the disorder is rarely represented by only one grain boundary. Most non-epitaxial superconducting films exhibit granularity on a scale of 20-200 nm, which can be significantly shorter than the nano-bridge width \(W\). For instance, the nano-meanders studied in [22] were fabricated from thin YBa\({}_{2}\)Cu\({}_{3}\)O\({}_{7-\delta}\) films. They possess a specific morphology [23] and form a network of grain boundaries. Statistically, several such boundaries can cross the bridge. The vortex motion in these networks is much more complex than in the single grain boundary studied above. While the vortex cores are confined within the grain boundaries [9], the vortex currents extend far beyond them; they circulate on the scale of \(\lambda\) or, in ultrathin films, on the even larger scale of the Pearl penetration depth [24]. This results in a mutual interaction between vortices present in different grain boundaries, affecting their collective motion. As a step towards accounting for this complexity, we now consider two linear defects (grain boundaries) characterized by \(\epsilon\)=0.5. The defects are separated by a distance \(l=5\xi\sim\lambda\), Fig.1c, thus inducing an interline vortex-vortex interaction.
The calculated \(\langle V\rangle(I_{{}_{DC}})\) characteristic in the case of two identical linear defects is presented as a dark green solid line in Fig.5. The shape of this curve is almost identical to that obtained in the case of a single linear defect. As in the previous case, at DC-currents just above the critical one, \(I_{{}_{DC}}\gtrsim I_{c}\), one vortex-antivortex pair enters the nano-bridge and moves along one of the two linear defects, thus generating a non-zero voltage. The only difference from the single defect case is that after the vortex-antivortex annihilation in one line, a new vortex-antivortex pair enters the other line, and the process repeats. A further increase of the DC-current leads to an acceleration of the vortices and, consequently, to an increase of the voltage \(\langle V\rangle\). At a high enough DC-current, the system enters a new state in which a second vortex-antivortex pair enters the second line before the first pair annihilates in the first one. This moment is witnessed by a slight inflexion of the \(\langle V\rangle(I_{{}_{DC}})\) curve at \(I_{{}_{DC}}\simeq\)0.084. In this state, there are two vortex-antivortex pairs in the nano-bridge at the same time. Due to the mutual repulsion of vortices of the same sign, they try to position themselves as far from each other as possible while remaining inside the linear defects. This leads to a lateral \(x\)-shift of the vortex positions in neighbouring lines, as shown in Fig.6a. This dynamic vortex pattern is reminiscent of the static Abrikosov vortex lattice. As time advances, the vortex-antivortex pair in the bottom line annihilates, the one in the top line advances towards the center, and a new one enters the bottom line. By adding a low AC-current, one gets Shapiro plateaus that also look very similar to the single defect case (brown line in Fig.5). However, at higher AC-currents new features appear. These are large 2/3 Shapiro plateaus on the \(\langle V\rangle(I_{{}_{DC}})\) curves with a rapid voltage rise on their left side and a voltage drop on their right side (hand-added smooth dashed lines help to appreciate the amplitude of the effect). Unlike the other plateaus, the width of the 2/3 plateau rapidly grows with the AC-current (compare the curves at \(I_{{}_{AC}}=\)0.02, 0.03, and 0.05). Figure 4: **Evolution of the instantaneous voltage \(V(t)\) for the Shapiro plateau \(1/3\) of Fig.3.****a**\(V(t)\) at constant \(I_{{}_{DC}}\) and \(I_{{}_{AC}}\) for the detuned frequency \(f_{{}_{AC}}=0.625\,f_{3}\). The low-frequency envelope due to the beat effect is visible. **b**\(V(t)\) at the resonance \(f_{{}_{AC}}=f_{3}\). **c** Frequency spectrum of \(V(t)\) in the case (**b**). To understand the origin of this phenomenon, let us consider the dynamics of the system close to the voltage drops. In the specific case of \(I_{{}_{AC}}=0.03\), this occurs at \(I_{{}_{DC}}^{drop}\simeq 0.076\), as indicated by the arrow on the \(\langle V\rangle(I_{{}_{DC}})\) curve in Fig.5. The calculations show that just above \(I_{{}_{DC}}^{drop}\), the vortex-antivortex motion in the two defects is sequential, as presented in Fig.6a, while just below \(I_{{}_{DC}}^{drop}\) (that is, on the plateau) it is synchronous: vortex-antivortex pairs enter the defects simultaneously, move in parallel to each other (see the snapshot in Fig.6b), and annihilate at the same time. This leads to high peak-to-peak voltage spikes in \(V(t)\), like those visible on the left side of Fig.6c. The synchronous configuration is not stable by itself.
Indeed, when a vortex in one line is located directly under a vortex in the other, the projection of the vortex-vortex repulsion force on the \(x\)-axis is zero, and any \(x\)-shift of their positions gives rise to an \(x\)-axis component of the vortex-vortex repulsion which drives the system out of this unstable balance towards the more stable checkerboard configuration, Fig.6a. Thus, the metastable configuration of Fig.6b is stabilized by the external AC-drive, which acts as a periodic force; if the amplitude of this force (proportional to \(I_{{}_{AC}}\)) is sufficient, the configuration is maintained in some range of external parameters, giving rise to a plateau on the \(\langle V\rangle(I_{{}_{DC}})\) curve. When the DC-current is slightly increased above \(I_{{}_{DC}}^{drop}\), the Lorentz force increases, and the system jumps down to the stable configuration of Fig.6a. The corresponding evolution of \(V(t)\) is presented in Fig.6c. One can observe that after a few periods of high peak-to-peak voltage oscillations, the system transitions to oscillations with a nearly twice lower peak-to-peak voltage (compare the left and right parts of Fig.6c). This change is due to the fact that in the metastable state the vortex-antivortex annihilation takes place simultaneously in the two lines, while in the stable configuration the process is sequential. In the stable configuration, the system is no longer locked to the 2/3 Shapiro step. A movie illustrating the oscillatory dynamics of this transition is provided in the Supplementary Material. The motion of vortices in the two close linear defects can be seen as a system of two coupled identical anharmonic oscillators (a toy sketch of this picture is given below). In this representation, the two oscillation patterns of Fig.6 can be seen as two modes, one of which is low in energy (\(E_{0}\)) and therefore stable, while the other, at a higher energy \(E_{1}\), is metastable. Each of these modes depends on the DC current \(I_{{}_{DC}}\), and the evolution of the lowest mode corresponds to the \(\langle V\rangle(I_{{}_{DC}})\) curve at \(I_{{}_{AC}}=0\). The other mode can only be reached with an external excitation, in a certain range of pumping powers and frequencies. The calculated Fourier spectra of \(V(t)\) in the states \(E_{0}\) and \(E_{1}\) are presented in Fig.7. In the metastable configuration \(E_{1}\), the AC-drive locks to the third harmonic of the system as \(2f_{{}_{AC}}=3f_{1}\). The Josephson frequency is \(f_{1}=(2/3)f_{{}_{AC}}\) and, consequently, the DC-voltage measured in the experiment is \(\langle V(t)\rangle=(2/3)hf_{{}_{AC}}/2e\). This voltage remains constant as long as the system is locked to the drive, resulting in the unusual 2/3 Shapiro plateau in Fig.5. Immediately after the drop, the drive locks to the second harmonic as \(f_{{}_{AC}}=2f_{1}\), that is, \(f_{1}=(1/2)f_{{}_{AC}}\), resulting in a lower DC-voltage \(\langle V(t)\rangle=(1/2)hf_{{}_{AC}}/2e\). In principle, this could be the usual 1/2 Shapiro plateau due to anharmonicity. However, when the AC-current increases and the width of the unusual 2/3 plateau rapidly grows, the 1/2 plateau shrinks and disappears (compare the curves at \(I_{{}_{AC}}=0.02\), 0.03 and 0.05 in Fig.5). Note that as \(I_{{}_{DC}}\) is further increased above \(I_{{}_{DC}}^{drop}\), the Lorentz force rises, pushing the vortices to move faster; the corresponding frequencies grow, and the lock to the fixed frequency of the AC-drive is lost.
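The instability of the synchronous mode can be illustrated with a toy model that is much simpler than the TDGL calculation: two identical, identically driven overdamped washboard oscillators stand in for the vortex coordinates in the two lines, and a negative coupling \(g\) mimics the inter-line repulsion. The sketch contains no AC drive and is not the paper's model; all parameter values are illustrative.

```python
import numpy as np

def mean_phase_difference(delta0, i_dc=1.2, g=-0.2, dt=0.005, n=200_000):
    """Integrate dp/dt = i_dc - sin(p) + g*sin(p_other - p) for two coupled,
    overdamped washboard oscillators (g < 0 mimics inter-line repulsion).
    Returns the late-time mean |phase difference|, folded into [0, pi]."""
    p1, p2 = 0.0, float(delta0)
    deltas = np.empty(n)
    for k in range(n):
        f1 = i_dc - np.sin(p1) + g * np.sin(p2 - p1)
        f2 = i_dc - np.sin(p2) + g * np.sin(p1 - p2)
        p1, p2 = p1 + dt * f1, p2 + dt * f2
        deltas[k] = abs(np.angle(np.exp(1j * (p1 - p2))))
    return deltas[n // 2:].mean()

# delta0 = 0 is an invariant (synchronous) manifold: without any perturbation
# it persists, but a tiny initial shift grows and the pair relaxes towards the
# staggered mode with |delta| close to pi - the analogue of the checkerboard.
print(mean_phase_difference(0.0), mean_phase_difference(0.01))
```

In the paper, the synchronous mode is nevertheless observed on the 2/3 plateau because the AC drive acts as the stabilizing periodic force; reproducing that stabilization requires the full TDGL dynamics.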
This roller-coaster ride between the different metastable, stable, locked and unlocked states is reflected in the voltage spectra and, as a consequence, in the \(\langle V\rangle(I_{{}_{DC}})\) curve. Figure 5: \(\langle V\rangle(I_{{}_{DC}})\) **characteristics for different values of \(I_{{}_{AC}}\) in the case of two identical linear defects (displayed in Fig.1c).** The right vertical axis displays the numbers of the Shapiro steps. Dashed lines are used as eye-guides (see text). Figure 6: **Vortex dynamics in the case of two identical linear defects of Fig.1c.****a** and **b** Snapshots of the order parameter amplitude in the stable (**a**) and metastable (**b**) states near the transition (see text). **c** Evolution of \(V(t)\) at the transition from the metastable to the stable state at \(I_{{}_{AC}}=0.03\). The initial DC-current \(I_{{}_{DC}}=0.076\) switches to \(I_{{}_{DC}}=0.077\) at the moment \(t\)=500. Until now, we have considered a very idealistic case where the two coupled linear defects were identical. This situation could be realized in artificial stacks of SNS junctions [25] or periodic pinning arrays [26; 20], but not in nano-bridges made of films in which the intrinsic pinning landscape is aperiodic and the inter-grain coupling varies from one grain boundary to the other. To account for this diversity, we also studied asymmetric linear defects. In Fig.8, we show the \(\langle V\rangle(I_{{}_{DC}})\) characteristics for the case of two linear defects located as in Fig.1c but characterized by different \(\epsilon\) parameters: \(\epsilon=0.5\) and \(\epsilon=0.42\). The curves differ significantly from the previous case, even without AC-excitation (green curve). The critical current is lower, and for \(0.074<I_{{}_{DC}}<0.0805\), the voltage appears exclusively due to the vortex motion in the \(\epsilon=0.42\) line; the \(\epsilon=0.5\) line contains no vortices. Above \(I_{{}_{DC}}\simeq 0.0805\), the vortices start to penetrate the second line as well, and at high enough currents, \(I_{{}_{DC}}\gtrsim 0.085\), their motion becomes mutually synchronized, similarly to the previous case displayed in Fig.6a. Remarkably, in the intermediate current region, \(0.0805<I_{{}_{DC}}<0.085\), the two anharmonic oscillators have very different spectral fingerprints and, as a result, there is no clear synchronization of the vortex motion in the two lines; in this region, the \(\langle V\rangle(I_{{}_{DC}})\) characteristic demonstrates a bump with several local maxima and minima. When AC-excitation is added, integer and fractional Shapiro steps are observed, the latter stemming from the anharmonic nature of the vortex motion. The transitions to/from metastable modes are also observed, although their number is larger and their shape more complex and intricate than in the case of identical defects. Clearly, the vortex dynamics in the presence of asymmetric defects leads to a greater variety of collective motion modes. ## III Discussion The evolution of the Shapiro features in Figs.5,8 with increasing \(I_{{}_{AC}}\) is not trivial. At low AC-excitation, \(I_{{}_{AC}}\ll I_{{}_{DC}}\), the conventional Shapiro plateaus are narrow, and no signatures of metastable states are seen. In this regime, the AC-component acts as a probe that locks, at a fixed \(f_{{}_{AC}}\), onto the spectrum of the vortex motion, which is solely determined by the main driving (Lorentz) force \(\sim I_{{}_{DC}}\). As \(I_{{}_{DC}}\) increases, the vortices move faster, and \(f_{1}\) and \(f_{k}\) increase.
At some \(I_{{}_{DC}}\), a given \(f_{k}\) gets close enough to \(nf_{{}_{AC}}\), and the motion locks to \(f_{{}_{AC}}\); \(f_{1}\) remains fixed in some range of \(I_{{}_{DC}}\). As \(I_{{}_{DC}}\) further increases, the locking effect is lost. This results in a series of integer and fractional Shapiro plateaus visible on the \(\langle V\rangle(I_{{}_{DC}})\) curve at \(I_{{}_{AC}}\)=0.02. When \(I_{{}_{AC}}\) is increased and becomes comparable with \(I_{{}_{DC}}\), two phenomena appear. The first one is the well-known enlargement of the Shapiro plateaus, due to a stronger locking effect at higher AC-currents. The second one is related to the perturbation of the vortex motion spectrum by the oscillatory force \(\sim I_{{}_{AC}}\), whose amplitude becomes comparable to the Lorentz force due to the DC-current. The combined action of \(I_{{}_{DC}}\) and \(I_{{}_{AC}}\) enables the existence of metastable states. These can be locked to \(f_{{}_{AC}}\), resulting in jump-plateau-drop features as observed in Fig.5. The same phenomenon takes place in Fig.8, where many more voltage bumps and drops are observed (some jumps to metastable states are indicated by black arrows) as compared to Fig.5. The lifting of the degeneracy, which results in a richer and more complex spectrum of metastable states, is certainly behind these differences. Finally, the DC-current range where the feature appears rapidly extends with increasing \(I_{{}_{AC}}\). Figure 7: **Schematic energy diagram of the states considered in Fig.6.** Stable (\(E_{0}\)) and metastable (\(E_{1}\)) states are presented along with their frequency spectra. Figure 8: \(\langle V\rangle(I_{{}_{DC}})\) **characteristics for different \(I_{{}_{AC}}\) in the case of two different linear defects, \(\epsilon=0.5\) and \(\epsilon=0.42\).** The right vertical axis displays the numbers of the corresponding Shapiro steps. In the limit of a network of defects that is dense on the scale of \(W\), one would expect a huge number of apparently chaotically arranged voltage jumps, bumps and drops to appear on the \(\langle V\rangle(I_{{}_{DC}})\) curves, reflecting the vast number of accessed vortex motion modes and the complexity of the related spectra. The term "chaotic" is justified here by the high sensitivity of the accessed metastable configurations to external parameters such as \(I_{{}_{DC}}\), \(I_{{}_{AC}}\), \(f_{{}_{AC}}\), the disorder landscape, etc. Indeed, after unlocking from one metastable state, the system can jump down to a more stable configuration or lock up to another metastable state from the available set. As a result, the positions and shapes of the bumps and drops on the \(\langle V\rangle(I_{{}_{DC}})\) curves would appear arbitrary (see Fig.8), even though they are deterministic. The revealed voltage drops correspond to a negative dynamic resistance \(dV/dI(I_{{}_{DC}})\). The latter has been experimentally observed in periodic pinning arrays subject to a specific external magnetic field [27; 28; 29], where complex collective dynamics of vortices led to multiple phase transitions in their collective motion, with no need for an additional AC-drive, resulting in various features in the \(V(I)\) characteristics [27; 28; 29]. Another system is a perforated Nb film placed in an external magnetic field, where a negative dynamic resistance can appear due to the ratchet effect under AC-drive [3]. More recently, both Shapiro steps and negative dynamic resistance were observed in MoN strips with an artificial cut [30].
The authors attributed the negative dynamic resistance to the chaotic aperiodic vortex motion at high AC-excitation amplitudes. The ability to use the AC-excitation both as a pump and as a probe opens up interesting possibilities for the realization, spectroscopy and control of metastable states in superconducting weak-links. The obtained results demonstrate the potential for designing artificial disorder landscapes to achieve desired responses to the AC-amplitude and/or frequency. The general nature of weak-links suggests that there would be multiple ways of experimentally realizing these functionalities. One straightforward route is to engineer superconducting films with controlled disorder by using Focused Ion Beam approaches [3] or to deposit superconducting materials onto faceted structures [31]. By carefully designing the spatial distribution of defects or grain boundaries, one could tailor the response of the weak-links to both DC- and AC-excitation. Another avenue is to overlay superconducting weak-links with ferromagnetic strips that can locally suppress the superconducting order parameter due to the inverse proximity effect [32; 33; 34; 35; 36]. This can introduce additional complexity in the vortex dynamics and lead to novel effects under microwave excitation. ## IV Conclusion In this work, we numerically studied the transport properties of current-carrying superconducting nano-bridges subject to microwave illumination. The granularity of experimentally measured devices was accounted for by introducing one or two linear defects (simulating grain boundaries) oriented perpendicular to the applied current. We revealed a rich and complex dynamics of the vortex motion along these defects. Its strong anharmonicity enabled us to lock the spectrum of the system to an external periodic drive and to obtain both integer and fractional Shapiro plateaus in the DC voltage-current characteristics. In the case of two close linear defects, the inter-vortex coupling leads to the appearance of collective modes of correlated motion, with multiple stable and metastable states. The transitions between these states are revealed in the current-voltage characteristics as regions of negative differential resistance \(dV/dI(I_{{}_{DC}})\). By tuning the external drive amplitude and frequency, it becomes possible to pump the system to higher-resistance metastable modes and stabilize it there, in a finite range of DC transport currents. A step out of this range leads to a relaxation to a lower-resistance mode. The ability to control and stabilize different modes of the vortex motion opens up new possibilities for designing superconducting devices with tunable transport properties and novel functionalities. ## V Methods Within the TDGL framework, the temporal and spatial evolution of the complex superconducting order parameter \(\psi(t,\mathbf{r})\) can be expressed as [11]: \[\begin{split}& u\left(\partial_{t}+\imath\mu\right)\psi=\epsilon(\mathbf{r})\psi-|\psi|^{2}\psi+\left(\nabla-\imath\mathbf{A}\right)^{2}\psi\\ &\varkappa^{2}\nabla\times\left(\nabla\times\mathbf{A}\right)=\mathbf{J}_{\mathrm{S}}+\mathbf{J}_{\mathrm{N}},\end{split} \tag{4}\] where \(\psi\) is in units of \(\psi_{0}=\sqrt{\frac{|a|}{b}}\), with \(a\) and \(b\) being the phenomenological parameters of the GL theory. The value \(u\)=1 is taken since we focus only on the vortex motion and not on its nucleation dynamics. The coordinates \(\mathbf{r}=(x,y)\) are in units of \(\xi\).
The scalar potential \(\mu\) is measured in units of \(\mu_{0}=\frac{\hbar}{2e\tau_{GL}}\), where \(\tau_{GL}=\frac{4\pi\sigma\lambda^{2}}{c^{2}}\) denotes the GL relaxation time and \(\lambda\) is the London penetration depth. The parameter \(\sigma\) corresponds to the normal state conductivity of the material. The variable \(t\) is measured in units of \(\tau_{{}_{GL}}\), while the vector potential \(\mathbf{A}\) is in units of \(H_{c_{2}}\xi\), with \(H_{c_{2}}=\frac{\hbar c}{2e\xi^{2}}\) representing the upper critical field. The parameter \(\epsilon(\mathbf{r})\) is associated with the local critical temperature \(T_{c}(\mathbf{r})\) through Eq.(2); it enables spatially modulating the strength of the order parameter. In the second GL equation, the total current \(\mathbf{J}\) has superconducting (\(\mathbf{J}_{\mathrm{S}}\)) and normal (\(\mathbf{J}_{\mathrm{N}}\)) components; it can be expressed in units of \(J_{0}=\frac{c\Phi_{0}}{8\pi^{2}\lambda^{2}\xi}\), with \(\Phi_{0}\) the magnetic flux quantum, as: \[\mathbf{J}=\mathbf{J}_{\mathrm{S}}+\mathbf{J}_{\mathrm{N}}=\mathrm{Im}\left[\psi^{*}(\nabla-\imath\mathbf{A})\psi\right]-(\nabla\mu+\partial_{t}\mathbf{A}) \tag{5}\] As the TDGL equations are invariant under gauge transformations, we use the zero-scalar-potential gauge \(\mu=0\) to eliminate the scalar potential from both equations. For simplicity, we set the order parameter equal to zero, \(\psi=0\), on the boundaries \(y=0,L\). To apply the external transport current \(I_{tr}\), we use the boundary condition for the vector potential on the boundaries \(x=0,W\) as \(\nabla\times\mathbf{A}=\mathbf{H}_{I}\), where \(H_{I}=2\pi I_{tr}/c\) represents the magnetic field induced by the transport current. On the other boundaries, we set \(\nabla\times\mathbf{A}=\mathbf{0}\). Additionally, we impose the superconductor-vacuum boundary condition \(\mathbf{n}\cdot(\nabla-\imath\mathbf{A})\psi=0\) on the boundaries \(x=0,W\), where \(\mathbf{n}\) is the normal vector to the boundaries. The state of the bridge is determined by calculating the voltage \(V\) between the \(y=0\) and \(y=L\) boundaries for each value of the transport current. In the chosen gauge, the electric field is written as \(\mathbf{E}=-\partial_{t}\mathbf{A}\). The corresponding instantaneous voltage drop \(V_{y_{1},y_{2}}\) between two arbitrary points \(y_{1},y_{2}\) in the \(y\)-direction can be calculated as \[V_{y_{1},y_{2}}(x,t)=-\int_{y_{1}}^{y_{2}}E_{y}(x,y,t)\,dy=\int_{y_{1}}^{y_{2}}\partial_{t}A_{y}(x,y,t)\,dy. \tag{6}\] By averaging this voltage over the sample width and over time we get the DC-voltage \(\langle V\rangle\) measured in experiments. To avoid the voltage drops at the \(y=0,L\) boundaries, we calculate the voltage inside the bridge where the order parameter is fully restored (\(\psi=1\)), as indicated by the blue dashed lines in Fig.1. When simulating the Shapiro step experiments, we consider the microwave illumination as an additional time-dependent transport current of amplitude \(I_{{}_{AC}}\) and frequency \(f_{{}_{AC}}\). The total transport current through the bridge is given by Eq.(3). In the model, all non-equilibrium quasiparticle processes are omitted, and for all considered frequencies, the microwave illumination acts on the vortices only as an additional periodic Lorentz force. The system of Eqs.(1), with the above-described boundary conditions, was solved using the widely used link-variable method [37; 38; 39; 11] on a finite-difference grid.
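As a concrete illustration of Eq.(6) and of the averaging procedure, a minimal sketch follows; the array layout, index choices and the rectangle-rule integration are assumptions about how \(A_{y}\) could be stored on the grid, not the authors' implementation.

```python
import numpy as np

def dc_voltage(A_y, dt, dy, j0, j1):
    """DC voltage following Eq.(6): integrate dA_y/dt along y between the two
    cross-sections j0:j1 (the dashed lines of Fig.1, where |psi| = 1), then
    average over the width (x) and over time. A_y is assumed stored as an
    array of snapshots with shape (n_time, n_x, n_y)."""
    dA_dt = np.gradient(A_y, dt, axis=0)            # dA_y/dt on the grid
    v_inst = dy * dA_dt[:, :, j0:j1].sum(axis=2)    # V(t, x), rectangle rule
    return v_inst.mean()                            # <V>: average over x and t
```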
Spatial derivatives were approximated using the central difference method, and for the time integration the forward Euler method was employed [40]. In all calculations of the Shapiro steps, we set \(f_{{}_{AC}}\tau_{{}_{GL}}=0.03\). ###### Acknowledgements. We thank A. Gurevich for fruitful discussions and L. di Medici for sharing his computational power. This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 754387. This work has been supported by the ANR JCJC (HECTOR ANR-21-CE47-0002-01) and by Thales through a Co-fund PhD fellowship, and was granted access to the HPC resources of MesoPSL financed by the Region Ile de France and the project Equip@Meso (reference ANR-10-EQPX-29-01) of the programme Investissements d'Avenir supervised by the Agence Nationale pour la Recherche. ## VI Author contributions S.K., C.F.P. and D.R. proposed the idea; S.K. conducted all the numerical simulations in the Time-Dependent Ginzburg-Landau framework under the guidance of C.F.P. and D.R. All the authors discussed the numerical results and participated in the writing of the manuscript.
2302.03742
Predicting Stellar Mass Accretion: An Optimized Echo State Network Approach in Time Series Modeling
Modeling the dynamics of the formation and evolution of protostellar disks as well as the history of stellar mass accretion typically involve the numerical solution of complex systems of coupled differential equations. The resulting mass accretion history of protostars is known to be highly episodic due to recurrent instabilities and also exhibits short timescale flickering. By leveraging the strong predictive abilities of neural networks, we extract some of the critical temporal dynamics experienced during the mass accretion including periods of instability. Particularly, we utilize a novel form of the Echo-State Neural Network (ESN), which has been shown to efficiently deal with data having inherent nonlinearity. We introduce the use of Optimized-ESN (Opt-ESN) to make model-independent time series forecasting of mass accretion rate in the evolution of protostellar disks. We apply the network to multiple hydrodynamic simulations with different initial conditions and exhibiting a variety of temporal dynamics to demonstrate the predictability of the Opt-ESN model. The model is trained on simulation data of $\sim 1-2$ Myr, and achieves predictions with a low normalized mean square error ($\sim 10^{-5}$ to $10^{-3}$) for forecasts ranging between 100 and 3800 yr. This result shows the promise of the application of machine learning based models to time-domain astronomy.
Gianfranco Bino, Shantanu Basu, Ramit Dey, Sayantan Auddy, Lyle Muller, Eduard I. Vorobyov
2023-02-07T20:46:23Z
http://arxiv.org/abs/2302.03742v2
# Predicting Stellar Mass Accretion: An Optimized Echo State Network Approach in Time Series Modeling ###### Abstract Modeling the dynamics of the formation and evolution of protostellar disks as well as the history of stellar mass accretion typically involves the numerical solution of complex systems of coupled differential equations. The resulting mass accretion history of protostars is known to be highly episodic due to recurrent instabilities and also exhibits short timescale flickering. By leveraging the strong predictive abilities of neural networks, we extract some of the critical temporal dynamics experienced during the mass accretion including periods of instability. Particularly, we utilize a novel form of the echo state neural network (ESN), which has been shown to deal efficiently with data having inherent nonlinearity. We introduce the use of optimized-ESN (Opt-ESN) to make model-independent time series forecasts of the mass accretion rate in the evolution of protostellar disks. We apply the network to multiple hydrodynamic simulations with different initial conditions and exhibiting a variety of temporal dynamics to demonstrate the predictability of the Opt-ESN model. The model is trained on simulation data of \(\sim 1-2\) Myr, and achieves predictions with a low normalized mean square error (\(\sim 10^{-5}\) to \(10^{-3}\)) for forecasts ranging between 100 and 3800 yr. This result shows the promise of the application of machine learning based models to time-domain astronomy. Subject headings: Stellar accretion (1578) -- Neural networks (1933) -- Star formation (1569) ## 1. Introduction We are entering a new era of rapid advance in time-domain astronomy that promises to revolutionize our understanding of transient astrophysical phenomena. These advances will occur through a variety of instruments that span the electromagnetic and gravitational-wave spectrum. It is important to push forward the development of analysis techniques that can in principle be utilized to model all transient phenomena regardless of the signal source or parameters. In this paper, we focus on modeling luminosity variations in the evolution of young stellar objects (YSOs). These are precursors of main-sequence stars that form from the collapsing dense regions of interstellar molecular clouds. Stars in their early stages of evolution accumulate material via mass accretion from the surrounding accretion disk. Mass accretion from the disk to the central object in the early evolutionary stage is likely driven by gravitational torques arising from nonaxisymmetric spiral waves (Vorobyov & Basu, 2007, 2009). Other drivers such as disk winds (Bai, 2013; Suzuki et al., 2016), hydrodynamic and/or magnetohydrodynamic turbulence (Balbus, 2003), etc. also lead to redistribution of angular momentum, enabling accretion of disk material onto the central star. However, continuing mass infall onto the disk from larger distances often leads to sustained or recurring gravitational instability (GI) (Vorobyov & Basu, 2005, 2006) in the disk. The GI further triggers the formation of gas clumps that move inward, resulting in bursts of mass accretion onto the central object. Such episodic accretion onto YSOs is frequently observed and is well known in the form of FU Orionis objects (FUors) and EX Lupi objects (EXors). Recently, episodic accretion bursts have also been detected in young massive protostars (Caratti et al., 2017).
The FUors show a rapid rise of luminosity from a few \(L_{\odot}\) to \(100-300\)\(L_{\odot}\)(Audard et al., 2014). This typically corresponds to an increase in mass accretion rate from \(10^{-7}\)\(M_{\odot}\) yr\({}^{-1}\) to a few times \(10^{-4}\)\(M_{\odot}\) yr\({}^{-1}\). The subsequent decline of luminosity after the initial burst occurs over a timescale of many decades. Due to the long timescale of decline, no FUor has ever been observed to have more than one burst. In contrast, the EXors exhibit smaller luminous amplitudes (up to a few tens \(L_{\odot}\)) in repetitive outbursts with durations of several months. It is still uncertain whether these two phenomena are related and if FUors correspond to an early stage of evolution with EXors representing smaller amplitude bursts at a later stage (e.g., Contreras Pena et al., 2019). Furthermore, the mass accretion rates of YSOs are highly episodic due to recurrent instabilities. They exhibit short timescale flickering due to inherent nonlinearity and inhomogeneity in the disk structure (Elbakyan et al., 2016). This makes forecasting burst events particularly challenging. Developing analysis techniques using present-day simulation data is key to advancing the study of such observations, even if the underlying dynamics are not well known. The last decade has seen phenomenal growth in adaptations of various machine learning (ML) techniques in analyzing astronomical data (Auddy et al., 2021; Auddy et al., 2022) and making predictions in time-domain astronomy (Bloom and Richards, 2012; Rocha-Solache et al., 2022). Neural network (NN) based models are particularly powerful as they are not tied to a specific set of physical equations and assumptions. They can be trained on data (both from simulation and observation) to capture the nonlinear physics of the system and to make predictions (Auddy and Lin, 2020). The objective of this paper is to demonstrate that NN-based models can be used to forecast the evolution of transient phenomena in real time. We introduce the use of an echo state neural network (ESN) (Lukosevicius, 2012; Kim and King, 2020) to make robust predictions of stellar mass accretion of evolving YSOs. The model is trained on time-series data obtained from hydrodynamical simulations (see for example Vorobyov and Basu, 2010) which capture the evolution of such complex nonlinear star-disk systems. A series of simulations (Vorobyov and Basu, 2005, 2006, 2010, 2015; Meyer et al., 2017) have demonstrated the prevalence of such episodic accretion driven by mass infall onto a nascent protostellar disk. In order to deal with the nonlinearity, we use a novel approach of dividing the (simulation) data into a slowly-varying ("deterministic") component and a more rapidly-varying ("fluctuating" or "chaotic") component. We train the ESN-based model on each component of the data to make the subsequent prediction of the burst events. This ESN-based framework lays the foundation for analyzing such transient phenomena from upcoming surveys, like wide-field optical wavelength mapping with frequent time sampling by the Zwicky Transient Facility (ZTF) and Vera C. Rubin Observatory (VRO). This paper is organized as follows. In Section 2 we discuss the hydrodynamic simulations that capture the mass accretion in disk evolution. Section 3 gives an overview of the ESN architecture. In Section 4 we introduce the Opt-ESN model and outline the data preparation procedure. Results are presented in Section 5.
A further discussion is in Section 6 and conclusions are in Section 7. A reader who is focused on the astrophysical consequences of this work can choose to read Section 2 and then move ahead to Section 5. ## 2 Hydrodynamic Simulations Numerical simulations of disk evolution can be done using a set of hydrodynamic equations that are vertically integrated along the direction of the rotation axis and follow the nonaxisymmetric evolution of physical variables in polar \((r,\phi)\) coordinates. This is viable in the expected scenario where the disk vertical scale height is significantly less than its radial extent. A series of papers have employed the thin-disk approximation to model the long-term evolution of protostellar disks over several Myr timescales (e.g., Vorobyov and Basu, 2006, 2010, 2015; Vorobyov et al., 2017, 2020). It is still challenging to model the full temporal range of disk evolution using three-dimensional simulations, and state-of-the-art models that resolve the central protostar can advance as far as \(\sim 10^{3}\) yr past protostar formation (see Machida and Basu, 2019). We train the ESN on the long-term (\(\sim 10^{6}\) yr) disk simulations presented by Vorobyov et al. (2017). The simulations calculate the self-consistent disk formation and evolution. This is done by starting from the hydrodynamic collapse of a prestellar cloud core and continuing into the protostellar phase with a central protostar and surrounding disk. The basic equations and finite-difference numerical methods are described in Vorobyov and Basu (2010) and Vorobyov et al. (2017). A numerical solution is found to the partial differential equations describing the time and space evolution of the mass surface density, the planar momentum components, and the internal energy per unit area. Additional equations are employed to calculate self-gravity, viscosity, and heating and cooling rates due to multiple processes. A central sink cell of radius 5 au is adopted at the coordinate origin in order to avoid very small time steps imposed by the Courant-Friedrichs-Lewy condition, so that the long-term evolution of the remaining region (radius \(\sim 10^{4}\) au) can be calculated. The solution of the disk evolution after protostar formation consists of a highly episodic accretion process. While some features of the episodes can be understood in a deterministic manner using the criterion for gravitational instability (Das and Basu, 2022), the nonaxisymmetry and nonlinearity of the problem lead to a time evolution of the accretion rate that has stochastic and chaotic features. Each simulation is quite costly in terms of run time (up to several months on a single computer node with 48 physical cores), and a typical parameter survey consists of \(\sim 10\) models. What we explore here is the possibility of taking a set of simulation models with different initial conditions as input and training a neural network on a portion of the time evolution in order to extract some intrinsic and underlying dynamics of the system. We can then see how far in time the neural network can forecast the solution into a regime where it was not trained. ### Hydrodynamic Simulation Outputs We utilize the simulation outputs in 6 of the 35 models presented in Vorobyov et al. (2017). These six models differ in their initial conditions, which are described here.
The initial axisymmetric radial profiles of the gas surface density \(\Sigma\) and angular velocity \(\Omega\) for the collapsing prestellar core are \[\Sigma =\frac{r_{0}\Sigma_{0}}{\sqrt{r^{2}+r_{0}^{2}}}\,, \tag{1}\] \[\Omega =2\,\Omega_{0}\left(\frac{r_{0}}{r}\right)^{2}\left[\sqrt{1+\left(\frac{r}{r_{0}}\right)^{2}}-1\right]\,, \tag{2}\] where \(\Sigma_{0}\) and \(\Omega_{0}\) are the surface mass density and angular velocity at the center of the core, respectively. These are power-law profiles with asymptotic dependence \(\propto r^{-1}\) and have a central plateau radius \(r_{0}\) that is the length scale over which thermal pressure can smoothen the density profile (for details, see Vorobyov et al., 2017). To generate a gravitationally unstable core, each model is characterized by the ratio \(r_{\rm out}/r_{0}=6\), where \(r_{\rm out}\) is the core's outer radius. The cloud core mass \(M_{\rm cl}\) is found using the initial radial profile for the gas surface density \(\Sigma\). The quantity \(\Omega_{0}\) is selected such that the models have an initial ratio of rotational to gravitational energy \(\beta_{0}\) in the range of \(\approx 10^{-4}\) to 0.07. We summarize the initial model conditions in Table 1. The simulation outputs of the mass accretion rate onto the central sink \(\dot{M}(t)\) are shown in Figure 1 for each model. Comparison with the Table 1 parameter values shows that there is a general increase of the variability amplitude as \(M_{\rm core}\) and/or \(\beta_{0}\) increase. Increasing mass or angular momentum leads to more massive protostellar disks and greater activity of GI-induced bursts. ## 3. Echo State Neural Networks Neural networks (NN) have demonstrated the capability to approximate continuous functions and are often referred to as universal approximators (Schafer & Zimmermann, 2006). In the context of time series analysis, this gives them the ability to estimate the underlying dynamical processes governing the system. This is done by using the NN as a mapping function between the inputs and targeted outputs, allowing them to extract complex temporal relationships within the time-series data. The NN architecture is based on a collection of interconnected nodes, or "neurons", as shown schematically in Figure 2. The nodes are often arranged in layers from input to output, as shown in Figure 3. These architectures can be further extended to include recurrent units that maintain a network's hidden state during model training. These hidden states allow the network to recognize temporal sequences in data, which can make a recurrent neural network (RNN) (Salehinejad et al., 2017) particularly useful in the context of time series analysis. An ESN can be considered to be a sparsely connected RNN where the hidden layers along with the weights act as a "reservoir". This reservoir functions as a nonlinear temporal kernel, embedding the dynamics of the input data onto a higher dimensional computation space. For an ESN architecture, only the reservoir-to-output weights are trainable, while the input-to-hidden and hidden-to-hidden weights are chosen randomly and kept fixed during the training process. The sparsity of the ESN architecture and the fact that the hidden-layer weights are not updated during the training process automatically address the problem of vanishing gradients, as typically seen for more conventional RNN-based models (Jaeger, 2007; Lukosevicius, 2012).
Since only the output weights are trainable, training reduces to a simple linear regression task; in contrast to the slow convergence of tuning the parameters of other networks, ESNs are therefore much faster to train than other RNNs. For chaotic time-series prediction, ESNs have shown exceptionally good performance, as these networks can capture the nonlinear dynamics of the system efficiently. In order to effectively model the chaotic dynamics that govern the hydrodynamic simulations, we implement the use of an ESN-based model. An ESN architecture having an input, a reservoir and an output layer with the corresponding weights is shown in Figure 4. For an input time series \(\mathbf{x}(t)\), we begin by defining the following input and target time sequences: \[\mathbf{x}_{1}(t) =[x_{1}(t),x_{2}(t),\cdots,x_{n-1}(t)]\quad\text{ Input Sequence}\] \[\mathbf{x}_{2}(t) =[x_{2}(t),x_{3}(t),\cdots,x_{n}(t)]\quad\quad\text{ Target Sequence}\] In order to utilize the neural network \(\mathcal{N}\) as a predictive time series model, we form the mapping \(\mathcal{N}:\mathbf{x}_{1}(t)\mapsto\mathbf{x}_{2}(t)\) to extract the relationship between the quantities \(x_{i}(t)\) and \(x_{i+1}(t)\). To do so, we train the ESN using the following steps: * Randomly generate the input weight matrix \(\mathbf{W}_{\rm input}\) and the reservoir weight matrix \(\mathbf{W}_{r}\). * For each quantity \(x_{i}(t)\) in \(\mathbf{x}_{1}(t)\), construct an \(N_{r}\times 1\) reservoir state vector \(\mathbf{v}_{i}\), initialized to \(\mathbf{v}_{1}=\mathbf{0}\). Let \[\mathbf{v}_{i+1}=(1-\alpha)\mathbf{v}_{i}+\alpha f_{\rm act}(\mathcal{W}_{i})\,,\quad\text{with}\quad\mathcal{W}_{i}=\mathbf{W}_{\rm input}x_{i}(t)+\mathbf{W}_{r}\mathbf{v}_{i}+\mathbf{W}_{b}\,, \tag{3}\] where \(N_{r}\) is the reservoir size and \(0<\alpha<1\) is the leaking rate. Equation (3) includes a randomly generated bias term \(\mathbf{W}_{b}\) and adopts the activation function \(f_{\rm act}(x)=\tanh(x)\). * Define a washout quantity \(\omega<n\) as an initially discarded transient and, for every \(i>\omega\), construct the internal state \[\mathbf{X}=\begin{bmatrix}1&1&\cdots&1\\ x_{i}(t)&x_{i+1}(t)&\cdots&x_{n-1}(t)\\ \mathbf{v}_{i}&\mathbf{v}_{i+1}&\cdots&\mathbf{v}_{n-1}\end{bmatrix}. \tag{4}\] * Finally, compute the output matrix using the Moore-Penrose inverse on the set \(\overline{\mathbf{x}}_{2}(t)=[x_{\omega+2}(t),x_{\omega+3}(t),\cdots,x_{n}(t)]\), yielding \[\mathbf{W}_{\rm output}=\overline{\mathbf{x}}_{2}^{*}\left(\mathbf{X}^{*}\mathbf{X}\right)^{-1}\mathbf{X}^{*}. \tag{5}\] Note that if the matrix \((\mathbf{X}^{*}\mathbf{X})\) is near-singular, it is recommended to regularize the regression by adding a constant \(\lambda\) along the diagonal (known as Tikhonov regularization). Once \(\mathbf{W}_{\rm output}\) has been calculated, the output can be computed as \[\mathbf{y}(t)=\mathbf{W}_{\rm output}\mathbf{X}\,. \tag{6}\] This effectively represents an estimate of the mapping from one point in time to the next. Therefore, in order to predict the \((i+1)\)th time step (i.e., estimate the quantity \(x_{i+1}\)), we construct the internal state \(\mathbf{v}\) with \(x_{i}\) and use \(\mathbf{W}_{\rm output}\) to compute the output using Equation (6). The hyperparameters used for defining the reservoir and characterizing the network are described below: * The reservoir size \(N_{r}\): determines the number of units in the reservoir (or, in turn, the size of the reservoir).
* Spectral radius \(\rho\): a global parameter that determines the maximal eigenvalue of the \(\mathbf{W}_{r}\) matrix. In other words, it scales the reservoir connection matrix and controls the width of the distribution of the nonzero elements present in \(\mathbf{W}_{r}\). In most cases, \(\rho(\mathbf{W})<1\) maintains the echo state property. * Input scaling \(\varrho\): determines the scaling of the input weight matrix. It also controls the amount of nonlinearity in the dynamics of the reservoir. * Connectivity \(c_{r}\): controls the degree of sparsity in the reservoir weight matrix. * Leaking rate \(\alpha\): controls the speed of the reservoir dynamics in reaction to the input. Figure 1.— Mass accretion rate evolution from hydrodynamic simulations for each of the six models labeled 26 through 32 (with the exception of model 28) in Vorobyov et al. (2017). The mass accretion rate is shown in units of \(M_{\odot}\) yr\({}^{-1}\). Figure 2.— Schematic diagram of a single layer perceptron. The input data and bias are given as \(\mathbf{x}=[b,x_{1},x_{2},\ldots,x_{n}]\) while the weights are given as \(\mathbf{W}=[w_{b},w_{1},w_{2},\ldots,w_{n}]\). \(f_{\mathrm{act}}\) is the activation function through which \(\sum_{i}W_{i}x_{i}\) is passed to return an output \(y=f_{\mathrm{act}}\left(\sum_{i}W_{i}x_{i}\right)\). Figure 3.— Schematic diagram of a multilayer perceptron neural network with several hidden layers. The neurons in the hidden layers take a linear combination of the inputs from the previous layer and pass it through an activation function to generate an output, which is further passed to the set of neurons in the next layer. Figure 4.— General architecture of an echo state neural network, shown with sparsely connected random reservoir units. Here, \(\mathbf{W}_{\mathrm{input}}\) and \(\mathbf{W}_{r}\) are randomly generated sparse matrices. ## 4. Optimized Echo State Neural Networks ### Network Architecture Liu et al. (2018) introduced a parallel series approach where one stacks a series of reservoirs by generating \(L\) independent input and reservoir matrices. The time series is trained and validated through each reservoir to form \(L\) output matrices. Finally, the model's output \(\hat{\mathbf{y}}(t)\) is taken to be the mean of all \(L\) realizations so that \[\hat{\mathbf{y}}(t)=\frac{1}{L}\sum_{j}\mathbf{y}^{(j)}(t)\,. \tag{7}\] The optimized-ESN (Opt-ESN) extends Equation (7) to a weighted sum rather than the standard mean. That is, the output realizations from each reservoir are weighted by a set of optimal coefficients. As such, the final output of the Opt-ESN is given by \[\hat{\mathbf{y}}(t;\hat{\boldsymbol{\beta}})=\sum_{j}\hat{\beta}_{j}\mathbf{y}^{(j)}(t)\,, \tag{8}\] where the coefficients \(\hat{\boldsymbol{\beta}}\) are found by minimizing the squared residuals over the input's validation segment \(\mathbf{x}_{\text{val}}(t)\). This is done by solving the linear optimization problem to find the minimum value of the loss function \[\mathcal{L}=\left\|\mathbf{x}_{\text{val}}(t)-\hat{\mathbf{y}}(t;\hat{\boldsymbol{\beta}})\right\|^{2}\,. \tag{9}\] Here, the validation segment is defined as the in-time portion of the data set used to validate the model's output (see Section 4.3.1 for more details).
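The following sketch condenses Eqs.(3)-(9) into runnable form: a single reservoir is trained with a ridge-regularized (Tikhonov) read-out, and the validation-segment coefficients \(\hat{\beta}\) for a stack of reservoirs are obtained by least squares. The reservoir size, sparsity, scalings and indexing conventions are illustrative choices, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_esn(x, n_r=250, alpha=0.3, rho=0.9, scale=0.5, washout=100, lam=1e-6):
    """One reservoir (Eqs. 3-5): drive the leaky-tanh state with the series x
    and fit a ridge-regularized linear read-out for one-step-ahead prediction."""
    w_in = scale * rng.uniform(-1, 1, n_r)                      # input weights
    w_r = rng.uniform(-0.5, 0.5, (n_r, n_r)) * (rng.random((n_r, n_r)) < 0.05)
    w_r *= rho / np.max(np.abs(np.linalg.eigvals(w_r)))         # set spectral radius
    w_b = rng.uniform(-0.1, 0.1, n_r)                           # bias term
    v, states = np.zeros(n_r), []
    for u in x[:-1]:
        v = (1 - alpha) * v + alpha * np.tanh(w_in * u + w_r @ v + w_b)
        states.append(v.copy())
    # internal state (Eq. 4): bias, input and reservoir rows; washout discarded
    X = np.column_stack([np.concatenate(([1.0, u], s))
                         for u, s in zip(x[washout:-1], states[washout:])])
    y = x[washout + 1:]                                         # target sequence
    w_out = y @ X.T @ np.linalg.inv(X @ X.T + lam * np.eye(X.shape[0]))
    return w_in, w_r, w_b, w_out

def opt_esn_weights(preds, x_val):
    """Eq. (9): least-squares coefficients beta combining the L reservoir
    outputs over the validation segment; preds has shape (L, T_val)."""
    beta, *_ = np.linalg.lstsq(preds.T, x_val, rcond=None)
    return beta
```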
### Data Preparation As the input to the Opt-ESN, we used a portion of the simulated data for each model, corresponding to times of vigorous episodic accretion. We divide the data into segments of various lengths having different time steps and use these for training, validation and testing. Furthermore, as an aid to assessing the quality of our forecasts, we characterize the time scale in terms of the Lyapunov exponent. We define the dimensionless time length \(\Lambda\cdot N_{t}\) as the Lyapunov time, where \(N_{t}\) is the observation number. Here, the quantity \(\Lambda\) is taken to be the maximum Lyapunov exponent, which characterizes the rate of separation between close trajectories in phase space and effectively quantifies the degree of chaos present. That is, if two trajectories are initially separated by some infinitesimal amount \(\Delta_{0}\), then the rate of divergence as a function of time \(t\) is approximately \[|\Delta(t)|\approx|\Delta_{0}|\cdot\exp(\Lambda t). \tag{10}\] It becomes clear that for \(\Lambda>0\), the separation \(|\Delta(t)|\) grows exponentially with time (Vulpiani et al., 2009). We estimate \(\Lambda\) using the algorithm in Eckmann et al. (1986); the outputs are given in the fourth column of Table 2. Our calculations demonstrate that each value of \(\Lambda\) is greater than zero, indicating that each of the simulation models can be considered to be a chaotic system. We preprocess the simulation data \(\dot{M}(t)\) by assuming that it is separable in the form \[\dot{M}(t)=\dot{M}_{d}(t)+\dot{\mathcal{M}}(t)\,, \tag{11}\] where \(\dot{M}_{d}(t)\) and \(\dot{\mathcal{M}}(t)\) represent the data's deterministic and fluctuating components, respectively. In order to extract the fluctuating component, we pass \(\dot{M}(t)\) through a high-pass filter1. The deterministic component can then be extracted by subtracting \(\dot{\mathcal{M}}(t)\) from \(\dot{M}(t)\). Furthermore, we normalize each component with respect to the standard deviation \(\sigma\) of \(\dot{M}(t)\). In order to make predictions, we feed \(\dot{M}_{d}(t)\) and \(\dot{\mathcal{M}}(t)\) into the Opt-ESN separately, run the forecasts, and sum the outputs to get the final prediction of \(\dot{M}(t)\). We found that decomposing and processing the simulation data in this fashion gives better-performing output than inputting \(\dot{M}(t)\) directly. The proposed network architecture is given in Figure 6. The network has two important layers contributing to the output. At the stacked reservoir layer, the output weight matrices \(\mathbf{W}_{\text{output}}\) are determined using regression techniques over the training segment. The weight matrices are then used to output a series of unique paths over the validation segment, where the coefficients \(\hat{\beta}_{j}\) are found by solving a linear optimization problem. Footnote 1: We utilized Matlab’s highpass function with passband frequency fpass = 2000 and sampling rate f = 50000. The units are the inverse observation number. ### Hyperparameter Selection Selecting the optimal set of hyperparameters can be difficult given the size of the hyperparameter space. As such, for each component of the individual simulation data set (fluctuating and deterministic), we perform a hybrid, discrete-stochastic search over the entire hyperparameter space. That is, we prespecify the search space for each hyperparameter, construct a nested loop searching over all possible combinations, build a model and compute the mean square error (MSE) on the validation segment.
In addition to this, within each iteration of the loop, we generate a random number for each hyperparameter that is within the search space, construct and validate a second model, and compute the respective MSE. These two models are then compared and the one with the lowest MSE is kept. This process continues until the entire hyperparameter space has been searched and the model with the lowest MSE on the validation segment is selected.

Figure 5.— Input to output sequence for both the validation (left) and testing (right) sets. The validation set is considered "in-time" because each output value \(y_{i}\) is calculated using an input \(x_{i}\) taken directly from the given data set. The testing set, however, is considered "out-of-time" because the model output is recursively served as an input to the following time step. That is, given the data set's terminal point \(x_{n}\), the prediction of \(x(t)\) at the following point in time is given by the network output \(y_{n}\). As we step beyond the horizon of the given data set, \(y_{n}\) will be used as the input to calculate the prediction \(y_{n+1}\) and so on.

Figure 6.— Schematic figure representing the proposed architecture of the Opt-ESN. The input is separated into fluctuating and deterministic components and fed through an Opt-ESN. Each of the individual forecasts is summed to produce the estimated forecast of \(\dot{M}(t)\). Here, the neural network is shown to contain \(L\) reservoirs stacked into a single layer used in formulating the linear programming problem. Each reservoir is taken to be sparse and randomly connected.

#### 4.3.1 Training and Data Segmentation

In order to train the Opt-ESN model, the input data must be split into three segments:

* Training Segment (\(t_{0}<t\leq t_{\text{train}}\)) Used to train the model and compute the output matrix \(\mathbf{W}_{\text{output}}\).
* Validation Segment (\(t_{\text{train}}<t\leq t_{\text{val}}\)) Used as an in-time assessment of how well \(\mathbf{W}_{\text{output}}\) maps a single input to an output.
* Testing Segment (\(t_{\text{val}}<t\leq T\)) Used as a blind test for the model to assess how well the out-of-time predictions perform.

The difference between the validation and testing segments is that the model is dynamic within the validation segment. That is, each output is calculated using an input that comes directly from the validation segment of the data. In the testing segment, however, the model uses its own output as an input for the next time step. This is demonstrated in Figure 5, where the validation use is given in the left diagram and the testing use is given in the right diagram.

## 5 Results

### Opt-ESN Outputs

In order to train the model, the simulation data for each model are decomposed into deterministic and fluctuating components and standardized as discussed in Section 4.2. Forecasting either component of the simulation data requires the time series to be stationary. Nonstationary dynamics risk consequences such as spurious correlations and heavily biased mean and variance estimates (see Appendix for more details). We assess the stationarity of each component using the augmented Dickey-Fuller (ADF) test at 5% significance (Patterson, 2011). Ultimately, we find that in each model, \(\dot{\mathcal{M}}(t)\) follows a stationary process while \(\dot{M}_{d}(t)\) does not. As such, we applied first-order differencing on the deterministic component to enforce stationarity.
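As a concrete illustration of this preprocessing pipeline, the sketch below decomposes the signal per Equation (11) and applies the ADF check with the lag rule \(p=12\times(N_{t}/100)^{1/4}\) quoted in the Appendix. It is a minimal sketch: the paper used Matlab's highpass function, so the Butterworth filter, its order, and the file name below are our assumptions rather than the original implementation.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from statsmodels.tsa.stattools import adfuller

def decompose(mdot, fpass=2000.0, fs=50000.0, order=4):
    """Split mdot(t) into deterministic + fluctuating parts (Eq. 11),
    each normalized by the standard deviation of mdot. A rough Python
    analogue of Matlab's highpass(); the filter order is assumed."""
    b, a = butter(order, fpass / (fs / 2.0), btype="highpass")
    fluct = filtfilt(b, a, mdot)   # fluctuating component
    determ = mdot - fluct          # deterministic component
    sigma = np.std(mdot)
    return determ / sigma, fluct / sigma

def is_stationary(x, alpha=0.05):
    """ADF test; True means the unit-root null is rejected at level alpha."""
    x = np.asarray(x)
    maxlag = int(12 * (len(x) / 100.0) ** 0.25)
    return adfuller(x, maxlag=maxlag)[1] < alpha

determ, fluct = decompose(np.loadtxt("mdot_model27.txt"))  # hypothetical file
if not is_stationary(determ):
    determ = np.diff(determ)       # first-order differencing
```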
Network hyperparameters are selected according to the methodology introduced in Section 4.3. We utilize a stack of \(L=10\) reservoirs of size \(N_{r}=250\) neurons each and solve the optimization problem of Equation (9) to generate the final output. In Figure 7, the Opt-ESN forecast is given for all six simulation models. The outputs are found by summing the individual forecasts of \(\dot{M}_{d}(t)\) and \(\dot{\mathcal{M}}(t)\) (the individual forecasts are shown in the Appendix). We assess the Opt-ESN performance using the dimensionless normalized mean square error

\[\text{NMSE}=\frac{1}{n}\sum_{i=1}^{n}\frac{(y_{i}-\hat{y}_{i})^{2}}{\max(\hat{y})-\min(\hat{y})}\,, \tag{12}\]

where \(y_{i}\) and \(\hat{y}_{i}\) are the observed and predicted values, respectively, for \(n\) data points. Here, we take \(y=\dot{M}/\sigma\), which is a dimensionless quantity. Lower NMSE values indicate stronger performance, and \(\text{NMSE}=0\) equates to perfect accuracy. We consider any model having NMSE \(<10^{-2}\) to indicate good performance. The performance assessments are summarized in Table 2 in the Appendix. Additionally, in the upper half of Figure 8, the rolling NMSE on each model component is given as a function of Lyapunov time. The Lyapunov time is the characteristic timescale on which a system is chaotic and typically limits the predictability of said system. Here, the Lyapunov time is calculated using the maximum Lyapunov exponent, \(\Lambda=\Lambda_{\text{max}}\). If the model is well specified, the residuals between the observed and predicted values are expected to be approximately normally distributed and thus largely attributed to white noise. Over the validation set, our goodness-of-fit assessment on the residuals utilizes the autocorrelation function. In the lower portion of Figure 8, the autocorrelations are given for lagged residuals adopting 99% and 95% confidence intervals represented by the solid and dashed boundary lines, respectively. Given that at least 90% of lagged residuals lie within the confidence interval, we assume the process to be approximately white noise. Furthermore, the residuals over the training phase are used to assess the goodness-of-fit of the Opt-ESN. We achieve this using the one-sample \(t\)-test for the null hypothesis that the residuals are sampled from a normal distribution with zero mean. The test statistic is

\[t^{*}=\frac{\bar{x}-\mu}{\sigma_{s}/\sqrt{n}}\,,\]

where \(\bar{x}\) is the sample mean, \(\mu\) is the hypothesized mean, \(\sigma_{s}\) is the sample standard deviation and \(n\) is the sample size. The statistic follows a \(t\)-distribution with \(n-1\) degrees of freedom. Our results show that for each model, we cannot reject the null hypothesis at 5% significance, indicating that the residuals are approximately normal with zero mean. Figure 9 shows the residual distributions for each model.

#### 5.1.1 Episodic Bursts and Stability Assessment

The simulation data across each model have demonstrated strong episodic behavior, where the dynamics exhibit multiple high-magnitude bursts that are driven largely by mass infall. The Opt-ESN has shown strong forecasting capabilities on segments of the data where episodic bursts have not occurred. An interesting hypothesis to test is whether the neural network can adequately resolve the occurrence of a burst over a short time interval. To do so, we follow the same protocol for data preparation and hyperparameter selection.
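A minimal sketch of that selection protocol is given below, pairing the NMSE of Equation (12) with the hybrid discrete-stochastic search of Section 4.3. The grids and the `validate` callback are placeholders, not the ranges or the ESN implementation used in the paper.

```python
import itertools
import random
import numpy as np

def nmse(y, y_hat):
    """Eq. (12): mean squared error normalized by the predicted range."""
    y, y_hat = np.asarray(y), np.asarray(y_hat)
    return np.mean((y - y_hat) ** 2) / (y_hat.max() - y_hat.min())

# Hypothetical discrete grids over (input scaling, connectivity,
# leaking rate, spectral radius); the true search ranges are not listed.
GRID = {"rho_in": [0.1, 0.5, 1.0], "c_r": [0.01, 0.1],
        "alpha": [0.2, 0.5, 0.9], "rho": [0.2, 0.5, 0.99]}

def hybrid_search(validate):
    """validate(params) is assumed to build an Opt-ESN with the given
    hyperparameters and return its error on the validation segment."""
    best, best_err, keys = None, np.inf, list(GRID)
    for combo in itertools.product(*(GRID[k] for k in keys)):
        discrete = dict(zip(keys, combo))
        # stochastic companion: one uniform draw inside each search range
        rand = {k: random.uniform(min(GRID[k]), max(GRID[k])) for k in keys}
        for cand in (discrete, rand):
            err = validate(cand)
            if err < best_err:            # keep the better of the two models
                best, best_err = cand, err
    return best, best_err
```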
We isolate the occurrence of the first large burst in Model 27 and produce forecasts over 50 time steps (\(\approx 0.31\) kyr). To reflect some of the challenges in modeling observational data, we train the Opt-ESN under various conditions:

1. Training with 10000 data points.
2. Training with 5000, 2500 and 1000 data points.
3. Training with 10000 data points with 1% and 5% noise added.

These conditions additionally serve to assess the stability of the model's predictive power. That is, under nonideal training conditions, we aim to demonstrate that the network can still forecast meaningful outputs. The conditions vary in severity, covering scenarios with added noise and limited data availability. Training condition (1) represents an ideal scenario, with no noise and the entire data set, and yields an NMSE of \(8.29\times 10^{-3}\). Condition (2) varies the training length, and condition (3) includes a moderate and a high degree of noise in the data. The Opt-ESN performance and summary for each condition are given in Table 3 and demonstrated in Figure 10. It is evident that the Opt-ESN demonstrates an ability to resolve the presence of an episodic burst with noisy training data. With increased noise the magnitude suffers, and the NMSE increases relative to condition (1) by \(\approx 37\%\) and \(88\%\) when we included 1% and 5% noise, respectively. Condition (2) gives insight into the importance of data availability with respect to model performance. In this scenario, compared to condition (1), the NMSE increases by \(\approx 10\%\) and \(40\%\) when the training length is reduced to 5000 and 2500 data points, respectively.

\begin{table} \begin{tabular}{l l l l l} \hline \hline **Model** & \(M_{\text{core}}\) & \(\beta_{0}\) & \(r_{0}\) & \(M_{*,\mathrm{fin}}\) \\ \hline Model 26 & \(1.245M_{\odot}\) & \(1.27\%\) & \(2777\) au & \(0.753M_{\odot}\) \\ Model 27 & \(1.076M_{\odot}\) & \(0.56\%\) & \(2400\) au & \(0.801M_{\odot}\) \\ Model 29 & \(0.999M_{\odot}\) & \(0.28\%\) & \(2229\) au & \(0.818M_{\odot}\) \\ Model 30 & \(1.537M_{\odot}\) & \(1.27\%\) & \(3429\) au & \(0.887M_{\odot}\) \\ Model 31 & \(1.306M_{\odot}\) & \(0.28\%\) & \(2915\) au & \(1.031M_{\odot}\) \\ Model 32 & \(1.383M_{\odot}\) & \(0.56\%\) & \(3086\) au & \(1.070M_{\odot}\) \\ \hline \hline \end{tabular} \end{table} Table 1. Initial conditions for each of the six simulation models.

Figure 7.— The Opt-ESN predictions of the simulation data. The solid black line shows a fraction of the simulation data used for training/validation, the dotted black is the out-of-time testing segment of the data and the red line is the network's prediction.

Figure 8.— Performance assessments on the predictions made for the simulation data. The upper portion of the figure demonstrates the rolling NMSE as a function of Lyapunov time for \(\dot{M}(t)\), \(\dot{\mathcal{M}}(t)\) and \(\dot{M}_{d}(t)\). The lower portion of the figure gives the residual autocorrelation function on the validation segment.

## 6. Discussion

As we enter a new era in time domain astronomy, leveraging robust predictive models to make meaningful data inference is increasingly valuable. Neural networks have become an attractive class of algorithms for modeling nonlinear data due to the universal approximation theorem (Cybenko, 1989; Schafer & Zimmermann, 2006).
In particular, the reservoir computing framework that governs the ESN is ideal for modeling time series that possess chaotic (nonlinear) temporal structure. We used the Opt-ESN model to forecast the protostellar mass accretion rate by training it on simulated hydrodynamical time series data. To avoid nonstationary dynamics, first-order differencing was used on the deterministic components of the data. Our methodology has generated robust and accurate outputs over several sets of chaotic data (i.e., having a high degree of nonlinear temporal dynamics).

### Short-term Burst Predictions and Data Availability

In Section 5.1.1, we utilized the Opt-ESN to predict an episodic burst under various conditions. Our findings demonstrate that the main driver of model degradation is a lack of data. Even with a relatively high degree of noise, the network was able to resolve the presence of a burst within a 50 time step forecast. The Opt-ESN forecasts aim to provide an introductory framework for how ML-based models can be used to make inference on burst occurrences and forecast mass accretion. This can possibly help in establishing any potential relation between FUor and EXor phenomena. In practice, however, having thousands of years of observational data on a single object is not feasible. We see in Figure 11 that the model maintains relatively strong predictive performance in scenarios with training lengths as low as 2500 data points (0.3092 kyr). However, below this number, we see a significant decrease in model performance. Therefore, addressing data availability becomes a critical component in developing a robust neural network framework. In these scenarios, we propose two approaches:

1. Simulating synthetic data. As demonstrated in our approach, we can leverage simulation data to train neural networks in conjunction with observational data. Here, the simulated data would be integrated with the observational data as part of the training/validating segments of the model. The simulated data, however, should reflect the physical properties of the observed object and should include several occurrences of episodic bursts to assure proper coverage of the possible dynamics.
2. Aggregating data from similar objects. An alternative to generating synthetic data can be to aggregate data across several observed objects. That is, the neural network is trained across multiple similar objects. An advantage of this approach is that no portion of the training data is synthetic and, as such, the forecasts will reflect the true dynamics of the observations. This approach can alleviate some of the data availability issues in practice; however, it is crucial that each of the observed objects reflects the physical properties of the entire system being modeled.

Figure 9.— Distribution of residuals for each simulation model after network training. Goodness-of-fit assessed using the one-sample \(t\)-test for the null hypothesis that the residuals are sampled from a normal distribution with mean equal to zero.

### Effective Forecast Horizon

Long forecast horizons in any statistical model can be challenging. This is due to the recursive dependence that is common in time series. That is, models typically recycle outputs as future inputs in any out-of-sample forecasts. As such, the error in any given point estimate will propagate to future estimates, which limits the predictability of a given model.
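This recursive dependence is exactly the closed-loop (testing) mode of Section 4.3.1, in contrast to the open-loop (validation) mode. A minimal sketch of the two modes, where `step` stands in for one trained ESN update-and-readout pass and is an assumption of ours:

```python
def open_loop(step, x_val):
    """Validation ("in-time"): every input comes from the data itself."""
    return [step(x) for x in x_val]

def closed_loop(step, x_last, horizon):
    """Testing ("out-of-time"): each output is recycled as the next
    input, so point-estimate errors propagate through the forecast."""
    preds, x = [], x_last
    for _ in range(horizon):
        x = step(x)
        preds.append(x)
    return preds
```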
In the context of chaotic time series, the Lyapunov time becomes a natural scale on which to characterize an effective forecast horizon, that is, the time length over which one can achieve sufficiently reliable forecasts. Typically, data sets that exhibit more chaos in their temporal dynamics will have a larger maximum Lyapunov exponent, which effectively limits the real time range of predictive power. Pathak et al. (2018) utilized the reservoir computing framework in predicting spatio-temporally chaotic systems for the Kuramoto-Sivashinsky equation. Their scheme achieves low prediction error for roughly 8 Lyapunov times. As such, for comparative purposes we used \(\Lambda t_{\max}=8\) as a benchmark. Our Opt-ESN forecasts achieved low NMSE beyond the 8 Lyapunov time benchmark in each of the six simulation models (summarized in Table 2). Beyond this point, it becomes more likely that the forecasts become unreliable. In future work, however, we may look into quantifying an asymptotic upper limit on the model's effective forecast horizon in terms of the Lyapunov time.

Figure 10.— Episodic burst forecasts under noisy training conditions. Conditions range in severity from no noise (left panel) to 5% noise addition (right panel). Here, the degree of noise is taken as a fraction (1 or 5 percent) of the maximum value of the mass accretion rate. The performance results are given from left to right as \(8.29\times 10^{-3}\), \(1.84\times 10^{-2}\), and \(1.47\times 10^{-1}\), respectively. This demonstrates that the model trained under condition (1) is performing the best.

Figure 11.— Comparison of burst forecasts across all training conditions (excluding the 1000 training length for visualization purposes). Condition (1) gives the lowest NMSE (\(8.29\times 10^{-3}\)) and is highlighted with increased line width.

## 7. Conclusion

We introduced the Opt-ESN model to forecast the mass accretion in protostellar disk evolution. This model exploits the stochastic nature of echo state networks and introduces its use in time-domain astronomy. We applied our model to a series of synthetic mass accretion data sets simulated by solving the hydrodynamical equations for a protostellar disk. The model achieved predictions with a low normalized mean square error (NMSE) (\(\sim 10^{-5}\) to \(10^{-3}\)) for forecasts ranging from 0.099 to 3.793 kyr. Additionally, the model successfully resolved the occurrence of an episodic burst with low NMSE when we added 1% and 5% noise to the data. However, our findings also suggest that the model is not immune to degradation under scenarios of data limitation. Our implementation demonstrates the predictive capabilities of the Opt-ESN when applied to time series data. As we transition into a new era of time domain astronomy, understanding and developing robust statistical time series models is becoming increasingly important. Importantly, the scientific return of our work goes much beyond its application to observations in the optical domain. There may be synergies with observations made in the radio domain (e.g., fast radio bursts) with facilities like the Canadian Hydrogen Intensity Mapping Experiment (CHIME) and the Australian Square Kilometer Array Pathfinder (ASKAP).
Likewise, in the gravitational-wave arena, the Opt-ESN can play a crucial role in detecting/denoising black hole and neutron star merger signals observed from facilities like the Laser Interferometer Gravitational-Wave Observatory (LIGO) and the Virgo interferometer, as well as the upcoming Kamioka Gravitational Wave Detector (KAGRA) and LIGO India.

## Acknowledgments

S.B. is supported by a Discovery Grant from NSERC. S.A. is supported by the NASA Postdoctoral Program (NPP). S.A. acknowledges that a portion of this research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration (80NM0018D0004). E.I.V. acknowledges support by the Ministry of Science and Higher Education of the Russian Federation (State assignment in the field of scientific activity 2023, GZ0110/23-10-IF).

## Appendix

Supplemental material is provided here for reference. This includes some background discussion on stationarity, as well as additional figures and tables that summarize particular sets of results.

### Stationarity

A process is considered to be stationary if the underlying distribution is constant over time (Brockwell & Davis, 2002; Hamilton, 2020). This effectively translates to having constant values of its first four moments. Alternatively, we define a time series to be covariance-stationary if it is only constant in its first and second moments. More formally, let \(X_{t}\) be a time series with \(\mathbb{E}(X_{t}^{2})<\infty\). The first and second moments are the mean and covariance functions, respectively:

\[\mu_{X}(t)=\mathbb{E}(X_{t}), \tag{1}\]
\[\Gamma_{X}(r,s)=\text{Cov}(X_{r},X_{s}),\quad\forall r,s\,. \tag{2}\]

Thus, the process \(X_{t}\) is said to be covariance stationary if \(\mu_{X}(t)\) is independent of \(t\) and \(\Gamma_{X}(t+h,t)\) is independent of \(t\) for all \(h\). Furthermore, the autocovariance function must be even and nonnegative definite (Brockwell & Davis, 2002). That is, for a real-valued vector \(\mathbf{A}\) having components \(a_{i}\), we have

\[\sum_{i,j}a_{i}\Gamma_{X}(i-j)a_{j}\geq 0\,. \tag{3}\]

Additionally, a stationary process will have the roots of its characteristic equation lie inside the unit circle (Patterson, 2011). That is, if the underlying process has a characteristic root \(\geq 1\), then it is nonstationary. Assume the variable \(X_{t}\) can be written as a \(p^{\text{th}}\) order autoregressive process:

\[X_{t}=\alpha_{1}X_{t-1}+\alpha_{2}X_{t-2}+\cdots+\alpha_{p}X_{t-p}+\epsilon_{t}\,, \tag{4}\]

where the innovations \(\epsilon_{t}\) are uncorrelated with mean zero and constant variance. If the characteristic equation

\[\lambda^{p}-\alpha_{1}\lambda^{p-1}-\alpha_{2}\lambda^{p-2}-\cdots-\alpha_{p}=0 \tag{5}\]

has roots \(\lambda\geq 1\) of multiplicity \(m\), then the process is nonstationary with integration order \(m\), denoted \(I(m)\). There are several approaches to assessing whether a time series is stationary or not. The augmented Dickey-Fuller (ADF) test is among a popular set of unit root tests for time series data. It involves testing the null hypothesis that a unit root is present in the underlying process, making the time series nonstationary (Patterson, 2011). The ADF test assumes the underlying process can be modeled by

\[\Delta X_{t}=\alpha_{0}+\alpha_{1}t+\rho X_{t-1}+\gamma_{1}\Delta X_{t-1}+\cdots+\gamma_{p-1}\Delta X_{t-p+1}+\epsilon_{t}\,.
\tag{6}\]

In this differenced form, the process has a unit root if \(\rho=0\) and is instead considered stationary if \(\rho<0\) (equivalently, an autoregressive coefficient on \(X_{t-1}\) equal to or less than one in the levels form). As such, the test is carried out under the null hypothesis that \(\rho=0\) against the alternative that \(\rho<0\), where the test statistic \(\text{DF}_{\rho}\) is given as

\[\text{DF}_{\rho}=\frac{\hat{\rho}}{\text{SE}(\hat{\rho})}\,. \tag{7}\]

Here, \(\text{SE}(\hat{\rho})\) is the standard error, and the value \(\text{DF}_{\rho}\) is compared to the respective critical value of the Dickey-Fuller distribution. In our implementation, we take stationarity to mean covariance-stationary and utilize the ADF test at 5% significance with \(p=12\times(N_{t}/100)^{1/4}\) (the default setting in Python), where \(N_{t}\) is the number of observations.

Figure 12 demonstrates the Opt-ESN prediction of the fluctuating component of the simulation data.

Figure 12: The Opt-ESN prediction on the fluctuating component of the simulation data. The solid black line is representative of the simulation data used in training/validation, the dotted black is the out-of-time testing segment of the data and the red line is the network's prediction.

Figure 13 demonstrates the Opt-ESN prediction of the deterministic component of the simulation data.

Figure 13: The Opt-ESN prediction on the deterministic component of the simulation data. The solid black line is representative of the simulation data used in training/validation, the dotted black is the out-of-time testing segment of the data and the red line is the network's prediction.

_Performance Assessment_

The tables below summarize the model performance metrics. Table 2 contains the hyperparameters and forecast performances for each model component (fluctuating, deterministic, and overall). Furthermore, Table 3 demonstrates the performance metrics on forecasting the episodic burst under each of the three training conditions.
\begin{table} \begin{tabular}{l l c c c c c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multirow{2}{*}{**Component**} & \multicolumn{6}{c}{**Hyperparameters**} & \multirow{2}{*}{\(\Lambda\)} & \multirow{2}{*}{\(\Lambda\cdot N_{\mathrm{t}}\)} & \multirow{2}{*}{\(t\) (kyr)} & \multicolumn{2}{c}{**NMSE**} \\ \cline{3-10} & & \(\varrho\) & \(c_{r}\) & \(\alpha\) & \(\rho\) & & & Validation & Out-of-Time \\ \hline \multirow{3}{*}{Model 26} & Fluctuating & 1.0000 & 0.1000 & 0.2000 & 0.5000 & & & & 1.00e\(-\)4 & 4.05e\(-\)5 \\ & Deterministic & 0.1000 & 0.1000 & 0.2000 & 0.7000 & 0.208 & 10.41 & 1.978 & 1.33e\(-\)5 & 1.05e\(-\)5 \\ & Overall & & & & & & & 5.11e\(-\)5 & 1.86e\(-\)5 \\ \hline \multirow{3}{*}{Model 27} & Fluctuating & 1.0000 & 0.1000 & 0.9000 & 0.9900 & & & & 3.75e\(-\)4 & 2.91e\(-\)4 \\ & Deterministic & 0.1886 & 0.1000 & 0.4655 & 0.1957 & 0.201 & 10.07 & 0.975 & 3.15e\(-\)4 & 1.01e\(-\)3 \\ & Overall & & & & & & & 3.84e\(-\)4 & 8.15e\(-\)4 \\ \hline \multirow{3}{*}{Model 29} & Fluctuating & 0.0769 & 0.0100 & 0.1117 & 0.6774 & & & & 1.68e\(-\)3 & 2.32e\(-\)3 \\ & Deterministic & 0.5000 & 0.1000 & 0.2000 & 0.5000 & 0.184 & 9.22 & 0.341 & 1.99e\(-\)4 & 2.94e\(-\)4 \\ & Overall & & & & & & & 1.34e\(-\)3 & 2.28e\(-\)3 \\ \hline \multirow{3}{*}{Model 30} & Fluctuating & 0.0298 & 0.0100 & 0.2680 & 0.3306 & & & & 1.78e\(-\)4 & 7.97e\(-\)5 \\ & Deterministic & 0.0783 & 0.0100 & 0.1970 & 0.2960 & 0.162 & 8.12 & 3.793 & 1.53e\(-\)5 & 2.15e\(-\)5 \\ & Overall & & & & & & & 1.83e\(-\)4 & 5.93e\(-\)5 \\ \hline \multirow{3}{*}{Model 31} & Fluctuating & 0.5000 & 0.1000 & 0.5000 & 0.8500 & & & & 1.06e\(-\)3 & 1.23e\(-\)3 \\ & Deterministic & 0.5000 & 0.1000 & 0.5000 & 0.2000 & 0.199 & 9.94 & 0.312 & 1.58e\(-\)4 & 8.95e\(-\)4 \\ & Overall & & & & & & & 1.15e\(-\)3 & 1.91e\(-\)3 \\ \hline \multirow{3}{*}{Model 32} & Fluctuating & 0.0673 & 0.0100 & 0.3224 & 0.2933 & & & & 7.64e\(-\)4 & 9.50e\(-\)4 \\ & Deterministic & 0.0481 & 0.0100 & 0.1590 & 0.3532 & 0.192 & 9.61 & 0.099 & 1.02e\(-\)4 & 9.98e\(-\)4 \\ \cline{1-1} & Overall & & & & & & & 2.85e\(-\)4 & 1.18e\(-\)3 \\ \hline \hline \end{tabular} \end{table} Table 2Opt-ESN hyperparameter settings and performance summary over each simulation model. 
\begin{table} \begin{tabular}{c l c c c c c c c c} \hline \hline \multirow{2}{*}{**Condition**} & \multirow{2}{*}{**Component**} & \multicolumn{6}{c}{**Hyperparameters**} & \multirow{2}{*}{**NMSE**} & \multirow{2}{*}{**Training length**} & \multirow{2}{*}{**Level of noise**} \\ \cline{3-10} & & \(\varrho\) & \(c_{r}\) & \(\alpha\) & \(\rho\) & & & \\ \hline \multirow{4}{*}{(1)} & Fluctuating & 0.6661 & 0.1000 & 0.1836 & 0.6394 & 1.64e\(-\)2 & & \\ & Deterministic & 0.8185 & 0.1000 & 0.2583 & 0.2953 & 2.46e\(-\)4 & & \\ & Overall & & & & & 8.29e\(-\)3 & & \\ \hline \multirow{4}{*}{(2)} & Fluctuating & 0.0390 & 0.1000 & 0.1535 & 0.4175 & 2.24e\(-\)2 & & \\ & Deterministic & 0.2091 & 0.1000 & 0.3273 & 0.5917 & 1.81e\(-\)3 & 5000 & 0\% \\ & Overall & & & & & 1.07e\(-\)2 & & \\ \cline{1-1} & Fluctuating & 0.0474 & 0.1000 & 0.4446 & 0.1319 & 5.16e\(-\)2 & & \\ \cline{1-1} & Deterministic & 0.6935 & 0.1000 & 0.7389 & 0.3642 & 3.10e\(-\)3 & 2500 & 0\% \\ \cline{1-1} & Overall & & & & & 2.25e\(-\)2 & & \\ \cline{1-1} \cline{2-10} & Fluctuating & 0.7489 & 0.1000 & 0.0102 & 0.9848 & 1.14e\(+\)5 & & \\ \cline{1-1} & Deterministic & 0.0184 & 0.1000 & 0.4612 & 0.1193 & 1.74e\(+\)0 & 1000 & 0\% \\ \cline{1-1} & Overall & & & & & 6.38e\(+\)4 & & \\ \hline \multirow{4}{*}{(3)} & Fluctuating & 0.3026 & 0.1000 & 0.1755 & 0.8945 & 2.58e\(-\)2 & & \\ \cline{1-1} & Deterministic & 0.6681 & 0.1000 & 0.4042 & 0.2133 & 1.27e\(-\)3 & 10000 & 1\% \\ \cline{1-1} & Overall & & & & & 1.84e\(-\)2 & & \\ \cline{1-1} \cline{2-10} & Fluctuating & 1.0000 & 0.1000 & 0.3000 & 0.9900 & 6.81e\(-\)2 & & \\ \cline{1-1} & Deterministic & 0.7527 & 0.1000 & 0.0296 & 0.8296 & 1.36e\(-\)1 & 10000 & 5\% \\ \cline{1-1} & Overall & & & & & 1.47e\(-\)1 & & \\ \hline \hline \end{tabular} \end{table} Table 3Opt-ESN model settings used in episodic burst predictions.
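As a final supplemental note, the reservoir-combination step of Equations (8)-(9) reduces to an ordinary least-squares fit over the validation segment. The sketch below assumes the \(L\) reservoir outputs are stored row-wise; the function names and the unconstrained least-squares treatment are our assumptions, not the paper's exact implementation.

```python
import numpy as np

def optimal_weights(y_reservoirs, x_val):
    """Eq. (9): find beta minimizing ||x_val - sum_j beta_j y^(j)||^2.
    y_reservoirs: (L, T) validation outputs of the L stacked reservoirs."""
    Y = np.asarray(y_reservoirs).T                     # (T, L) design matrix
    beta, *_ = np.linalg.lstsq(Y, np.asarray(x_val), rcond=None)
    return beta

def opt_esn_output(y_reservoirs, beta):
    """Eq. (8): weighted sum of the L reservoir realizations."""
    return np.tensordot(beta, np.asarray(y_reservoirs), axes=1)
```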
2302.12897
Neutron Star Mergers and the Quark Matter Equation of State
As neutron stars merge they can approach very high nuclear density. Here, we summarize recent results for the evolution and gravitational wave emission from binary neutron star mergers using a variety of nuclear equations of state with and without a crossover transition to quark matter. We discuss how the late time gravitational wave emission from binary neutron star mergers may possibly reveal the existence of a crossover transition to quark matter.
Grant J. Mathews, Atul Kedia, Hee Il Kim, In-Saeng Suh
2023-02-24T21:16:39Z
http://arxiv.org/abs/2302.12897v1
# Neutron Star Mergers and the Quark Matter Equation of State

###### Abstract

As neutron stars merge they can approach very high nuclear density. Here, we summarize recent results for the evolution and gravitational wave emission from binary neutron star mergers using a variety of nuclear equations of state with and without a crossover transition to quark matter. We discuss how the late time gravitational wave emission from binary neutron star mergers may possibly reveal the existence of a crossover transition to quark matter.

## 1 Introduction

In recent work [1] we have explored the effects of a crossover transition to quark matter on the emergent gravitational waves from binary neutron star mergers. In this paper we summarize that work and other efforts toward unraveling the effects of the formation of quark matter during neutron-star mergers. Neutron stars (NSs) and NS binaries can probe the equation of state (EOS) at supra-nuclear densities (for recent reviews see Refs. [2; 3]). Indeed, the detection of gravitational waves (GWs) from the GW170817 event by the LIGO-Virgo Collaboration [4; 5] provided new insights into the properties of neutron-star matter [6]. Beyond that, determinations of NS masses and radii by the NICER mission also constrain the EOS of nuclear matter [7; 8; 9]. Tidal effects can be inferred from the signal in ground-based GW observatories [10; 11; 12]. In the LIGO-Virgo events, the tidal deformability (\(\Lambda\)) of a NS of mass \(M=1.4\) M\({}_{\odot}\) has also been inferred to be \(\Lambda_{1.4}<800\) (at 90% C.L.) for a low-spin prior [4], and the radius constraint for a \(M=1.4\) M\({}_{\odot}\) NS was deduced to be \(R_{1.4}<13.6\) km. Subsequently, this has been further constrained to be \(R_{1.4}=11.9\pm 1.4\) km [5]. Also, newer constraints on the maximum NS mass and a lower limit of the tidal deformability were also inferred [13; 14]. Adding the requirement that the equation of state asymptotically approaches the regime of perturbative QCD [14; 15; 16; 17; 18; 19] leads to constraints on the radius of a maximum-mass NS of \(R_{\rm max}<13.6\) km and \(\Lambda_{1.4}>120\) [14]. It has also been shown that an EOS with a phase transition can imply \(8.53\) km \(<\) R\({}_{1.4}<13.74\) km and \(\Lambda_{1.4}>35.5\) at the 3 \(\sigma\) level [13]. There is currently much interest in the fact that a phase transition in the EOS can produce a variety of dynamical collapse patterns (cf. [20]). As explained below, such changes in the EOS can produce a shift of the maximum peak frequency \(f_{peak}\) (sometimes denoted as \(f_{2}\)) in the detected power spectral density (PSD) [21; 22; 23]. Such a shift can violate the universal relation between \(f_{peak}\) and tidal deformability that has been noted for pure hadronic EOSs [24]. However, an EOS with a phase transition may not conform to the same empirical universal relations [25; 26; 27; 28; 29]. Hence, an observed shift might indicate the formation of quark matter. This conclusion, however, is model dependent (e.g. [30; 31]) and also depends upon the duration of the merger remnant [20; 32; 33]. A number of recent works have discussed EOS effects on the GW signal. Some of them have also considered the formation of quark matter [20; 22; 23; 30; 32; 33; 34; 35]. Most of these studies, however, have considered a first-order phase transition. In this case a mixed quark-hadron phase forms which can remove pressure support, leading to a prompt collapse.
However, since the strength of the order parameter for the QCD phase transition is not known, a simple crossover or a weakly first-order transition is possible [36; 37; 38; 39; 40]. The pressure in the regime of the crossover could be large compared to a hadronic or a first-order transition. This could extend the postmerger phase. Hence, an observation of a long-duration post-merger GW event could possibly indicate both the order of the transition and the coupling strength of quark matter in the crossover regime [1]. In Ref. [1] we examined the crossover to the formation of quark-gluon plasma during the postmerger and demonstrated that the GW signal from the postmerger phase is indeed sensitive to the quark-matter EOS. It was shown that the properties of quark matter in the non-perturbative crossover regime of QCD increase the pressure of the postmerger remnant. This leads to a longer duration of the late time gravitational radiation, such that the GW emission might become a means to probe the non-perturbative regime of quark matter. In particular, in Ref. [1] various parameterizations of the quark-hadron crossover (QHC19) EOS of [41] were investigated. A complementary study has also been made in Ref. [42] based upon the newer (QHC21) version, with similar conclusions. As the density increases, a critical point is thought to appear. Above that density a weak first-order chiral transition may occur [43]. In the QHC19 EOS the transition from hadronic to quark matter is treated as a continuous crossover parameterized with a 5th order polynomial. The observational constraints on the NS mass (\(>2\) M\({}_{\odot}\)) [44; 45; 46] and the radius bounds from the LIGO-Virgo analysis are satisfied in all versions of this EOS. Within this context the tidal deformability, maximum chirp frequency \(f_{max}\), and power spectral density frequency peak \(f_{peak}\) were analyzed in [1] as a means to identify observational signatures of the crossover to quark matter during binary NS mergers. The crucial postmerger GW emission occurs in a high frequency range (1-4 kHz). Although this frequency is outside the current LIGO/aVirgo/KAGRA window, it is anticipated that the next generation of GW observatories, such as the Einstein Telescope [47] and the Cosmic Explorer [48], will be sensitive in this frequency range. We argue that observations of such higher frequency gravitational wave emission in the next generation detectors may have the possibility to characterize both the order of the transition and the physics of the crossover regime of quark matter.

## 2 Equations of state

At high baryon density and chemical potential the QCD strong coupling \(\alpha_{\rm s}\) approaches unity. A non-perturbative approach to QCD is then necessary to describe the generation of constituent quark masses, chiral symmetry breaking [49], quark pairing, color superconductivity [50], etc. For our studies we utilized various parameterizations of the QHC19 EOS [41]. In that work, the low-density hadronic regime (i.e. less than twice the nuclear saturation density, \(<2\)\(n_{0}\)) utilized the Togashi EOS [51; 52]. This is an extended version of the relatively soft APR EoS [53]. Our study [1] instead utilized the SLy [54] and the GNH3 [55] EoSs, bracketing the physics of a soft and a stiff EoS, respectively. The QHC19 EOS accounts for the non-perturbative QCD effects in the context of the Nambu-Jona-Lasinio model (see Refs. [56; 57; 58]). The Lagrangian contains four coupling constants.
These are: 1) the scalar coupling (\(G\)); 2) the coefficient of the Kobayashi-Maskawa-'t Hooft vertex (\(K\)); 3) the vector coupling for universal quark repulsion (\(g_{v}\)); and 4) the diquark strength (\(H\)). In the QHC19 EoS, only two coupling constants (\(g_{v}\) and \(H\), scaled to \(G\)) are varied to construct versions of the model. The matter pressure increases as these couplings increase [40; 41]. In [1] we utilized three parameter sets from [41], identified as QHC19B [\((g_{V},H)=(0.8,1.49)\)], QHC19C [\((g_{V},H)=(1.0,1.55)\)], and QHC19D [\((g_{V},H)=(1.2,1.61)\)]. The pressure in the crossover regime (\(2~{}n_{0}<n<5~{}n_{0}\)) is given analytically with fifth-order polynomials in terms of the baryonic chemical potential. The tidal deformability (\(\Lambda<800\) for \(M_{0}=1.4\) M\({}_{\odot}\)) [4], the maximum mass [44; 45; 46], and radius constraints of neutron stars are all satisfied with these parameterizations of the QHC19 EOS. For numerical speed we implemented the QHC19 EOSs using piecewise-polytropic fits as described by Ref. [59] and modified as discussed in [1].

## 3 Simulations

In [1] binary merger simulations were evolved using the numerical relativity software platform, the Einstein Toolkit (ET) [60]. This platform incorporates full general relativity in three spatial dimensions based upon the BSSN-NOK formalism [61; 62; 63; 64; 65]. The hydrodynamics was evolved with the use of the GRHydro code [66; 67; 68] based on the Valencia formulation [69; 70]. The initial conditions were generated using LORENE [71; 72]. The thorn Carpet [73; 74] was used for adaptive mesh refinement based upon six mesh refinement levels and a minimum grid of \(0.3125\) in Cactus units (\(\approx 461\) m). The thermal pressure component was implemented in GRHydro using a constant adiabatic index \(\Gamma_{\rm th}=1.8\) as in Ref. [75]. The GWs emitted during the binary merger were extracted using the Newman-Penrose formalism, which is based upon a multipole expansion of the spin-weighted spherical harmonics of the Weyl scalar \(\Psi_{4}^{(l,m)}(\theta,\phi,t)=\ddot{h}_{+}^{(l,m)}(\theta,\phi,t)+i\ddot{h}_{\times}^{(l,m)}(\theta,\phi,t)\). The two polarizations of the strain \(h_{+}(\theta,\phi,t)\) and \(h_{\times}(\theta,\phi,t)\) were calculated by summing over the \((l,m)\) modes and integrating twice. The isolated NS models involved baryonic masses of \(M_{B}=1.45\), \(1.50\), \(1.55\) M\({}_{\odot}\), with similar gravitational masses \(\sim 1.35-1.4\) M\({}_{\odot}\). These were placed on the grid with an initial coordinate separation between centers of \(45\) km. Figure 1 (from Ref. [1]) illustrates the evolution of the maximum density during the simulations. This figure shows that the densities in the NSs even before the merger are well into the crossover range (\(2\)-\(5~{}n_{0}\)). The NS core densities remain in the crossover domain at densities of about \(n\sim 2.95-3.15~{}n_{0}\) during the approach to merger. Subsequently, the maximum density rises until it exceeds \(\sim 5-6~{}n_{0}\). At this point the core of the system collapses to the central black hole, as evidenced by a density spike in this figure. Figure 2 illustrates the evolution of the strain for various equations of state as labelled, but for nearly identical initial conditions. The striking feature is that the GW signal endures for a much longer time for the cases with a QHC EOS. Moreover, the larger the quark coupling, i.e. going from QHC19B to QHC19C, the longer the duration of the postmerger GW emission.
This suggests that one might learn the strength of the non-perturbative quark-matter couplings from the observation of an extended post merger phase. Indeed, the postmerger duration, i.e., the lifetime of the hyper-massive neutron star (HMNS), strongly depends on the EOS stiffness at the crossover densities. When densities in excess of the nuclear saturation density are achieved in the core for the hadronic EOSs, it is impossible to stop the merger remnant from collapsing into a black hole. The postmerger remnants from binaries based upon the QHC19 EOS, however, have sufficient pressure to delay gravitational collapse. As the EOS stiffness within the QHC models increases, longer lifetimes of their HMNS remnants are apparent. Even the QHC19B EOS produces a much longer postmerger duration than the hadronic EOSs. For the case of QHC19D, even the highest-mass case fails to collapse. Of course, what is actually detected in GW observatories is not the strain, but its Fourier transform. In particular, an effective Fourier amplitude can be deduced,

\[\tilde{h}_{+,\times}(f)=\int h_{+,\times}(t)e^{-i2\pi ft}dt\,. \tag{1}\]

This is usually plotted as a normalized power spectral density (PSD) given by \(2\tilde{h}(f)f^{1/2}\) [76]. Figure 3 shows some PSD spectra deduced from the simulations in Ref. [1]. The upper green curve shows the LIGO sensitivity while the lower blue and orange curves show the expected sensitivity of the future Einstein Telescope and Cosmic Explorer, respectively. The first peak at around 1 kHz for all of the simulations corresponds to the initial contact of the merging neutron stars, while the second peak near 2 kHz corresponds to the maximum chirp strain, \(f_{max}=\frac{1}{2\pi}\frac{d\phi}{dt}|_{max}\), where \(\phi\) is the phase of the strain (see [76]). Of particular interest for probing quark matter, however, is the third peak, \(f_{peak}\), at around 3 kHz corresponding to the long postmerger phase. What can be noted in this figure is that the amplitude of \(f_{peak}\) directly correlates with the duration of the postmerger system, and therefore relates to the strength of the coupling constants in the QHC19 EOS parameterizations. In spite of the promising feature that the \(f_{peak}\) PSD becomes large for a crossover to quark matter, it is possible that other equations of state can lead to such a peak [76].

Figure 1: Evolution of maximum rest-mass density vs time for several equations of state from [1]. Numbers next to the EOS label indicate the gravitational mass of an isolated neutron star for each case. The blue band indicates QHC-crossover densities. In all cases, the NSs start in the crossover density range (2–5 \(n_{0}\)) followed by a rise in density, leading to a collapse to a black hole (in all except the bottom-right panel). The bottom-right case (QHC19D 1.399) does not form a black hole within the simulation time.

What is needed, therefore, is another unique signature to specifically identify quark matter. In [1] it was suggested that the softness of the QHC equations of state at lower densities, \(\sim 3n_{0}\), is apparent in their pre-merger \(f_{max}\) frequency, whereas the stiffness at higher densities is indicated in the postmerger \(f_{peak}\) frequency. This dual nature of the QHC equations of state (having both softness and stiffness) might be revealed by observations of both \(f_{max}\) and \(f_{peak}\) in a single GW event. This is illustrated in Figure 4 from Ref. [1].
The upper panel shows that \(f_{max}\) values for the QHC equations of state in our simulations obey the scaling relations with tidal deformability as noted in [77]. This also shows that the QHC simulations all cluster with a soft EOS like SLy in the initial chirp. However, the lower panel shows the relation between \(f_{peak}\) and the pseudo-averaged rest-mass density. Such a correlation was suggested in Refs. [25; 76]. This figure shows that in the later 3 kHz post-merger phase, the \(f_{peak}\) frequencies cluster somewhere between a soft and stiff EOS. Hence, observing such a transition in the PSD between softness and stiffness, as evidenced in the different behaviors of \(f_{max}\) and \(f_{peak}\), may indicate the formation of quark matter. Once the existence of quark matter is confirmed, the amplitude of the PSD at \(f_{peak}\) might be suggestive of the strength of the quark couplings.

Figure 2: Evolution of the GW strain \(h_{+,\times}\) vs time for several representative simulations with nearly identical starting conditions, but for different equations of state as labelled. The numbers indicate the isolated neutron star mass for each EOS as indicative of the similarity of initial conditions. The upper two curves are for parameterizations of the QHC19 EOS, while the bottom two curves are for a soft and stiff pure hadronic EOS. Note that the signal continues for a much longer duration in the cases with a crossover to quark matter.

## 4 Acknowledgements

Work at the Center for Astrophysics of the University of Notre Dame is supported by the U.S. Department of Energy under Nuclear Theory Grant No. DE-FG02-95-ER40934. This research was supported in part by the Notre Dame Center for Research Computing through the high performance computing resources. H.I.K. graciously thanks Jinho Kim and Chunglee Kim for continuous support. The work of H.I.K. was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education through the Center for Quantum Spacetime (CQUeST) of Sogang University (Grant No. NRF-2020R1A6A1A03047877). This research used resources of the Oak Ridge Leadership Computing Facility at the Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725. Notice: This manuscript has been authored by UT-Battelle, LLC under Contract No. DE-AC05-00OR22725 with the U.S. Department of Energy. The publisher, by accepting the article for publication, acknowledges that the U.S. Government retains a non-exclusive, paid up, irrevocable, world-wide license to publish or reproduce the published form of the manuscript, or allow others to do so, for U.S. Government purposes. The DOE will provide public access to these results in accordance with the DOE Public Access Plan ([http://energy.gov/downloads/doe-public-access-plan](http://energy.gov/downloads/doe-public-access-plan)). Software used was as follows: The Einstein Toolkit (Ref. [60]; [https://einsteintoolkit.org](https://einsteintoolkit.org)), LORENE (Refs. [71; 72]), PyCactus ([https://bitbucket.org/GravityPR/pycactus](https://bitbucket.org/GravityPR/pycactus)), and TOVsolver ([https://github.com/amotormenko/TOVsolver](https://github.com/amotormenko/TOVsolver)).
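As a rough illustration of the diagnostic built on Equation (1), the following sketch computes the normalized PSD \(2\tilde{h}(f)f^{1/2}\) from a uniformly sampled strain series. No windowing is applied and all names are ours; this is only a schematic of the post-processing step, not the pipeline used in Ref. [1].

```python
import numpy as np

def normalized_psd(h, dt):
    """Approximate h~(f) (Eq. 1) for a real strain h(t) sampled at
    spacing dt, and return f together with 2*|h~(f)|*sqrt(f)."""
    h = np.asarray(h)
    f = np.fft.rfftfreq(len(h), d=dt)
    h_tilde = np.fft.rfft(h) * dt   # Riemann-sum approximation of the FT
    return f, 2.0 * np.abs(h_tilde) * np.sqrt(f)
```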
2310.13998
Transductive Learning for Textual Few-Shot Classification in API-based Embedding Models
Proprietary and closed APIs are becoming increasingly common to process natural language, and are impacting the practical applications of natural language processing, including few-shot classification. Few-shot classification involves training a model to perform a new classification task with a handful of labeled data. This paper presents three contributions. First, we introduce a scenario where the embedding of a pre-trained model is served through a gated API with compute-cost and data-privacy constraints. Second, we propose a transductive inference, a learning paradigm that has been overlooked by the NLP community. Transductive inference, unlike traditional inductive learning, leverages the statistics of unlabeled data. We also introduce a new parameter-free transductive regularizer based on the Fisher-Rao loss, which can be used on top of the gated API embeddings. This method fully utilizes unlabeled data, does not share any label with the third-party API provider and could serve as a baseline for future research. Third, we propose an improved experimental setting and compile a benchmark of eight datasets involving multiclass classification in four different languages, with up to 151 classes. We evaluate our methods using eight backbone models, along with an episodic evaluation over 1,000 episodes, which demonstrate the superiority of transductive inference over the standard inductive setting.
Pierre Colombo, Victor Pellegrain, Malik Boudiaf, Victor Storchan, Myriam Tami, Ismail Ben Ayed, Celine Hudelot, Pablo Piantanida
2023-10-21T12:47:10Z
http://arxiv.org/abs/2310.13998v1
# Transductive Learning for Textual Few-Shot Classification ###### Abstract Proprietary and closed APIs are becoming increasingly common to process natural language, and are impacting the practical applications of natural language processing, including few-shot classification. Few-shot classification involves training a model to perform a new classification task with a handful of labeled data. This paper presents three contributions. First, we introduce a scenario where the embedding of a pre-trained model is served through a gated API with compute-cost and data-privacy constraints. Second, we propose a transductive inference, a learning paradigm that has been overlooked by the NLP community. Transductive inference, unlike traditional inductive learning, leverages the statistics of unlabeled data. We also introduce a new parameter-free transductive regularizer based on the Fisher-Rao loss, which can be used on top of the gated API embeddings. This method fully utilizes unlabeled data, does not share any label with the third-party API provider and could serve as a baseline for future research. Third, we propose an improved experimental setting and compile a benchmark of eight datasets involving multi-class classification in four different languages, with up to 151 classes. We evaluate our methods using eight backbone models, along with an episodic evaluation over 1,000 episodes, which demonstrate the superiority of transductive inference over the standard inductive setting. ## 1 Introduction Recent advances in Natural Language Processing (NLP) have been largely driven by the scaling paradigm Kaplan et al. (2020); Rosenfeld et al. (2019), where larger models with increased parameters have been shown to achieve state-of-the-art results in various NLP tasks Touvron et al. (2023); Radford et al. (2019). This approach has led to the development of foundation models such as ChatGPT Lehman et al. (2023); Kocon et al. (2023); Brown et al. (2020), GPT-4 (OpenAI, 2023), GPT-3 Brown et al. (2020), T5 Raffel et al. (2020), and BERT Devlin et al. (2018), which have achieved unprecedented performance in text classification Liu et al. (2019), language modeling, machine translation Fan et al. (2021), and coding tasks Chen et al. (2021). Despite the success of the scaling paradigm, significant challenges still exist especially when the many practical constraints of real-world scenarios have to be met: labeled data can be severely limited (_i.e._, few-shot scenario Song et al. (2022); Ye et al. (2021)), data privacy is critical for many industries and has become the subject of increasingly many regulatory pieces Commission (2020, 2016), compute costs need to be optimized Strubell et al. (2019). Furthermore, these challenges are made even more complex as stronger foundation models are now available only through APIs (_e.g._, OpenAI's GPT-3, GPT-4 or ChatGPT, Anthropic's Claude or Google's PaLM Chowdhery et al. (2022)) which has led to some of their parameters being concealed, presenting new challenges for model adaptation Solaiman (2023). This paper is centered on the fundamental task of few-shot text classification, specifically focusing on cloud-based/API access. Specifically, we formulate three requirements for API-based few-shot learning (FSL) (see Fig. 
1): **(R1) Black-box scenario.** We focus on learning from models that are opaquely deployed in production to the end-user, who only has access to the end-point of the encoder, _i.e._, the resulting text embedding produced by the final layer of the network. **(R2) Low resources / computation time.** AI systems are often required to make rapid predictions at high frequencies in various real-world applications. Therefore, any few-shot classifier used in such scenarios should have a low training and inference time, as well as require minimal computational resources. **(R3) Limited Data Sharing.** When utilizing API models, data sharing becomes a major concern. In the current landscape, providers are increasingly offering less transparent procedures for training their networks. As a result, users prefer sharing as little information as possible, such as labeling schema and annotated data, to safeguard their data privacy.

**Shortcomings of Existing Works.** While numerous previous studies have addressed the popular _few-shot_ classification setting, to our knowledge no existing line of work adequately satisfies the three API requirements described above. In particular, prompt-based FSL (Schick and Schutze, 2020) and parameter-efficient fine-tuning FSL (Houlsby et al., 2019) both require access to the model's gradients, while in-context learning scales poorly with the task's size (_e.g._, number of shots, number of classes) (Chen et al., 2021; Min et al., 2021, 2022; Brown et al., 2020) and requires full data sharing. Instead, we focus on methods that can operate within API-based constraints. Under the **R1**, **R2**, and **R3** requirements, standard inductive learning (Liu et al., 2022) may be quite limiting. To mitigate the labeled data scarcity while retaining API compliance, we revisit transduction (Vapnik, 1999) in the context of textual few-shot classification. Specifically, in the context of FSL, transductive FSL (Liu et al., 2019) advocates leveraging unlabeled test samples of a task as an additional source of information on the underlying task's data distribution in order to better define decision boundaries. Such an additional source essentially comes for free in many _offline_ applications, including sentiment analysis for customer feedback, legal document classification, or text-based medical diagnosis. Our findings corroborate recent findings in computer vision (Liu et al., 2019; Ziko et al., 2020; Lichtenstein et al., 2020; Boudiaf et al., 2020; Hu et al., 2021) that substantial gains can be obtained from using transduction over induction, opening new avenues of research for the NLP community. However, the transductive gain comes at the cost of introducing additional hyperparameters that must be carefully tuned. Motivated by Occam's razor principle, we propose a novel hyperparameter-free transductive regularizer based on Fisher-Rao distances and demonstrate the strongest predictive performance across various benchmarks and models while keeping hyperparameter tuning minimal. We believe that this parameter-free transductive regularizer can serve as a baseline for future research.

### Contributions

In this paper, we make several contributions to the field of textual FSL. Precisely, our contributions are threefold:

**A new textual few-shot scenario:** We present a new scenario for FSL using textual API-based models that accurately captures real-world constraints.
Our new scenario opens up new research avenues and opportunities to address the challenges associated with FSL using API-based models, paving the way for improved performance in practical applications.

**A novel transductive baseline.** Our paper proposes a transductive FSL algorithm that utilizes a novel parameter-free Fisher-Rao-based loss. By leveraging only the network's embedding **(R1)**, our approach enables fast and efficient predictions **(R2)** without the need to share the labeling schema or the labels of few-shot examples, making it compliant with **(R3)**. This innovative method marks a significant step forward in the field of FSL.

**A truly improved experimental setting.** Previous studies on textual few-shot classification (Schick and Schutze, 2022, 2020; Mahabadi et al., 2022; Tam et al., 2021; Gao et al., 2020) have predominantly assessed their algorithms on classification tasks with a restricted number of labels (typically less than five). We take a step forward and create a benchmark that is more representative of real-world scenarios. Our benchmark relies on a total of eight datasets, covering multiclass classification tasks with up to 151 classes, across four different languages. Moreover, we further enhanced the evaluation process by not only considering 10 classifiers trained with 10 different seeds (Logan IV et al., 2021; Mahabadi et al., 2022), but also by relying on episodic evaluation over 1,000 episodes (Hospedales et al., 2021). Our results clearly demonstrate the superiority of transductive methods.

## 2 Related Work

### Few-shot learning in NLP

Numerous studies have tackled the task of FSL in Natural Language Processing (NLP) by utilizing pre-trained language models (Devlin et al., 2018; Liu et al., 2019; Radford et al., 2019; Yang et al., 2019). These methods can be classified into three major categories: prompt-based, parameter-efficient tuning, and in-context learning.

**Prompt-based FSL**: Prompt-based FSL involves the use of natural language prompts or templates to guide the model to perform a specific task [14, 15]. For example, the seminal work [13] proposed a model called PET, which uses a pre-defined set of prompts to perform various NLP tasks such as text classification. They also impose the choice of a verbalizer, which highly impacts classification performance [11, 12]. However, recent studies have questioned the benefits of prompt-based learning due to the high variability in performance caused by the choice of prompt [15]. To address this issue, researchers have proposed prompt tuning, which involves a few learnable parameters in addition to the prompt [13]. Nevertheless, these approaches face limitations when learning from an API: (i) encoder access for gradient computation is infeasible (as in **R1**), (ii) prompting requires sending data and labels, which raises privacy concerns (as in **R3**), and (iii) labeling new points is time-consuming (see **R3**) and expensive due to the need to send all shots with each input1.

Footnote 1: The cost of API queries is determined by the number of input tokens that are transmitted.

**Parameter-efficient fine-tuning.** These methods, such as adapters [1, 16], keep most of the model's parameters fixed during training and only update small feed-forward networks that are inserted within the larger model architecture. A recent example is T-FEW [15], which adds learned vectors that rescale the network's internal activations.
Additionally, it requires a set of manually created prompts for each dataset, making it hard to use in practice. Relying on parameter-efficient fine-tuning methods with an API is not possible due to the need to compute gradients of the encoder (as per **R1**) and the requirement to send both the labeling schema and the labels, which violates **R3**.

**In Context Learning (ICL).** In-context learning models are models that utilize input-to-output training examples as prompts to make predictions, without any parameter updates [12]. These models, such as text-davinci, rely solely on the provided examples to generate predictions, without any additional training. However, a significant drawback of this approach is that the user must supply the input, label examples, and task description, which becomes prohibitively expensive when the number of classes or shots increases, is slow [15] (**R2**), and raises data privacy concerns (as highlighted in **R3**). Additionally, the inability to reuse text embeddings for new tasks or with new labels without querying the model's API limits practicality and scalability, making reusable encoding unfeasible for in-context learning models2.

Footnote 2: Furthermore, as the number of considered classes increases, the fixed size of the transformer limits the number of possible shots that can be fed to the model. Previous studies have often neglected this limitation by focusing on a small number of labels.

**Meta-learning.** Meta-learning approaches have long stood as the _de-facto_ paradigm for FSL ([13, 12, 14, 15, 16, 17, 18, 19]). In meta-learning, the objective is to provide the model with the intrinsic ability to learn in a data-efficient manner. For instance, MAML ([14, 15]), arguably the most popular meta-learning method, tries to train a model such that it can be fine-tuned end-to-end using only a few supervised samples while retaining high generalization ability. Unlike the three previous lines of work, meta-learning methods operate by modifying the training procedure and therefore assume access to both the training data and the model, which wholly breaks both **R1** and **R3**.

Figure 1: API-based FSL scenario. The black-box API provides embeddings from the pretrained encoder \(f_{\theta}\). The black-box scenario discards existing inductive approaches and in-context learning methods due to the inaccessibility of the model's parameters (**R1**) and privacy concerns (**R3**). This scenario allows tuning a classification head \(g_{\phi}\) (using induction or transduction) at low computational cost (**R2**) while retaining all support labels locally.

### Inductive vs transductive learning

Learning an inductive classifier on embeddings generated by an API-based model, as proposed by Snell et al. (2017), is a common baseline for performing FSL. This approach is prevalent in NLP, where a parametric model is trained on data to infer general rules that are applied to label new, unseen data (known as inductive learning; Vapnik, 1999). However, in FSL scenarios with limited labeled data, this approach can be highly ambiguous and lead to poor generalization. Transduction offers an attractive alternative to inductive learning (Sain, 1996). Unlike inductive learning, which infers general rules from training data, transduction involves finding rules that work specifically for the unlabeled test data.
By utilizing more data, such as unlabeled test instances, and aiming for a more localized rule rather than a general one, transductive learning has shown promise and practical benefits in computer vision Boudiaf et al. (2020, 2021); Ziko et al. (2020). Transductive methods yield substantially better performance than their inductive counterparts by leveraging the statistics of the query set Dhillon et al. (2019). However, this approach has not yet been explored in the context of textual data.

## 3 API-based Few-shot Learning

### Problem Statement

Let \(\Omega\) be the considered vocabulary and \(\Omega^{*}\) its Kleene closure. The Kleene closure corresponds to sequences of arbitrary size written with tokens in \(\Omega\), _i.e._, \(\Omega^{*}=\bigcup\limits_{i=0}^{\infty}\Omega^{i}\). Given an input space \(\mathcal{X}\) with \(\mathcal{X}\subseteq\Omega^{*}\) and a latent space \(\mathcal{Z}\), we consider a pre-trained backbone model \(f_{\theta}:\mathcal{X}\rightarrow\mathcal{Z}=\mathbb{R}^{d}\), where \(\theta\in\Theta\) represents the parameters of the encoder and \(d\) is the embedding dimension. In the API-based setting, we assume that we are unable to access the exact structure of \(f_{\theta}\); only the last encoder embedding is available for our use (see **R1**).

The objective of few-shot classification is to learn a classifier from limited labeled data and generalize it to new, unseen tasks or classes. To accomplish this, randomly sampled few-shot tasks are created from a test dataset \(\mathcal{D}_{test}:=\{(x_{i},y_{i})\}_{i=1}^{N_{test}}\) that has a set of unseen classes \(\mathcal{Y}_{test}\). Each task involves a few labeled examples from \(K\) different classes chosen at random among \(\mathcal{Y}_{test}\). These labeled examples constitute the support set \(S=\{x_{i},y_{i}\}_{i\in\mathcal{I}_{S}}\), with a size of \(|S|=N_{S}\times K\). Additionally, each task has an unlabeled query set \(Q=\{x_{i}\}_{i\in\mathcal{I}_{Q}}\) composed of \(|Q|=N_{Q}\times K\) unseen examples from each of the \(K\) classes. \(\mathcal{I}_{S}\) and \(\mathcal{I}_{Q}\) represent the indices drawn during the sampling process for the support set and query set, respectively. Pre-trained models use few-shot techniques and the labeled support sets to adapt to the tasks at hand and are evaluated based on their performance on the unlabeled query sets.

_Remark_: Setting the values of \(N\) and \(K\) is not standardized in textual FSL. Therefore, in all of our experiments, we rely on setting \((N,K)\in\{5,10\}^{2}\).
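To make the episodic protocol above concrete, the following minimal sketch draws one N-way K-shot task from a pool of test labels; the function and variable names are ours (not from the paper), and the embeddings are assumed to be pre-computed through the API.

```python
import numpy as np

def sample_episode(labels, n_way=5, k_shot=5, n_query=5, rng=None):
    """Draw the index sets I_S and I_Q of one N-way K-shot episode.

    labels: (N_test,) integer class labels of the test pool.
    Assumes every sampled class has at least k_shot + n_query examples.
    """
    rng = np.random.default_rng(rng)
    classes = rng.choice(np.unique(labels), size=n_way, replace=False)
    support_idx, query_idx = [], []
    for c in classes:
        pool = rng.permutation(np.flatnonzero(labels == c))
        support_idx.extend(pool[:k_shot])                # labeled shots of S
        query_idx.extend(pool[k_shot:k_shot + n_query])  # unlabeled queries Q
    return np.asarray(support_idx), np.asarray(query_idx)
```

The support and query embeddings are then simply `z[support_idx]` and `z[query_idx]`, where `z` stacks the API outputs \(f_{\theta}(x_{i})\); averaging the F1-score over many such episodes yields the evaluation protocol described later in Sec. 4.3.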
### Proposed Transductive Method

NLP few-shot classifiers rely only on inductive inference, while computer vision has shown significant performance improvements using transductive inference for FSL. Transductive inference succeeds in FSL because it jointly classifies all unlabeled query samples of a single task, leading to more efficient and accurate classification compared to inductive methods that classify one sample at a time. Let us begin by introducing some basic notation and definitions before presenting our new transductive loss based on the Fisher-Rao distance. In the API-based few-shot classification setting, our goal is to train a classification head \(g_{\phi}:\mathcal{Z}\rightarrow\mathbb{R}^{K}\) that maps the feature representations to the posterior distribution space for making predictions.

To simplify the equations in the rest of the paper, we use the following notations for the posterior predictions of each \(i\in\mathcal{I}_{S}\cup\mathcal{I}_{Q}\) and for the class marginals within \(Q\): \(p_{ik}=g_{\phi}(f_{\theta}(x_{i}))_{k}=\mathbb{P}(Y=k|X=x_{i};\theta,\phi)\) and \(\widehat{p}_{k}=\frac{1}{|Q|}\sum_{i\in\mathcal{I}_{Q}}p_{ik}=\mathbb{P}(Y_{Q}=k;\theta,\phi)\), where \(X\) and \(Y\) are the r.v.s associated with the raw features and labels, respectively, and \(Y_{Q}\) denotes the restriction of the r.v. \(Y\) to the set \(Q\). For training the classification head in the transductive setting, prior research aims at finding \(\phi=\arg\min\,\text{CE}-\lambda\,R_{Q}\), with \(\text{CE}:=-\frac{1}{|S|}\sum_{i\in\mathcal{I}_{S}}\sum_{k=1}^{K}y_{ik}\log(p_{ik})\) being the cross-entropy supervision on the support set (in which \(y_{ik}\) is the \(k^{\text{th}}\) coordinate of the one-hot encoded label vector associated to sample \(i\)) and \(R_{Q}\) being a transductive loss on the query set \(Q\). Note that this transductive regularization has been proposed in the literature based on the InfoMax principle (Cardoso, 1997; Linsker, 1988), and the inductive loss can be recovered by setting \(\lambda=0\). In what follows, we review the regularizers introduced in previous work.

**Entropic Minimization (H).** An effective regularizer for transductive FSL can be derived from the field of semi-supervised learning, drawing inspiration from the approach introduced in (Grandvalet and Bengio, 2004). This regularizer, proposed in (Dhillon et al., 2019), utilizes the conditional Shannon Entropy (Cover, 1999) of the predictions on the query samples during testing to enhance model generalization. Formally:

\[R_{Q}^{H}=\frac{1}{|Q|}\sum_{i\in\mathcal{I}_{Q}}\sum_{k=1}^{K}p_{ik}\log(p_{ik}). \tag{1}\]

**Mutual Information Maximization (I).** A promising alternative to entropic minimization for addressing the challenges of transductive FSL is to adopt the InfoMax principle. (Boudiaf et al., 2020) extended this idea, introduced in (Hu et al., 2017), and propose as a regularizer the following surrogate of the mutual information:

\[R_{Q}^{I}(\alpha)=-\sum_{k=1}^{K}\hat{p}_{k}\log\hat{p}_{k}+\alpha\frac{1}{|Q|}\sum_{i\in\mathcal{I}_{Q}}\sum_{k=1}^{K}p_{ik}\log(p_{ik}). \tag{2}\]

**Limitation of existing strategies**: Despite its effectiveness, the previous method has a few limitations that should be taken into account. One of these limitations is the need to fine-tune the weight of the different entropies through the hyperparameter \(\alpha\). This parameter-tuning process can be time-consuming and may require extensive experimentation to achieve optimal results. Additionally, recent studies have shown that relying solely on the conditional entropy term, which corresponds to the entropic minimization scenario of Equation 1, can lead to suboptimal performance in FSL.

### A Fisher-Rao Based Regularizer

In the FSL scenario, minimizing parameter tuning is crucial. Motivated by this, in this section, we introduce a new parameter-free transductive regularizer that fits into the InfoMax framework. Additionally, our loss inherits the attractive properties of the Fisher-Rao distance between soft-predictions \(\mathbf{q}:=(q_{1},\ldots,q_{K})\) and \(\mathbf{p}:=(p_{1},\ldots,p_{K})\), which is given by (Picot et al., 2023):

\[d_{\text{FR}}(\mathbf{q},\mathbf{p}):=2\arccos\left(\sum_{k=1}^{K}\sqrt{q_{k}\times p_{k}}\right). \tag{3}\]

The proposed transductive regularizer, denoted by \(R_{Q}^{\text{FR}}\), can be described for each single few-shot task as measuring the Fisher-Rao distance between pairs of query samples:

\[R_{Q}^{\text{FR}}:=\frac{1}{|Q|}\sum_{i\in\mathcal{I}_{Q}}-\log\sum_{j\in\mathcal{I}_{Q}}\sum_{k=1}^{K}\sqrt{p_{ik}\times p_{jk}} \tag{4}\]
\[=\frac{1}{|Q|}\sum_{i\in\mathcal{I}_{Q}}-\log\sum_{j\in\mathcal{I}_{Q}}\cos\left(\frac{d_{\text{FR}}(\mathbf{p}_{i},\mathbf{p}_{j})}{2}\right), \tag{5}\]

where \(d_{\text{FR}}(\mathbf{p}_{i},\mathbf{p}_{j})\) is the Fisher-Rao distance between the pair of soft-predictions \((\mathbf{p}_{i},\mathbf{p}_{j})\). Furthermore, expression (4) yields a surrogate of the Mutual Information, as stated in the following result, which to the best of our knowledge is new.

**Theorem 1**.: _(Fisher-Rao as a surrogate to maximize Mutual Information) Let \((\mathbf{q}_{i})_{i\in\mathcal{I}_{Q}}\) be a collection of soft predictions corresponding to the query samples. Then, it holds that \(\forall\;0\leq\alpha\leq 1\):_

\[R_{Q}^{\text{FR}}+\log|Q|\leq R_{Q}^{I}(1)\leq R_{Q}^{I}(\alpha). \tag{6}\]

Proof.: Further details are relegated to App. A.

_Advantage of \(R_{Q}^{\text{FR}}\) over \(R_{Q}^{I}(\alpha)\):_ Similarly to \(R_{Q}^{I}(\alpha)\), \(R_{Q}^{\text{FR}}\) can be exploited to maximize the Mutual Information. However, \(R_{Q}^{\text{FR}}\) is parameter-free and thus does not require tuning \(\alpha\).
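For concreteness, the sketch below implements the resulting transductive objective \(\text{CE}-\lambda R_{Q}^{\text{FR}}\) on frozen API embeddings, following Eqs. (4)-(5). This is a minimal sketch, not the authors' released code: the linear head, the default \(\lambda=1\) and the helper names are our own choices.

```python
import torch
import torch.nn.functional as F

def fisher_rao_regularizer(logits_q):
    """R_Q^FR of Eq. (4), computed from query logits of shape (|Q|, K)."""
    p = logits_q.softmax(dim=-1)                   # soft predictions p_ik
    sqrt_p = p.sqrt()
    affinity = sqrt_p @ sqrt_p.t()                 # [i, j] = sum_k sqrt(p_ik * p_jk)
    return -torch.log(affinity.sum(dim=1)).mean()  # (1/|Q|) sum_i -log sum_j (...)

def transductive_loss(head, z_support, y_support, z_query, lam=1.0):
    """CE on the support set minus lam * R_Q^FR on the query set."""
    ce = F.cross_entropy(head(z_support), y_support)
    return ce - lam * fisher_rao_regularizer(head(z_query))

# usage: head = torch.nn.Linear(d, n_way); minimize transductive_loss with Adam
```

Since the objective \(\arg\min\,\text{CE}-\lambda R_{Q}\) maximizes the regularizer, and Theorem 1 lower-bounds the mutual-information surrogate \(R_{Q}^{I}\) by \(R_{Q}^{\text{FR}}\), this step acts in the spirit of InfoMax without any \(\alpha\) to tune.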
### Additional Few-shot Inductive Baseline

In addition to the transductive methods of Sec. 3.2, we explore three additional inductive methods for few-shot classification: prototypical networks, linear probing, and a semi-supervised classifier.

**Prototypical Networks (PT).** PT learns a metric space where the distance between two points corresponds to their degree of similarity. During inference, the distance between the query example and each class prototype is computed, and the predicted label is the class with the closest prototype. PT has been widely used in NLP and is considered a strong baseline (Snell et al., 2017; Sun et al., 2019; Gao et al., 2019).

**Linear Probing (CE).** Fine-tuning a linear head on top of a pretrained model is a popular approach to learn a classifier for classification tasks and was originally proposed in Devlin et al. (2018).

**Semi-supervised Baselines (SSL).** We additionally propose two semi-supervised baselines, both following two steps. In the first step, a classifier is trained using the support set \(\mathcal{S}\) and used to label \(\mathcal{Q}\). In the second step, the final classifier is trained on both \(\mathcal{S}\) and \(\mathcal{Q}\) with the pseudo-labels obtained from the first step.

## 4 An Enhanced Experimental Setting

### Datasets

Benchmarking the performance of FSL methods on diverse sets of datasets is critical to evaluate their generalization capabilities in a robust manner, as well as their potential for real-world applications. Previous work on FSL Karimi Mahabadi et al. (2022); Perez et al. (2021) mainly focuses on datasets with a reduced number of classes (_i.e._, \(K<5\)). Motivated by practical considerations, we choose to build a new benchmark composed of datasets with a larger number of classes. Specifically, we choose Go Emotion Demszky et al. (2020), Tweet Eval Barbieri et al. (2020), Clinc Larson et al. (2019), Banking Casanueva et al. (2020) and the Multilingual Amazon Reviews Corpus Keung et al. (2020). These datasets cover a wide range of text classification scenarios and are of various difficulty levels4. A summary of the datasets used can be found in Tab. 1.

Footnote 4: The datasets are available in the Datasets library (Lhoest et al., 2021).

\begin{table} \begin{tabular}{c c} \hline Dataset & Classes (K) \\ \hline Tweet & 20 \\ Emotion & 25 \\ Amazon & 30 \\ B77 & 77 \\ Clinc & 151 \\ \hline \end{tabular} \end{table} Table 1: Datasets Statistics.

### Model Choice

The selection of an appropriate backbone model is a critical factor in achieving high performance in few-shot NLP tasks. To ensure the validity and robustness of our findings, we have included a diverse range of transformer-based backbone models in our study, including:

1. _Three different sizes of RoBERTa-based models_ Liu et al. (2019). Similar to BERT, RoBERTa is pretrained using the cloze task Taylor (1953). We consider two sizes of the RoBERTa model, namely RoBERTa (B) with 124M parameters and RoBERTa (L) with 355M parameters, as well as DistilRoBERTa, a lighter version of RoBERTa trained through a distillation process Hinton et al. (2015), for a total of 82M parameters.
2. _Three sentence-transformer encoders_ Reimers and Gurevych (2019). Following Muennighoff et al. (2022), we consider MPNET-base Song et al. (2020), MiniLM Wang et al. (2020), and Albert Small V2 Lan et al. (2019).
3. _Multilingual models._ To address realistic multilingual scenarios, we rely on three sizes of XLM-RoBERTa Conneau et al. (2020, 2019): base (B), large (L) and XL (XL).
4. text-davinci _model_: to mimic the typical setting of API-based models, we also conduct experiments on text-davinci, which is only accessible through OpenAI's API.

### Evaluation Framework

Prior research in textual FSL typically involves sampling a low number of tasks, typically fewer than 10, from each dataset. In contrast, we utilize an episodic learning framework that generates a large number of N-shot K-way tasks. This framework has gained popularity through inductive meta-learning approaches, such as those proposed by Finn et al. (2017); Snell et al. (2017); Vinyals et al. (2016); Sung et al. (2018); Mishra et al. (2017); Rusu et al. (2019); Oreshkin et al. (2018), as it mimics the few-shot environment during evaluation and improves model robustness and generalization. In this context, episodic training implies that a different model is initialized for each generated few-shot task, and all tasks are processed independently in parallel. This approach allows the computation of more reliable performance statistics by evaluating the generalization capabilities of each method on a more diverse set of tasks. To account for the model's generalization ability, we average the results for each dataset over 1000 episodes, with the sampled classes varying in every episode. For each experiment, we report the F1-score.

## 5 Experiments

### Case Study of text-davinci

In this experiment, we investigate the performance of text-davinci in both its language-model and embedding-based forms. We assess its classification capabilities using the aforementioned baselines and explore the language model's performance when applied in an in-context learning (ICL) setup with prompting.

**Takeaways**. From Tab. 2, we observe that SSL performs comparably to CE, which is simpler to use and will be considered as the baseline in the next part of our study. Although ICL slightly outperforms CE, its implementation comes at a significant cost.
In ICL, each class requires N shots, forcing the user to send a long input query with additional instructions. This query length becomes prohibitive as the number of classes increases; on average, it is 58 times longer than using the embedding-based API in our benchmark. The lengthy input and the ICL approach make it time-consuming for generation (violating **R2**), require the user to provide labels (violating **R3**), and prevent the reuse of embeddings for future use (_e.g._, retrieval, clustering). Additionally, ICL is 60 times more expensive than CE. Thus, we discard ICL for the subsequent part of this study.

\begin{table} \begin{tabular}{l r r r r r} \hline \hline N-shots & \multicolumn{2}{c}{10} & \multicolumn{2}{c}{5} & \(|x|\) \\ K-ways & 10 & 5 & 10 & 5 & \\ \hline FR & **69.83** & **77.46** & **66.70** & **75.03** & 14.2 \\ H & 10.00 & 20.00 & 10.01 & 20.04 & 14.2 \\ I & **68.38** & **78.52** & **65.15** & **73.06** & 14.2 \\ \hline CE & 68.21 & 75.47 & 64.92 & 72.70 & 14.2 \\ PT & 67.95 & 75.41 & 64.60 & 72.50 & 14.2 \\ SSL & 68.27 & 75.55 & 64.99 & 72.75 & 14.2 \\ \hline ICL & 68.9 & 76.24 & 65.2 & 74.3 & 900 \\ \hline \hline \end{tabular} \end{table} Table 2: Aggregated performance over K, N and the different datasets for text-davinci. \(|x|\) stands for the averaged input length.

### Overall Results

**Global results:** To evaluate the effectiveness of the various few-shot methods, we conducted a comprehensive analysis of their classification performance across all datasets, all backbones, and all considered N-shot/K-way scenarios. Results are reported in Tab. 3. _An interesting observation is that the transductive approaches I and FR outperform their inductive counterparts (CE and PT)_. Notably, we found that vanilla entropy minimization, which solely relies on H, consistently underperforms in all considered scenarios. Our analysis revealed that FR surpasses traditional fine-tuning based on cross-entropy by a margin of 3.7%.

\begin{table} \begin{tabular}{l r r r r} \hline \hline N-shots & \multicolumn{2}{c}{10} & \multicolumn{2}{c}{5} \\ K-ways & 10 & 5 & 10 & 5 \\ \hline FR & **52.09** & **61.99** & **48.71** & **56.55** \\ I & 50.07 & 59.17 & 46.42 & 55.74 \\ H & 15.07 & 27.39 & 15.33 & 25.84 \\ \hline CE & 48.31 & 56.87 & 45.27 & 53.94 \\ SSL & 50.39 & 58.78 & 47.33 & 55.85 \\ PT & 47.29 & 56.05 & 44.32 & 53.20 \\ \hline \hline \end{tabular} \end{table} Table 3: Aggregated performance over K, N, the different datasets and the considered backbones.

**Mono-lingual experiment**: In order to thoroughly analyze the performance of each method, we conducted a per-dataset study, beginning with a focus on the mono-lingual datasets. Fig. 2 reveals that the global trends observed in Tab. 3 remain consistent across datasets of varying difficulty levels. Notably, we observed consistent improvements achieved by transductive regularizers (such as I or FR) over CE. However, the relative improvement is highly dependent on the specific dataset being evaluated. Specifically, FR achieves +6.5% F1-score on Banking, but only a modest +1.5% on Tweet. A strong baseline generally suggests highly discriminative features for the task, and therefore a strong upside in leveraging additional unlabeled features, and vice versa. Therefore, we hypothesize that the potential gains to be obtained through transduction correlate with the baseline's performance.5

Footnote 5: Additional multilingual results (_i.e._, on es, de, fr) can be found in Sec. B.3. They exhibit the same behavior.

Figure 2: Performance on the monolingual datasets.

### Study Under Different Data Regimes

In this experiment, we investigated the performance of the different loss functions under varying numbers of 'ways' and 'shots'. As shown in Fig. 3, we observed that increasing the number of classes ('ways') led to a decrease in F1, while increasing the number of examples per class ('shots') led to an improvement in F1. This can be explained by the fact that having more data enables the classifier to better discern the characteristics of each class. Interestingly, the relationship between the number of shots and classification F1 may not be the same for all classes or all loss functions. Fig. 3 shows that some loss functions (e.g., FR on Banking) benefited greatly from adding a few shots, while others did not show as much improvement. However, this variability depends on the specific dataset and language being used, as different classes may have different levels of complexity and variability, and some may be inherently easier or harder to classify than others.

### Ablation Study on Backbones

In this experiment, we examined how the different loss functions perform when increasing the number of parameters in various models. The results, presented in Fig. 4, show the average performance across the experiments and are organized by loss function. We observed an _inverse scaling law_ for both the RoBERTa and XLM-RoBERTa families of models, where increasing the number of parameters led to a decrease in performance for the losses tested. However, within the same family, we observe that the superiority of FR remains consistent. An interesting finding from Fig. 4 is that the transductive regularization technique using FR outperforms the other methods on text-davinci. This highlights the effectiveness of FR in improving the performance of the model and suggests that transductive regularization may be a promising approach for optimizing language models.

### Practical Considerations

In this experiment, we adopt a practical standpoint and aim to evaluate the effectiveness of an API model, specifically text-davinci. In Tab. 4, we report the training speed for one episode on a Mac CPU. Overall, we observed that the transductive losses are slower as they necessitate the computation of the loss on the query set, whereas PT is faster as it does not involve any optimization. Furthermore, we note that FR is comparable in speed to I. To put these results in perspective, we can compare our method with existing approaches (in the light of **R2**). For instance, PET (Schick and Schutze, 2020) entails a training time of 20 minutes on an A100, while ADAPET (Tam et al., 2021) necessitates 10 minutes on the same hardware.

## 6 Conclusions

This paper presents a novel FSL framework that utilizes API models while meeting critical constraints of real-world applications (i.e., **R1**, **R2**, **R3**). This approach is particularly appealing as it shifts the computational requirements (**R2**), eliminating the need for heavy computations on the user side and reducing the cost of embedding. To provide a better understanding, embedding over 400k sequences costs as little as 7 dollars. In this scenario, our research highlights the potential of transductive losses, which have previously been disregarded by the NLP community.
A candidate loss is the Fisher-Rao distance, which is parameter-free and could serve as a simple baseline in the future.

\begin{table} \begin{tabular}{l c} \hline Loss & CPU Time \\ \hline CE & 0.45s \\ FR & 0.83s \\ H & 0.75s \\ I & 0.83s \\ PT & 0.01s \\ SSL & 0.80s \\ \hline \end{tabular} \end{table} Table 4: Training time for 1 episode on an M1 CPU.

Figure 4: Impact of model size.

Figure 3: The effect of ways and shots on test performance on monolingual (left) and multilingual (right) datasets.

## 7 Limitations

We are optimistic that our research will have a positive impact on society. Nonetheless, it is essential to acknowledge the limitations of API-based few-shot classification models despite their promising results in various tasks. Firstly, the performance of the introduced methods is heavily dependent on the quality of the available API models. If the API models do not provide sufficient information or lack diversity, the introduced methods may struggle to accurately classify input texts. Secondly, the black-box nature of the backbone limits the interpretability of API-based few-shot classification methods, which may hinder their adoption. Ultimately, the aim of this work is to establish a baseline for future research on transductive inference. As a result, not all existing transductive methods are compared in this study.

## Acknowledgements

This work was performed using HPC resources from GENCI-IDRIS (Grants 2022-AD01101838, 2023-103256 and 2023-101838).
2304.09687
Self-consistent multi-component simulation of plasma turbulence and neutrals in detached conditions
Simulations of high-density deuterium plasmas in a lower single-null magnetic configuration based on a TCV discharge are presented. We evolve the dynamics of three charged species (electrons, D$^{+}$ and D$_{2}^{+}$), interacting with two neutrals species (D and D$_2$) through ionization, charge-exchange, recombination and molecular dissociation processes. The plasma is modelled by using the drift-reduced fluid Braginskii equations, while the neutral dynamics is described by a kinetic model. To control the divertor conditions, a D$_2$ puffing is used and the effect of increasing the puffing strength is investigated. The increase in fuelling leads to an increase of density in the scrape-off layer and a decrease of the plasma temperature. At the same time, the particle and heat fluxes to the divertor target decrease and the detachment of the inner target is observed. The analysis of particle and transport balance in the divertor volume shows that the decrease of the particle flux is caused by a decrease of the local neutral ionization together with a decrease of the parallel velocity, both caused by the lower plasma temperature. The relative importance of the different collision terms is assessed, showing the crucial role of molecular interactions, as they are responsible for increasing the atomic neutral density and temperature, since most of the D neutrals are produced by molecular activated recombination and D$_2$ dissociation. The presence of strong electric fields in high-density plasmas is also shown, revealing the role of the $E \times B$ drift in setting the asymmetry between the divertor targets. Simulation results are in agreement with experimental observations of increased density decay length, attributed to a decrease of parallel transport, together with an increase of plasma blob size and radial velocity.
D. Mancini, P. Ricci, N. Vianello, G. Van Parys, D. S. Oliveira
2023-04-19T14:26:03Z
http://arxiv.org/abs/2304.09687v2
# Self-consistent multi-component simulation of plasma turbulence and neutrals in detached conditions ###### Abstract Simulations of high-density deuterium plasmas in a lower single-null magnetic configuration based on a TCV discharge are presented. We evolve the dynamics of three charged species (electrons, D\({}^{+}\) and D\({}_{2}^{+}\)), interacting with two neutrals species (D and D\({}_{2}\)) through ionization, charge-exchange, recombination and molecular dissociation processes. The plasma is modelled by using the drift-reduced fluid Braginskii equations, while the neutral dynamics is described by a kinetic model. To control the divertor conditions, a D\({}_{2}\) puffing is used and the effect of increasing the puffing strength is investigated. The increase in fuelling leads to an increase of density in the scrape-off layer and a decrease of the plasma temperature. At the same time, the particle and heat fluxes to the divertor target decrease and the detachment of the inner target is observed. The analysis of particle and transport balance in the divertor volume shows that the decrease of the particle flux is caused by a decrease of the local neutral ionization together with a decrease of the parallel velocity, both caused by the lower plasma temperature. The relative importance of the different collision terms is assessed, showing the crucial role of molecular interactions, as they are responsible for increasing the atomic neutral density and temperature, since most of the D neutrals are produced by molecular activated recombination and D\({}_{2}\) dissociation. The presence of strong electric fields in high-density plasmas is also shown, revealing the role of the \(E\times B\) drift in setting the asymmetry between the divertor targets. Simulation results are in agreement with experimental observations of increased density decay length, attributed to a decrease of parallel transport, together with an increase of plasma blob size and radial velocity. _Keywords :_ plasma turbulence, neutral interactions, GBS, high-density ## 1 Introduction In order to operate within the constraint imposed by the materials used for the plasma facing components, future fusion reactors will need to work in regimes where a large fraction of the power is dissipated via radiation [1, 2, 3]. This can be achieved by operating the divertor in detached conditions, reached in present devices for example by increasing the core density, where a reduction of target temperature, heat and particle fluxes to the walls is observed [4, 5, 1]. This reduction is largely determined by the plasma-neutral interactions present at low temperatures, \(T\lesssim 5\) eV, with an important role played by molecules as sink of particles, momentum and energy [4, 6, 7]. Indeed, neutral atoms and molecules can be ionized at these temperatures, generating atomic and molecular ions at the cost of the ionisation energy, or participate in recombination and charge-exchange reactions, which act as a particle and momentum sink. Molecules can also undergo dissociative processes, increasing the channels for ionization and recombination, e.g. through molecular activated recombination (MAR) reactions. Even if the overall importance of MAR reactions in high density discharges is still debated [8], their role as ion sink is shown to be dominant compared to the atomic ion recombination [9, 10, 7], producing excited atoms that contribute to the total radiative losses [11, 12]. 
Moreover, molecular interactions contribute to momentum losses and are expected to play an important role in the transport dynamics and, as a consequence, in the asymmetries observed between the inner and outer divertor targets [13, 14] and in the dependence of the detachment threshold on the divertor leg length [1]. The importance of molecules in detachment calls for multi-component simulations that include molecular species.

Multi-component simulations of a tokamak plasma are usually based on fluid-diffusive models, which consider a version of the Braginskii fluid equations simplified by modelling cross-field transport through empirical anomalous diffusion coefficients. The plasma dynamics is coupled with a kinetic Monte-Carlo model for the neutral dynamics. This approach is used in several modelling studies of detachment. For example, the SOLPS-ITER code [15] is used to model a TCV density ramp in Ref. [10] and the ASDEX detachment regimes in Ref. [16], while deuterium molecular emissions in DIII-D ohmic discharges are studied by using EDGE2D-EIRENE [12].

Despite the significant progress obtained by using fluid-diffusive models, simulating the plasma-neutral reactions self-consistently with turbulence is crucial to improve our predictive capabilities and, ultimately, the control of detachment [1, 7]. Several codes are able to capture the turbulent plasma dynamics by using fluid and gyrofluid models, such as BOUT++ [17], FELTOR [18], GRILLIX [19], GDB [20], GBS [21, 22, 23] and TOKAM3X [24]. However, multi-component turbulent plasma simulations are very recent. They are used, for example, in the analysis of carbon impurity dynamics with SOLEDGE3X (a combination of SOLEDGE2D and TOKAM3X) [25] or in the simulation of a gas puff imaging diagnostic in a limited magnetic configuration with GBS [26]. Single-seeded blobs are studied by using the multi-species version of FELTOR [27], the multi-species model implemented in the Hermes-3 module of the BOUT++ code [28], and the nHesel code, which simulates a single-species plasma with multiple species of neutrals modelled with a fluid approach [29, 30].

In this work, we present the first turbulence simulations of a deuterium plasma including molecules in a diverted tokamak geometry. The plasma we consider is composed of electrons and two ion species, D\({}^{+}\) and D\({}_{2}^{+}\), coupled with a kinetic neutral model that includes the dynamics of two deuterium neutral species, D and D\({}_{2}\). The plasma and neutral models are described in Ref. [22]. The simulations are carried out with the GBS code, generalized here to perform multi-component simulations of the full tokamak plasma volume, considering a diverted magnetic configuration and retaining the SOL-edge-core interplay [23]. The solution of a kinetic model for the neutrals allows us to simulate the neutral dynamics self-consistently, without introducing the ad-hoc diffusion coefficients required by fluid approaches. The interplay between molecular interactions, plasma target profiles and turbulent transport is investigated in a lower single-null L-mode discharge, with increasing plasma core density. Understanding these processes in the L-mode confinement regime is a first essential step, since it simplifies both the experimental and the numerical effort, mitigating the need to understand the transient phenomena induced, e.g., by edge localized modes.
The outcome of two different simulations is presented, where the electron density at the separatrix is increased by a factor of two by varying the intensity of the D\({}_{2}\) gas puff. With higher density, we find a steady-state scenario where the inner strike point (ISP) presents a reduction of particle and heat fluxes, with large plasma pressure gradient along the magnetic field lines, which recalls one of the most important features of detached conditions [5]. Our simulations show that molecular interactions affect the plasma dynamics increasing the D density in the divertor volume through MAR, modifying the average D temperature and ultimately decreasing the plasma temperature via ionization and charge-exchange reactions. The increase in plasma collisionality due to lower temperature establishes strong electric fields in the SOL, with an associated \(E\times B\) drift, which increases the plasma asymmetry between the two targets. In addition, we observe the formation of a density shoulder at the outer mid-plane (OMP) [31], due to the increase of turbulent transport observed in high resistivity scenarios [31, 32, 33, 34]. The present paper is organised as follows. After the Introduction, in Sec. 2 we introduce the model used to self-consistently simulate plasma turbulence and neutral dynamics, as well as its implementation in the GBS code and the simulation setup. Sec. 3 provides an overview of the results obtained from our simulations, focusing on the analysis of the density, temperature and pressure profiles, with particular attention to the role played by neutral-plasma interaction terms and the importance of molecules. The analysis of the fluxes to the target and the assessment of the detachment conditions are described in Sec. 4. The conclusions follow. ## 2 Simulation model and set-up The simulations presented in this study are carried out with the GBS code, a three-dimensional, flux-driven code used to study plasma turbulence in the tokamak boundary [21, 23]. GBS was initially developed to simulate basic plasma physics experiments [35] and then ported to the geometry of the tokamak boundary, first in limited [36] and later in diverted configurations [37]. GBS can now perform simulations of three dimensional magnetic equilibrium configurations such as stellarators [38]. In GBS the plasma description is provided by the drift-reduced Braginskii equations [39] coupled to a self-consistent kinetic neutral model [40]. Thanks to recent efforts, both plasma and neutral models are now extended to simulate multiple species [22]. The results we discuss in the present paper are based on simulations of the dynamics of five species (D\({}^{+}\), D\({}_{2}^{+}\), electrons, D and D\({}_{2}\)) in a diverted configuration. In the following, we first describe the plasma and then the neutral model. Finally, we turn to the setup of the simulations presented in this work. ### The plasma model The model of the three plasma species (D\({}^{+}\), D\({}_{2}^{+}\) and electrons) is based on the Braginskii fluid equations [41], with the multi-species closure proposed by Zhdanov [42] that include plasma-neutral collision terms in the form of Krook operators [22]. In our model, we consider the drift-reduced approximation [39], i.e. 
the limit of turbulent time scales slower than the ion cyclotron time scale, \(\Omega_{ci}\tau_{\rm turb}\gg 1\), and turbulent scale lengths larger than the ion Larmor radius, \(k_{\perp}\rho_{i}\ll 1\), with \(\Omega_{ci}=eB/m_{i}\) and \(\rho_{si}=c_{si}/\Omega_{ci}\) the cyclotron frequency and Larmor radius are defined for each ion species \(i=\rm D^{+},D_{2}^{+}\). Within these hypotheses, the component of the velocity perpendicular to the magnetic field is written as \(\mathbf{v}_{\perp i}=\mathbf{v}_{E\times B}+\mathbf{v}_{di}+\mathbf{v}_{{\rm pol },i}+\mathbf{v}_{{\rm fric},i}\), where \(\mathbf{v}_{E\times B}=(\mathbf{E}\times\mathbf{B})/B^{2}\) is the \(E\times B\) drift, \(\mathbf{v}_{di}=(\mathbf{B}\times\nabla p_{i})/(en_{i}B^{2})\) the diamagnetic drift, \(\mathbf{v}_{{\rm pol},i}\) the polarization drift and \(\mathbf{v}_{{\rm fric},i}\) the drift due to friction between different ion species and neutrals. The detailed expressions of the velocities are given in Refs. [23, 22]. The electron perpendicular velocity is approximated by its leading order component \(\mathbf{v}_{\perp e}=\mathbf{v}_{E\times B}+\mathbf{v}_{de}\). Exploiting the collisional Zdhanov closure proposed in Refs. [25] and [42], with the approximation of \(n_{\rm D_{2}^{+}}/n_{\rm D^{+}}\ll 1\) proposed in Ref. [22], the plasma equations implemented in GBS take the form: \[\frac{\partial n_{\rm e}}{\partial t}= -\frac{\rho_{*}^{-1}}{B}[\phi,n_{\rm e}]+\frac{2}{B}\left[C(p_{ \rm e})-n_{\rm e}C(\phi)\right]-\nabla_{\parallel}(n_{\rm e}v_{\parallel{\rm e }})+\mathcal{D}_{n_{\rm e}}\nabla_{\perp}^{2}n_{\rm e}+S_{n_{\rm e}}\] \[+n_{D}\nu_{{\rm iz},{\rm D}}-n_{\rm D^{+}}\nu_{{\rm rec},{\rm D^{+ }}}+n_{\rm D_{2}}\nu_{{\rm iz},{\rm D_{2}}}-n_{\rm D_{2}^{+}}\nu_{{\rm rec},{ \rm D_{2}^{+}}} \tag{1}\] \[+n_{\rm D_{2}}\nu_{{\rm diss-iz},{\rm D_{2}}}+n_{\rm D_{2}^{+}} \nu_{{\rm diss-iz},{\rm D_{2}^{+}}}-n_{\rm D_{2}^{+}}\nu_{{\rm diss-rec},{ \rm D_{2}^{+}}}\quad,\] \[\frac{\partial n_{\rm D_{2}^{+}}}{\partial t}= -\frac{\rho_{*}^{-1}}{B}[\phi,n_{\rm D_{2}^{+}}]-\nabla_{\parallel}(n _{\rm D_{2}^{+}}v_{\parallel{\rm D_{2}^{+}}})-\frac{2}{B}\left[C(p_{\rm D_{2}^{ +}})+n_{\rm D_{2}^{+}}C(\phi)\right] \tag{2}\] \[+\mathcal{D}_{n_{\rm D_{2}^{+}}}\nabla_{\perp}^{2}n_{\rm D_{2}^{ +}}+S_{n_{\rm D_{2}^{+}}}+n_{\rm D_{2}}\nu_{\rm iz,D_{2}}-n_{\rm D_{2}^{+}}\nu _{\rm rec,D_{2}^{+}}+n_{\rm D_{2}^{+}}\nu_{\rm cx,D_{2}-D^{+}}\] \[-n_{\rm D^{+}}\nu_{\rm cx,D-D_{2}^{+}}-n_{\rm D_{2}^{+}}\left(\nu _{\rm diss-iz,D_{2}^{+}}+\nu_{\rm diss,D_{2}^{+}}+\nu_{\rm diss-rec,D_{2}^{+}} \right)\quad,\] \[\frac{\partial\Omega}{\partial t}= -\frac{\rho_{*}^{-1}}{B}\nabla\cdot\left([\phi,\omega_{\rm D^{+} }]+2\left[\phi,\omega_{\rm D_{2}^{+}}\right]\right)-\nabla\cdot\left(v_{ \parallel{\rm D^{+}}}\nabla_{\parallel}\omega_{\rm D^{+}}+v_{\parallel{\rm D _{2}^{+}}}\nabla_{\parallel}\omega_{\rm D_{2}^{+}}\right)\] (3) \[+2BC(p_{\rm e}+p_{\rm D^{+}}+p_{\rm D_{2}^{+}})+B^{2}\nabla_{ \parallel}j_{\parallel}+\frac{B}{3}C(G_{\rm D^{+}}+G_{\rm D_{2}^{+}})\] \[+\eta_{0\Omega}\nabla_{\parallel}^{2}\Omega+\mathcal{D}_{\perp \Omega}\nabla_{\perp}^{2}\Omega-\nabla\cdot\left[\frac{2n_{\rm D_{2}}}{n_{\rm D _{2}^{+}}}\left(\nu_{\rm cx,D_{2}}+\nu_{\rm iz,D_{2}}+\nu_{\rm cx,D_{2}-D^{+}} \right)\omega_{\rm D_{2}^{+}}\right]\] \[-\nabla\cdot\left[\frac{n_{\rm D^{+}}}{n_{\rm D^{+}}}\left(\nu_{ \rm cx,D}+\nu_{\rm iz,D}+\nu_{\rm cx,D-D_{2}^{+}}\right)\omega_{\rm D^{+}} \right]-\nabla\cdot\left[\frac{n_{\rm D_{2}}}{n_{\rm D^{+}}}\nu_{\rm di-iz,D_{ 2}}\omega_{\rm 
D^{+}}\right]\] \[+\nabla\cdot\left[\frac{n_{\rm D_{2}^{+}}}{n_{\rm D^{+}}}\left(2 \nu_{\rm di-iz,D_{2}^{+}}+\nu_{\rm di,D_{2}^{+}}\right)\left(\omega_{\rm D_{2} ^{+}}-\omega_{\rm D^{+}}\right)\right]\quad,\] \[\frac{\partial U_{\parallel{\rm e}}}{\partial t}= -\frac{\rho_{*}^{-1}}{B}[\phi,v_{\parallel{\rm e}}]+\frac{m_{ \rm D^{+}}}{m_{\rm e}}\left[\nu j_{\parallel}+\nabla_{\parallel}\phi-\frac{ \nabla_{\parallel}p_{\rm e}}{n_{\rm e}}-\frac{2}{3n_{\rm e}}\nabla_{\parallel }G_{\rm e}-0.71\nabla_{\parallel}T_{\rm e}\right]\] (4) \[-v_{\parallel{\rm e}}\nabla_{\parallel}v_{\parallel{\rm e}}+ \mathcal{D}_{v_{\parallel{\rm e}}}\nabla_{\perp}^{2}v_{\parallel{\rm e}}+ \frac{1}{n_{\rm e}}[n_{\rm D^{+}}\left(2\nu_{\rm iz,D}+\nu_{\rm e-D}\right) \left(v_{\parallel{\rm D^{+}}}-v_{\parallel{\rm e}}\right)\] \[+n_{\rm D_{2}}\left(2\nu_{\rm iz,D_{2}}+\nu_{\rm e-D_{2}}+2\nu_{ \rm diss-iz,D_{2}}+\nu_{\rm diss,D_{2}}\right)\left(v_{\parallel{\rm D_{2} }}-v_{\parallel{\rm e}}\right)\] \[+n_{\rm D_{2}^{+}}\left(2\nu_{\rm diss-iz,D_{2}^{+}}+\nu_{\rm diss,D_{2}^{+}}\right)\left(v_{\parallel{\rm D_{2}^{+}}}-v_{\parallel{\rm e}} \right)]\quad,\] \[\frac{\partial v_{\parallel{\rm D^{+}}}}{\partial t}= -\frac{\rho_{*}^{-1}}{B}[\phi,v_{\parallel{\rm D^{+}}}]-v_{ \parallel{\rm D^{+}}}\nabla_{\parallel}v_{\parallel{\rm D^{+}}}-\nabla_{ \parallel}\phi-\frac{\nabla_{\parallel}p_{\rm D}^{+}}{n_{\rm D^{+}}}\] (5) \[-\frac{2}{3n_{\rm D^{+}}}\nabla_{\parallel}G_{\rm D^{+}}+0.71 \frac{n_{\rm e}}{n_{\rm D^{+}}}\nabla_{\parallel}T_{\rm e}-\nu\frac{n_{\rm e} }{n_{\rm D^{+}}}j_{\parallel}+\mathcal{D}_{v_{\parallel{\rm D^{+}}}}\nabla_{ \perp}^{2}v_{\parallel{\rm D^{+}}}\] \[+\frac{1}{n_{\rm D^{+}}}[n_{\rm D}\left(\nu_{\rm iz,D}+\nu_{\rm cx,D}+\nu_{\rm cx,D-D_{2}^{+}}\right)\left(v_{\parallel{\rm D}}-v_{\parallel{ \rm D^{+}}}\right)\] \[+n_{\rm D_{2}^{+}}\left(2\nu_{\rm diss-iz,D_{2}^{+}}+\nu_{\rm diss,D_{2}^{+}}\right)\left(v_{\parallel{\rm D_{2}^{+}}}-v_{\parallel{\rm D^{+}}}\right)\] \[+n_{\rm D_{2}}\nu_{\rm diss-iz,D_{2}}\left(v_{\parallel{\rm D_{2} }}-v_{\parallel{\rm D^{+}}}\right)]\quad,\] \[\frac{\partial v_{\parallel{\rm D_{2}^{+}}}}{\partial t}= -\frac{\rho_{*}^{-1}}{B}[\phi,v_{\parallel{\rm D_{2}^{+}}}]-v_{ \parallel{\rm D_{2}^{+}}}\nabla_{\parallel}v_{\parallel{\rm D_{2}^{+}}}+\frac{ 1}{2}\left[-\nabla_{\parallel}\phi-\frac{\nabla p_{\rm D_{2}^{+}}}{n_{\rm D _{2}^{+}}}-\frac{2}{3n_{\rm D_{2}^{+}}}\nabla_{\parallel}G_{\rm D_{2}^{+}}\right]\] (6) \[+\mathcal{D}_{v_{\parallel{\rm D_{2}^{+}}}}\nabla_{\perp}^{2}v_{ \parallel{\rm D_{2}^{+}}}+\frac{n_{\rm D_{2}}}{n_{\rm D_{2}^{+}}}(\nu_{\rm iz,D_{2}}+\nu_{\rm cx,D_{2}}+\nu_{\rm cx,D_{2}-D^{+}})(v_{\parallel{\rm D_{2}^{+}} }-v_{\parallel{\rm D_{2}^{+}}})\quad,\] \[\frac{\partial T_{\rm e}}{\partial t}= -\frac{\rho_{*}^{-1}}{B}[\phi,T_{\rm e}]-v_{\parallel{\rm e}}\nabla_ {\parallel}T_{\rm e}+\frac{4T_{\rm e}}{3B}\left[\frac{7}{2}C(T_{\rm e})+\frac{ T_{\rm e}}{n_{\rm e}}C(n_{\rm e})-C(\phi)\right]-\frac{2T_{\rm e}}{3}\nabla_{ \parallel}v_{\parallel{\rm e}} \tag{7}\] \[+\frac{2}{3n_{\rm e}}\left[\frac{1.62}{\nu}\nabla_{\parallel} \left(n_{\rm e}T_{\rm e}\nabla_{\parallel}T_{\rm e}\right)-0.71\nabla_{ \parallel}\left(T_{\rm e}j_{\parallel}\right)\right]\] \[+\chi_{\perp{\rm e}}\nabla_{\perp}^{2}T_{\rm e}+\nabla_{ \parallel}\left(\chi_{\parallel{\rm e}}\nabla_{\parallel}T_{\rm e}\right)+S_ {T_{\rm e}}\] \[+\frac{1}{n_{\rm e}}\{n_{\rm D}\nu_{{\rm iz},{\rm D}}\left[- \frac{2}{3}E_{{\rm iz},{\rm D}}-T_{\rm e}+\frac{m_{\rm e}}{m_{{\rm D}^{+}}}v_ {\parallel{\rm 
e}}\left(v_{\parallel{\rm e}}-\frac{4}{3}v_{\parallel{\rm D}^{ +}}\right)\right]\] \[+n_{{\rm D}_{2}}\nu_{{\rm iz},{\rm D}_{2}}\left[-\frac{2}{3}E_{{ \rm iz},{\rm D}_{2}}-T_{\rm e}+\frac{m_{\rm e}}{m_{{\rm D}^{+}}}v_{\parallel {\rm e}}\left(v_{\parallel{\rm e}}-\frac{4}{3}v_{\parallel{\rm D}_{2}^{+}} \right)\right]\] \[+n_{{\rm D}_{2}}\nu_{{\rm diss},{\rm D}_{2}}\left[-\frac{2}{3}E_ {{\rm diss},{\rm D}_{2}}+\frac{2}{3}\frac{m_{\rm e}}{m_{{\rm D}^{+}}}v_{ \parallel{\rm e}}\left(v_{\parallel{\rm e}}-v_{\parallel{\rm D}_{2}^{+}} \right)\right]\] \[+n_{{\rm D}_{2}}\nu_{{\rm diss}-{\rm iz},{\rm D}_{2}}\left[- \frac{2}{3}E_{{\rm diss}-{\rm iz},{\rm D}_{2}}-T_{\rm e}+\frac{m_{\rm e}}{m_{ {\rm D}^{+}}}v_{\parallel{\rm e}}\left(v_{\parallel{\rm e}}-\frac{4}{3}v_{ \parallel{\rm D}_{2}^{+}}\right)\right]\] \[+n_{{\rm D}_{2}^{+}}\nu_{{\rm diss},{\rm D}_{2}^{+}}\left[- \frac{2}{3}E_{{\rm diss},{\rm D}_{2}^{+}}+\frac{2}{3}\frac{m_{\rm e}}{m_{{ \rm D}^{+}}}v_{\parallel{\rm e}}\left(v_{\parallel{\rm e}}-v_{\parallel{\rm D}_ {2}^{+}}\right)\right]\] \[+n_{{\rm D}_{2}^{+}}\nu_{{\rm diss}-{\rm iz},{\rm D}_{2}^{+}} \left[-\frac{2}{3}E_{{\rm diss}-{\rm iz},{\rm D}_{2}^{+}}-T_{\rm e}+\frac{m_{ \rm e}}{m_{{\rm D}^{+}}}v_{\parallel{\rm e}}\left(v_{\parallel{\rm e}}-\frac{4 }{3}v_{\parallel{\rm D}_{2}^{+}}\right)\right]\] \[-n_{{\rm D}^{+}}\nu_{{\rm e}-{\rm D}}\frac{m_{\rm e}}{m_{{\rm D}^ {+}}}\frac{2}{3}v_{\parallel{\rm e}}(v_{\parallel{\rm D}^{+}}-v_{\parallel{\rm e }})-n_{{\rm D}_{2}^{+}}\nu_{{\rm e}-{\rm D}_{2}}\frac{m_{\rm e}}{m_{{\rm D}^{ +}}}\frac{2}{3}v_{\parallel{\rm e}}(v_{\parallel{\rm D}_{2}^{+}}-v_{\parallel {\rm e}})\}\quad,\] \[\frac{\partial T_{{\rm D}^{+}}}{\partial t}= -\frac{\rho_{*}^{-1}}{B}[\phi,T_{{\rm D}^{+}}]-v_{\parallel{\rm D} ^{+}}\nabla_{\parallel}T_{{\rm D}^{+}}+\frac{4}{3}\frac{T_{{\rm D}^{+}}}{B} \left[\frac{1}{n_{{\rm D}^{+}}}C(p_{\rm e}+p_{{\rm D}_{2}^{+}})-C(\phi)\right]\] (8) \[-\frac{2T_{{\rm D}^{+}}}{3n_{{\rm D}^{+}}}\left[\nabla_{ \parallel}\left(n_{\rm e}v_{\parallel{\rm e}}\right)-\nabla_{\parallel}\left( n_{{\rm D}_{2}^{+}}v_{\parallel{\rm D}_{2}^{+}}\right)-v_{\parallel{\rm D}^{+}} \nabla_{\parallel}\left(n_{{\rm D}_{2}^{+}}\right)\right]\] \[-\frac{10}{3}\frac{T_{{\rm D}^{+}}}{B}C(T_{{\rm D}^{+}})+\frac{2 }{3n_{{\rm D}^{+}}}\frac{2.32}{\sqrt{2\nu}}\sqrt{\frac{m_{\rm e}}{m_{{\rm D}^{ +}}}}\nabla_{\parallel}\left(n_{\rm e}T_{{\rm D}^{+}}\nabla_{\parallel}T_{{ \rm D}^{+}}\right)\] \[+\chi_{\perp{\rm D}^{+}}\nabla_{\perp}^{2}T_{{\rm D}^{+}}+\nabla_ {\parallel}\left(\chi_{\parallel{\rm D}^{+}}\nabla_{\parallel}T_{{\rm D}^{+}} \right)+S_{T_{{\rm D}^{+}}}\] \[+\frac{1}{n_{{\rm D}^{+}}}\left\{n_{{\rm D}}\left(\nu_{{\rm iz}, {\rm D}}+\nu_{{\rm cx},{\rm D}}+\nu_{{\rm cx},{\rm D}-{\rm D}_{2}^{+}}\right) \left[T_{{\rm D}}-T_{{\rm D}^{+}}+\frac{1}{3}\left(v_{\parallel{\rm D}}-v_{ \parallel{\rm D}^{+}}\right)^{2}\right]\] \[+n_{{\rm D}_{2}}\nu_{{\rm diss}-{\rm iz},{\rm D}_{2}}\left[T_{{ \rm D},{\rm diss}-{\rm iz}({\rm D}_{2})}-T_{{\rm D}^{+}}+\frac{1}{3}\left(v_ {\parallel{\rm D}_{2}}-v_{\parallel{\rm D}^{+}}\right)^{2}\right]\] \[+2n_{{\rm D}_{2}^{+}}\nu_{{\rm diss}-{\rm iz},{\rm D}_{2}^{+}} \left[T_{{\rm D},{\rm diss}-{\rm iz}\left({\rm D}_{2}^{+}}\right)-T_{{\rm D}^{+} }+\frac{1}{3}\left(v_{\parallel{\rm D}_{2}^{+}}-v_{\parallel{\rm D}^{+}} \right)^{2}\right]\] \[+n_{{\rm D}_{2}^{+}}\nu_{{\rm diss},{\rm D}_{2}^{+}}\left[T_{{ \rm D},{\rm diss}\left({\rm D}_{2}^{+}}\right)-T_{{\rm D}^{+}}+\frac{1}{3} \left(v_{\parallel{\rm D}_{2}^{+}}-v_{\parallel{\rm D}^{+}}\right)^{2} 
\right]\right\}\quad,\]

\[\frac{\partial T_{\rm D_{2}^{+}}}{\partial t}= -\frac{\rho_{*}^{-1}}{B}[\phi,T_{\rm D_{2}^{+}}]-v_{\parallel\rm D_{2}^{+}}\nabla_{\parallel}T_{\rm D_{2}^{+}}-\frac{4}{3}\frac{T_{\rm D_{2}^{+}}}{B}\left[\frac{1}{n_{\rm D_{2}^{+}}}C(p_{\rm D_{2}^{+}})+C(\phi)\right] \tag{9}\]
\[-\frac{10}{3}\frac{T_{\rm D_{2}^{+}}}{B}C(T_{\rm D_{2}^{+}})-\frac{2T_{\rm D_{2}^{+}}}{3}\nabla_{\parallel}v_{\parallel\rm D_{2}^{+}}+\frac{2}{3n_{\rm D_{2}^{+}}}\frac{0.92}{\sqrt{2}\nu}\sqrt{\frac{m_{\rm e}}{m_{\rm D^{+}}}}\nabla_{\parallel}\left(n_{\rm e}T_{\rm D^{+}}\nabla_{\parallel}T_{\rm D^{+}}\right)\]
\[+\chi_{\perp\rm D_{2}^{+}}\nabla_{\perp}^{2}T_{\rm D_{2}^{+}}+\nabla_{\parallel}\left(\chi_{\parallel\rm D_{2}^{+}}\nabla_{\parallel}T_{\rm D_{2}^{+}}\right)+S_{T_{\rm D_{2}^{+}}}\]
\[+\frac{n_{\rm D_{2}}}{n_{\rm D_{2}^{+}}}(\nu_{\rm cx,D_{2}}+\nu_{\rm iz,D_{2}}+\nu_{\rm cx,D_{2}-D^{+}})\left[T_{\rm D_{2}}-T_{\rm D_{2}^{+}}+\frac{2}{3}(v_{\parallel\rm D_{2}}-v_{\parallel\rm D_{2}^{+}})^{2}\right]\quad,\]

solved together with the Poisson and Ampère equations

\[\nabla\cdot\left[\left(n_{\rm D^{+}}+2n_{\rm D_{2}^{+}}\right)\nabla_{\perp}\phi\right]=\Omega-\tau\nabla_{\perp}^{2}\left(p_{\rm D^{+}}+2p_{\rm D_{2}^{+}}\right) \tag{10}\]

and

\[\left(\nabla_{\perp}^{2}-\frac{\beta_{\rm e0}}{2}\frac{m_{\rm D^{+}}}{m_{\rm e}}n_{\rm e}\right)v_{\parallel\rm e}=\nabla_{\perp}^{2}U_{\parallel\rm e}-\frac{\beta_{\rm e0}}{2}\frac{m_{\rm D^{+}}}{m_{\rm e}}n_{\rm D^{+}}v_{\parallel\rm D^{+}}+\frac{\beta_{\rm e0}}{2}\frac{m_{\rm D^{+}}}{m_{\rm e}}\overline{j}_{\parallel}\quad, \tag{11}\]

while the atomic ion density is evaluated by imposing quasi-neutrality, \(n_{\rm D^{+}}=n_{\rm e}-n_{\rm D_{2}^{+}}\). In Eqs. (1-11), \(U_{\parallel\rm e}=V_{\parallel\rm e}+e\psi/m_{\rm e}\) is the sum of the electron inertia and electromagnetic induction contributions, \(p_{a}=n_{a}T_{a}\) is the pressure of species \(a\), \(a={\rm e,D^{+},D_{2}^{+}}\), and \(\Omega=\Omega_{\rm D^{+}}+2\Omega_{\rm D_{2}^{+}}\) is the plasma vorticity, with \(\Omega_{i}=\nabla\cdot\omega_{i}=\nabla\cdot(n_{i}\nabla_{\perp}\phi+\nabla_{\perp}p_{i})\) the contribution of each ion species. The operator \([\phi,f]={\bf b}\cdot(\nabla\phi\times\nabla f)\) is the \({\bf E}\times{\bf B}\) convective operator, \(C(f)=B/2[\nabla\times({\bf b}/B)]\cdot\nabla f\) is the curvature operator, \(\nabla_{\parallel}f={\bf b}\cdot\nabla f\) is the parallel gradient, and \(\nabla_{\perp}^{2}f=\nabla\cdot[({\bf b}\times\nabla f)\times{\bf b}]\) is the perpendicular Laplacian, with \({\bf b}={\bf B}/B\) the unit vector in the direction of the magnetic field. The electron gyroviscous term is given by \(G_{\rm e}=-\eta_{0\rm e}\left[2\nabla_{\parallel}v_{\parallel\rm e}+C(\phi)/B-C(p_{\rm e})/(en_{\rm e}B)\right]\), while for the ion species \(G_{i}=\eta_{0i}\left[2\nabla_{\parallel}v_{\parallel i}+C(\phi)/B+C(p_{i})/(en_{i}B)\right]\).

The plasma-neutral interaction terms considered in this work are ionization, recombination, dissociation, charge-exchange and electron-neutral elastic collisions, all listed in Table 1. We consider the collisional processes that have the largest cross sections in the deuterium plasma in typical conditions of the tokamak boundary [43], with the reaction rates \(\langle v\sigma\rangle\) obtained from the AMJUEL [44] and HYDEL [45] databases.
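In practice, such databases tabulate \(\langle v\sigma\rangle\) as polynomial fits in \(\ln T\). As a rough illustration (this is not GBS source code, and the coefficients are placeholders rather than actual database entries), a reaction frequency such as \(\nu_{\rm iz,D}=n_{\rm e}\left\langle v_{\rm e}\sigma_{\rm iz,D}(v_{\rm e})\right\rangle\) can be evaluated as follows:

```python
import numpy as np

def rate_coefficient(T_eV, coeffs):
    """AMJUEL-style fit: ln<v*sigma> as a polynomial in ln(T), with T in eV.

    coeffs: fit coefficients b_0..b_n of the chosen reaction; these are
    placeholders and must be replaced by the actual database entries.
    """
    lnT = np.log(T_eV)
    return np.exp(sum(b * lnT**k for k, b in enumerate(coeffs)))

def reaction_frequency(n_e, T_eV, coeffs):
    """nu = n_e * <v*sigma>(T_e), as in the last column of Table 1."""
    return n_e * rate_coefficient(T_eV, coeffs)
```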
The reaction frequencies for ionization, recombination, elastic collisions and dissociative processes are averaged over the electron velocity distribution function, assumed Maxwellian, while those for charge-exchange processes are averaged over the ion velocity distribution function. Velocities and energies of the particles that result from the reactions are evaluated by using momentum and energy considerations, resulting in the values listed in Table 2 [22]. In particular, for an elastic collision between an electron and an atomic or molecular neutral, it is assumed that the neutral velocity is not affected by the reaction, while the electron is emitted isotropically according to a Maxwellian distribution function centered at the velocity of the incoming electron. Regarding the ionization processes, the electrons and ions are generated according to a Maxwellian distribution function centered at the fluid velocity of the incoming neutral, with the electron temperature taking into account the loss of the ionization energy, \(\left\langle E_{\rm iz,D}\right\rangle\) or \(\left\langle E_{\rm iz,D_{2}}\right\rangle\). In the simulations, we consider an effective ionization energy of \(E_{\rm iz,D}=30.0\) eV, taking into account the radiation losses associated with the ionization process [4]. For dissociation processes, we follow a similar procedure, with the reaction-specific electron energy loss assumed to be the energy necessary to excite the molecule and undergo a Franck-Condon dissociation. The D atoms generated by the dissociative-recombination reactions considered in this work, namely MAR processes, are described by a Maxwellian distribution function with average temperature \(T_{\rm D,\,diss\text{-}rec(D_{2}^{+})}\).

The simulation domain encompasses the whole tokamak plasma volume, with a rectangular poloidal cross section of vertical extension \(L_{Z}\) and radial extension \(L_{R}\), leading to a natural choice of a cylindrical coordinate system \((R,\varphi,Z)\), where \(R\) is the radial distance from the tokamak axis of symmetry, \(\varphi\) the toroidal angle and \(Z\) the vertical coordinate. The magnetic field is expressed in terms of the flux function \(\psi\), \({\bf B}=RB_{\varphi}\nabla\varphi+\nabla\psi\times\nabla\varphi\), where \(\nabla\psi\) is the direction orthogonal to the flux surface, defining a flux-aligned coordinate system \((\nabla\psi,\nabla\chi,\nabla\varphi)\), where \(\nabla\chi=\nabla\varphi\times\nabla\psi\), used in the analysis of the simulation results.
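The axisymmetric field representation above implies \(B_{R}=-(1/R)\,\partial\psi/\partial Z\) and \(B_{Z}=(1/R)\,\partial\psi/\partial R\) (up to sign conventions). The following minimal sketch, with a grid layout and names of our choosing, recovers the poloidal field from a tabulated \(\psi(R,Z)\):

```python
import numpy as np

def poloidal_field(psi, R, Z):
    """Poloidal field components from the flux function on an (R, Z) grid.

    psi: 2D array indexed as psi[iR, iZ]; R, Z: 1D grid coordinates.
    Uses B = R*B_phi*grad(phi) + grad(psi) x grad(phi).
    """
    dpsi_dR, dpsi_dZ = np.gradient(psi, R, Z)  # finite-difference derivatives
    B_R = -dpsi_dZ / R[:, None]
    B_Z = dpsi_dR / R[:, None]
    return B_R, B_Z
```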
In the following of the present paper, all quantities in Eqs. (1-11) are normalized to their reference values. Densities are normalized to the reference density \(n_{0}\), \(T_{\rm e}\) to \(T_{\rm e0}\), both \(T_{\rm D^{+}}\) and \(T_{\rm D_{2}^{+}}\) to \(T_{\rm D^{+}0}\), and parallel velocities to the sound speed \(c_{s0}=\sqrt{T_{\rm e0}/m_{\rm D^{+}}}\). The magnetic field strength \(B\) is normalized to the field value on the magnetic axis, \(B_{0}\), perpendicular lengths to the ion sound Larmor radius \(\rho_{s0}=c_{s0}/\Omega_{c{\rm D}^{+}}\), parallel lengths to the tokamak major radius \(R_{0}\), and time to \(t_{0}=R_{0}/c_{s0}\).

\begin{table} \begin{tabular}{l l l} \hline \hline **Collisional process** & **Equation** & **Reaction Frequency** \\ \hline Ionization of D & \(\rm e^{-}+D\to 2e^{-}+D^{+}\) & \(\nu_{\rm iz,D}=n_{\rm e}\left\langle v_{\rm e}\sigma_{\rm iz,D}(v_{\rm e})\right\rangle\) \\ Recombination of D\({}^{+}\) & \(\rm e^{-}+D^{+}\to D\) & \(\nu_{\rm rec,D^{+}}=n_{\rm e}\left\langle v_{\rm e}\sigma_{\rm rec,D^{+}}(v_{\rm e})\right\rangle\) \\ e\({}^{-}\)-D elastic collisions & \(\rm e^{-}+D\to e^{-}+D\) & \(\nu_{\rm e\text{-}D}=n_{\rm e}\left\langle v_{\rm e}\sigma_{\rm e\text{-}D}(v_{\rm e})\right\rangle\) \\ Ionization of D\({}_{2}\) & \(\rm e^{-}+D_{2}\to 2e^{-}+D_{2}^{+}\) & \(\nu_{\rm iz,D_{2}}=n_{\rm e}\left\langle v_{\rm e}\sigma_{\rm iz,D_{2}}(v_{\rm e})\right\rangle\) \\ Recombination of D\({}_{2}^{+}\) & \(\rm e^{-}+D_{2}^{+}\to D_{2}\) & \(\nu_{\rm rec,D_{2}^{+}}=n_{\rm e}\left\langle v_{\rm e}\sigma_{\rm rec,D_{2}^{+}}(v_{\rm e})\right\rangle\) \\ e\({}^{-}\)-D\({}_{2}\) elastic collisions & \(\rm e^{-}+D_{2}\to e^{-}+D_{2}\) & \(\nu_{\rm e\text{-}D_{2}}=n_{\rm e}\left\langle v_{\rm e}\sigma_{\rm e\text{-}D_{2}}(v_{\rm e})\right\rangle\) \\ Dissociation of D\({}_{2}\) & \(\rm e^{-}+D_{2}\to e^{-}+D+D\) & \(\nu_{\rm diss,D_{2}}=n_{\rm e}\left\langle v_{\rm e}\sigma_{\rm diss,D_{2}}(v_{\rm e})\right\rangle\) \\ Dissociative ionization of D\({}_{2}\) & \(\rm e^{-}+D_{2}\to 2e^{-}+D^{+}+D\) & \(\nu_{\rm diss\text{-}iz,D_{2}}=n_{\rm e}\left\langle v_{\rm e}\sigma_{\rm diss\text{-}iz,D_{2}}(v_{\rm e})\right\rangle\) \\ Dissociation of D\({}_{2}^{+}\) & \(\rm e^{-}+D_{2}^{+}\to e^{-}+D^{+}+D\) & \(\nu_{\rm diss,D_{2}^{+}}=n_{\rm e}\left\langle v_{\rm e}\sigma_{\rm diss,D_{2}^{+}}(v_{\rm e})\right\rangle\) \\ Dissociative ionization of D\({}_{2}^{+}\) & \(\rm e^{-}+D_{2}^{+}\to 2e^{-}+2D^{+}\) & \(\nu_{\rm diss\text{-}iz,D_{2}^{+}}=n_{\rm e}\left\langle v_{\rm e}\sigma_{\rm diss\text{-}iz,D_{2}^{+}}(v_{\rm e})\right\rangle\) \\ Dissociative recombination of D\({}_{2}^{+}\) & \(\rm e^{-}+D_{2}^{+}\to 2D\) & \(\nu_{\rm diss\text{-}rec,D_{2}^{+}}=n_{\rm e}\left\langle v_{\rm e}\sigma_{\rm diss\text{-}rec,D_{2}^{+}}(v_{\rm e})\right\rangle\) \\ Charge-exchange of D\({}^{+}\), D & \(\rm D^{+}+D\to D+D^{+}\) & \(\nu_{\rm cx,D}=n_{\rm D^{+}}\left\langle v_{\rm D^{+}}\sigma_{\rm cx,D^{+}}(v_{\rm D^{+}})\right\rangle\) \\ Charge-exchange of D\({}_{2}^{+}\), D\({}_{2}\) & \(\rm D_{2}^{+}+D_{2}\to D_{2}+D_{2}^{+}\) & \(\nu_{\rm cx,D_{2}}=n_{\rm D_{2}^{+}}\left\langle v_{\rm D_{2}^{+}}\sigma_{\rm cx,D_{2}^{+}}(v_{\rm D_{2}^{+}})\right\rangle\) \\ Charge-exchange of D\({}_{2}^{+}\), D & \(\rm D_{2}^{+}+D\to D_{2}+D^{+}\) & \(\nu_{\rm cx,D\text{-}D_{2}^{+}}=n_{\rm D_{2}^{+}}\left\langle v_{\rm D_{2}^{+}}\sigma_{\rm cx,D\text{-}D_{2}^{+}}(v_{\rm D_{2}^{+}})\right\rangle\) \\ Charge-exchange of D\({}_{2}\), D\({}^{+}\) & \(\rm D_{2}+D^{+}\to D_{2}^{+}+D\) & \(\nu_{\rm cx,D_{2}\text{-}D^{+}}=n_{\rm D^{+}}\left\langle v_{\rm D^{+}}\sigma_{\rm cx,D_{2}\text{-}D^{+}}(v_{\rm D^{+}})\right\rangle\) \\ \hline \hline \end{tabular} \end{table} Table 1: Collisional processes considered and their respective reaction rates.
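As an illustration of the normalization just introduced, the reference quantities can be computed as in the standalone sketch below; the chosen reference values are illustrative, TCV-like numbers, not the simulation's actual inputs.

```python
import numpy as np
from scipy import constants as c

m_D = 2.014 * c.atomic_mass  # deuterium ion mass [kg]

def gbs_references(n0, Te0_eV, B0, R0):
    """Reference quantities of the GBS normalization (SI units, Te0 in eV)."""
    c_s0 = np.sqrt(Te0_eV * c.e / m_D)   # reference sound speed
    Om_ci = c.e * B0 / m_D               # D+ cyclotron frequency
    rho_s0 = c_s0 / Om_ci                # ion sound Larmor radius
    t0 = R0 / c_s0                       # reference time
    rho_star = rho_s0 / R0               # normalized Larmor radius rho_*
    return dict(c_s0=c_s0, rho_s0=rho_s0, t0=t0, rho_star=rho_star)

# illustrative values only:
print(gbs_references(n0=1e19, Te0_eV=20.0, B0=0.9, R0=0.9))
```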
The dynamics is then set by the following dimensionless parameters: the normalized ion Larmor radius, \(\rho_{*}=\rho_{s0}/R_{0}\), the ion to electron temperature ratio, \(\tau=T_{\rm D^{+}0}/T_{\rm e0}\), and the normalized Spitzer resistivity \(\nu=e^{2}n_{0}R_{0}/(m_{\rm D^{+}}c_{s0}\sigma_{\parallel})=\nu_{0}T_{\rm e}^{-3/2}\), with

\[\sigma_{\parallel}=\left(1.96\frac{n_{0}e^{2}\tau_{e}}{m_{e}}\right)n=\left(\frac{5.88}{4\sqrt{2\pi}}\frac{(4\pi\epsilon_{0})^{2}}{e^{2}}\frac{T_{\rm e0}^{3/2}}{\lambda\sqrt{m_{e}}}\right)(T_{\rm e})^{3/2} \tag{12}\]

and, as a consequence,

\[\nu_{0}=\frac{4\sqrt{2\pi}}{5.88}\frac{e^{4}}{(4\pi\epsilon_{0})^{2}}\frac{\sqrt{m_{\rm e}}R_{0}n_{0}\lambda}{m_{\rm D^{+}}c_{s0}T_{\rm e0}^{3/2}}\,. \tag{13}\]

The expressions for the normalized viscosities, \(\eta_{0e}\), \(\eta_{0{\rm D}^{+}}\) and \(\eta_{0{\rm D}_{2}^{+}}\) in Eqs. (4-6), and thermal conductivities, \(\chi_{0e}\), \(\chi_{0{\rm D}^{+}}\) and \(\chi_{0{\rm D}_{2}^{+}}\) in Eqs. (7-9), can be found in Ref. [23] and are all assumed constant in this work. The normalized diffusion coefficients \(\mathcal{D}_{f}\), for each field \(f\), are introduced for numerical stability.

\begin{table} \begin{tabular}{l l l} \hline **Collisional process** & \(\mathbf{e}^{-}\) **Energy loss** & **Temperature of products** \\ \hline Ionization of D & \(\left\langle E_{\rm iz,D}\right\rangle=13.60\,{\rm eV}\) & — \\ Ionization of D\({}_{2}\) & \(\left\langle E_{\rm iz,D_{2}}\right\rangle=15.43\,{\rm eV}\) & — \\ Dissociation of D\({}_{2}\) & \(\left\langle E_{\rm diss,D_{2}}\right\rangle\simeq 14.3\,{\rm eV}\) & \(T_{\rm D,diss(D_{2})}\simeq 1.95\,{\rm eV}\) \\ Dissociative ionization of D\({}_{2}\) (\(E_{\rm e}<26\,{\rm eV}\)) & \(\left\langle E_{\rm diss\text{-}iz,D_{2}}\right\rangle\simeq 18.25\,{\rm eV}\) & \(T_{\rm D,diss\text{-}iz(D_{2})}\simeq 0.25\,{\rm eV}\) \\ Dissociative ionization of D\({}_{2}\) (\(E_{\rm e}>26\,{\rm eV}\)) & \(\left\langle E_{\rm diss\text{-}iz,D_{2}}\right\rangle\simeq 33.6\,{\rm eV}\) & \(T_{\rm D,diss\text{-}iz(D_{2})}\simeq 7.8\,{\rm eV}\) \\ Dissociation of D\({}_{2}^{+}\) & \(\left\langle E_{\rm diss,D_{2}^{+}}\right\rangle\simeq 13.7\,{\rm eV}\) & \(T_{\rm D,diss(D_{2}^{+})}\simeq 3.0\,{\rm eV}\) \\ Dissociative ionization of D\({}_{2}^{+}\) & \(\left\langle E_{\rm diss\text{-}iz,D_{2}^{+}}\right\rangle\simeq 15.5\,{\rm eV}\) & \(T_{\rm D,diss\text{-}iz(D_{2}^{+})}\simeq 0.4\,{\rm eV}\) \\ Dissociative recombination of D\({}_{2}^{+}\) & — & \(T_{\rm D,diss\text{-}rec(D_{2}^{+})}\) \\ \hline \end{tabular} \end{table} Table 2: Electron energy losses and temperatures of the reaction products for the collisional processes considered.

In our simulations, fuelling is entirely the result of self-consistent neutral ionization processes, while external electron heating is added in Eq. (7) through the \(s_{T_{\rm e}}\) source term. This temperature source is toroidally uniform and expressed as an analytical function of the flux function,

\[s_{T_{\rm e}}=\frac{s_{T0}}{2}\left[\tanh\left(-\frac{\psi(R,Z)-\psi_{T}}{\Delta_{T}}\right)+1\right]\,, \tag{14}\]

where \(\psi_{T}\) is a flux surface localized inside the LCFS.
We implement a set of magnetic pre-sheath boundary conditions at the walls where the strike points are located, i.e. the lower and the left walls, as detailed in Ref. [46] and extended in Ref. [22] to include molecular deuterium, that is \[v_{\parallel{\rm e}}=\pm c_{s}\min\left\{\exp\left(\Lambda-\frac{\phi}{T_{\rm e}}\right),\exp(\Lambda)\right\} \tag{16}\] \[v_{\parallel{\rm D^{+}}}=\pm c_{s}\sqrt{1+\frac{T_{\rm D^{+}}}{T_{\rm e}}} \tag{17}\] \[v_{\parallel{\rm D_{2}^{+}}}=\frac{v_{\parallel{\rm D^{+}}}}{\sqrt{2}} \tag{18}\] \[\partial_{s}\phi=\mp\frac{c_{s}}{\sqrt{1+\frac{T_{\rm D^{+}}}{T_{\rm e}}}}\partial_{n}v_{\parallel{\rm D^{+}}} \tag{19}\] \[\partial_{s}n_{\rm e}=\partial_{s}n_{\rm D^{+}}=\mp\frac{n_{\rm e}}{c_{s}\sqrt{1+\frac{T_{\rm D^{+}}}{T_{\rm e}}}}\partial_{n}v_{\parallel{\rm D^{+}}} \tag{20}\] \[\partial_{s}n_{\rm D_{2}^{+}}=\mp\frac{n_{\rm D_{2}^{+}}}{c_{s}\sqrt{1+\frac{T_{\rm D^{+}}}{T_{\rm e}}}}\partial_{n}v_{\parallel{\rm D_{2}^{+}}} \tag{21}\] \[\partial_{s}T_{\rm e}=\partial_{s}T_{\rm D^{+}}=\partial_{s}T_{\rm D_{2}^{+}}=0 \tag{22}\] \[\Omega=\mp\left(n_{\rm e}+n_{\rm D_{2}^{+}}\right)\sqrt{1+\frac{T_{\rm D^{+}}}{T_{\rm e}}}\,\partial_{nn}^{2}v_{\parallel{\rm D^{+}}} \tag{23}\] where \(s\) is the direction perpendicular to the vessel wall, the plus (minus) sign refers to the magnetic field pointing toward (away from) the wall, the dimensionless ion sound speed is \(c_{s}=\sqrt{T_{\rm e}}\) and \(\Lambda=\log\sqrt{m_{\rm D^{+}}/(2\pi m_{\rm e})}\simeq 3\). A set of simplified boundary conditions is also used at the top and right walls, which do not present strike points. With respect to the set of boundary conditions in Eqs. (16-23), in these cases the electrostatic potential is set to \(\phi=\Lambda T_{\rm e}\), implying \(v_{\parallel{\rm e}}=\pm c_{s}\). ### The multi-species kinetic neutral model The neutral model used in this work is based on the kinetic description introduced in a limited tokamak configuration in Ref. [40] for the case of a mono-atomic neutral species, and then extended in Ref. [22] to take into account molecular deuterium. The same approach was used in Ref. [31] to carry out simulations of the neutral dynamics in a diverted configuration with a single-species model. Here, we extend Ref. [31] to include the molecular dynamics. We underline that our model can, in principle, be extended to include an arbitrary number of species.
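For concreteness, the following minimal sketch evaluates the sheath velocities of Eqs. (16-18) in normalized units; the function and the hard-coded deuteron-to-electron mass ratio are our own illustrative choices, not part of the GBS source:

```python
import numpy as np

# Lambda = log sqrt(m_D+/(2*pi*m_e)) ~ 3 for deuterium (mass ratio 2*1836)
LAMBDA = np.log(np.sqrt(2.0 * 1836.0 / (2.0 * np.pi)))

def sheath_velocities(T_e, T_Dp, phi, sign=+1.0):
    """Parallel boundary velocities from Eqs. (16-18); sign=+1 (-1) for a
    magnetic field pointing toward (away from) the wall."""
    c_s = np.sqrt(T_e)                               # dimensionless sound speed
    # electron outflow, capped at the thermal value exp(LAMBDA)
    v_e = sign * c_s * np.minimum(np.exp(LAMBDA - phi / T_e), np.exp(LAMBDA))
    v_Dp = sign * c_s * np.sqrt(1.0 + T_Dp / T_e)    # Bohm condition for D+
    v_D2p = v_Dp / np.sqrt(2.0)                      # heavier D2+ ion, Eq. (18)
    return v_e, v_Dp, v_D2p

print(sheath_velocities(T_e=1.0, T_Dp=1.0, phi=3.0))
```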
The kinetic equation we consider to evolve the distribution function \(f_{\rm D}\) is \[\begin{split}\frac{\partial f_{\rm D}}{\partial t}+\mathbf{v}\cdot\frac{\partial f_{\rm D}}{\partial\mathbf{x}}=&-\nu_{\rm iz,D}f_{\rm D}-\nu_{\rm cx,D}\left(f_{\rm D}-\frac{n_{\rm D}}{n_{\rm D^{+}}}f_{\rm D^{+}}\right)+\nu_{\rm rec,D^{+}}f_{\rm D^{+}}\\ &+\nu_{\rm cx,D_{2}\text{-}D^{+}}\left(\frac{n_{\rm D_{2}}}{n_{\rm D^{+}}}f_{\rm D^{+}}\right)-\nu_{\rm cx,D\text{-}D_{2}^{+}}f_{\rm D}+2\nu_{\rm diss,D_{2}}f_{\rm D_{2}}+\nu_{\rm diss\text{-}iz,D_{2}}f_{\rm D_{2}}\\ &+\nu_{\rm diss,D_{2}^{+}}f_{\rm D_{2}^{+}}+2\nu_{\rm diss\text{-}rec,D_{2}^{+}}f_{\rm D_{2}^{+}}\,,\end{split} \tag{24}\] and a similar one is used for \(f_{\rm D_{2}}\), \[\begin{split}\frac{\partial f_{\rm D_{2}}}{\partial t}+\mathbf{v}\cdot\frac{\partial f_{\rm D_{2}}}{\partial\mathbf{x}}=&-\nu_{\rm iz,D_{2}}f_{\rm D_{2}}-\nu_{\rm cx,D_{2}}\left(f_{\rm D_{2}}-\frac{n_{\rm D_{2}}}{n_{\rm D_{2}^{+}}}f_{\rm D_{2}^{+}}\right)\\ &+\nu_{\rm rec,D_{2}^{+}}f_{\rm D_{2}^{+}}-\nu_{\rm cx,D_{2}\text{-}D^{+}}f_{\rm D_{2}}+\nu_{\rm cx,D\text{-}D_{2}^{+}}\left(\frac{n_{\rm D}}{n_{\rm D_{2}^{+}}}f_{\rm D_{2}^{+}}\right)\\ &-\nu_{\rm diss,D_{2}}f_{\rm D_{2}}-\nu_{\rm diss\text{-}iz,D_{2}}f_{\rm D_{2}}\,,\end{split} \tag{25}\] where \(f_{\rm D^{+}}\) and \(f_{\rm D_{2}^{+}}\) are the velocity distribution functions of the \(\rm D^{+}\) and \(\rm D_{2}^{+}\) ions and all the reaction frequencies are defined in Table 1. The formal solution of Eqs. (24-25) can be found by applying the method of characteristics, yielding \[\begin{split}f_{n}(\mathbf{x},\mathbf{v},t)=&\int_{0}^{r_{\rm b}^{\prime}}\left[\frac{S_{n}(\mathbf{x}^{\prime},\mathbf{v},t^{\prime})}{v}+\delta\left(r^{\prime}-r_{\rm b}^{\prime}\right)f_{n}(\mathbf{x}_{\rm b}^{\prime},\mathbf{v},t_{\rm b}^{\prime})\right]\\ &\times\exp\left[-\frac{1}{v}\int_{0}^{r^{\prime}}\nu_{\rm eff,n}(\mathbf{x}^{\prime\prime},t^{\prime\prime})dr^{\prime\prime}\right]dr^{\prime}\,,\end{split} \tag{26}\] for the two neutral species, \(n=\rm D\) or \(\rm D_{2}\). Equation (26) describes the distribution function of the neutrals at position \(\mathbf{x}\), velocity \(\mathbf{v}\) and time \(t\), as the result of neutrals generated at position \(\mathbf{x}^{\prime}=\mathbf{x}-r^{\prime}\mathbf{v}/v\) and time \(t^{\prime}=t-r^{\prime}/v\), where \(r^{\prime}\) is the coordinate along the characteristic connecting \(\mathbf{x}^{\prime}\) and \(\mathbf{x}\), and \(r_{\rm b}^{\prime}\) denotes the distance between the position \(\mathbf{x}\) and the intersection of the characteristic with the boundary. The term \(S_{n}\) is the volumetric source of \(\rm D\) or \(\rm D_{2}\), generated by charge-exchange, recombination and dissociation reactions. The exponential term in Eq. (26) takes into account all the processes that lead to a loss of neutrals on the way from \(\mathbf{x}^{\prime}\) to \(\mathbf{x}\). We now turn to the boundary condition for the distribution function, \(f_{n}(\mathbf{x}_{\rm b}^{\prime},\mathbf{v},t_{\rm b}^{\prime})\) in Eq. (26). In typical experimental conditions, \(\rm D\) or \(\rm D_{2}\) are emitted from the wall as a result of the recycling of the \(\rm D^{+}\) or \(\rm D_{2}^{+}\) ions impacting the wall. A fraction of the outflowing ions, \(\alpha_{\rm refl}\), is reflected as fast neutrals, with the same temperature as the impacting ions, while the rest of the ions are absorbed by the wall and re-emitted with the boundary temperature \(T_{b}\).
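The exponential factor in Eq. (26) is an optical-depth integral along the characteristic. A minimal numerical sketch (our own illustration, with a made-up uniform depletion frequency):

```python
import numpy as np

def attenuation(nu_eff, r, v):
    """exp[-(1/v) * int_0^r nu_eff dr''] from Eq. (26), by trapezoidal cumsum.

    nu_eff : depletion frequency sampled along the characteristic [1/s]
    r      : positions along the characteristic [m], with r[0] = 0
    v      : neutral speed [m/s]
    """
    tau = np.concatenate(([0.0],
                          np.cumsum(0.5 * (nu_eff[1:] + nu_eff[:-1]) * np.diff(r))))
    return np.exp(-tau / v)

# Uniform nu_eff = 1e5 1/s over 0.1 m, neutral speed 1e4 m/s -> exp(-1) at the end
r = np.linspace(0.0, 0.1, 50)
print(attenuation(np.full_like(r, 1e5), r, 1e4)[-1])
```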
Similarly, the reflection or re-emission of the outflowing neutrals contributes to the emission of the same neutral species from the wall. In addition, a small fraction, \(\beta_{\rm assoc}\), of the absorbed \(\rm D^{+}\) and \(\rm D\) undergoes association processes, contributing to the D\({}_{2}\) emission. The resulting boundary condition for the D species is therefore \[\begin{split}f_{\rm D}(\mathbf{x}_{\rm b}^{\prime},\mathbf{v},t^{\prime})=&(1-\alpha_{\rm refl})\Gamma_{\rm emiss,D}(\mathbf{x}_{\rm b}^{\prime},t^{\prime})\chi_{\rm in,D}(\mathbf{x}_{\rm b}^{\prime},\mathbf{v},T_{b})\\ &+\alpha_{\rm refl}\left[f_{\rm D}(\mathbf{x}_{\rm b}^{\prime},\mathbf{v}-2\mathbf{v}_{\rm p},t^{\prime})+f_{\rm D^{+}}(\mathbf{x}_{\rm b}^{\prime},\mathbf{v}-2\mathbf{v}_{\rm p},t^{\prime})\right]\,,\end{split} \tag{27}\] where \(\chi_{\rm in,D}\) is the velocity distribution of the emitted neutrals and \(\mathbf{v}_{\rm p}\) is the velocity in the direction perpendicular to the wall. For the D\({}_{2}\) species we impose a similar boundary condition, \[\begin{split}f_{\rm D_{2}}(\mathbf{x}_{\rm b}^{\prime},\mathbf{v},t^{\prime})=&(1-\alpha_{\rm refl})\Gamma_{\rm emiss,D_{2}}(\mathbf{x}_{\rm b}^{\prime},t^{\prime})\chi_{\rm in,D_{2}}(\mathbf{x}_{\rm b}^{\prime},\mathbf{v},T_{b})\\ &+\alpha_{\rm refl}\left[f_{\rm D_{2}}(\mathbf{x}_{\rm b}^{\prime},\mathbf{v}-2\mathbf{v}_{\rm p},t^{\prime})+f_{\rm D_{2}^{+}}(\mathbf{x}_{\rm b}^{\prime},\mathbf{v}-2\mathbf{v}_{\rm p},t^{\prime})\right]\,.\end{split} \tag{28}\] The fluxes of the emitted neutrals take into account the probability of association, \(\beta_{\rm assoc}\), \[\Gamma_{\rm emiss,D}=\left(1-\beta_{\rm assoc}\right)\left(\Gamma_{\rm out,D}+\Gamma_{\rm out,D^{+}}\right) \tag{29}\] \[\Gamma_{\rm emiss,D_{2}}=\Gamma_{\rm out,D_{2}}+\Gamma_{\rm out,D_{2}^{+}}+\frac{\beta_{\rm assoc}}{2}\left(\Gamma_{\rm out,D}+\Gamma_{\rm out,D^{+}}\right)\,, \tag{30}\] and include the fluxes of ions to the walls due to their parallel motion, the diamagnetic and \(\mathbf{E}\times\mathbf{B}\) drifts, \(\Gamma_{\rm out,D^{+}}\) and \(\Gamma_{\rm out,D_{2}^{+}}\), as well as the outflowing fluxes of neutrals, \(\Gamma_{\rm out,D}\) and \(\Gamma_{\rm out,D_{2}}\). As detailed in Refs. [22, 40], Eq. (26) can be integrated in velocity space, obtaining an integral equation for the neutral densities, \(n_{\rm D}\) and \(n_{\rm D_{2}}\).
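Equations (29-30) are simple particle bookkeeping at the wall. The sketch below (a hypothetical helper, not GBS code) makes the balance explicit; the factor 1/2 accounts for two D atoms associating into one D\({}_{2}\) molecule:

```python
def emitted_fluxes(G_D, G_Dp, G_D2, G_D2p, beta_assoc=0.1):
    """Re-emitted neutral fluxes at the wall, following Eqs. (29-30)."""
    G_emiss_D = (1.0 - beta_assoc) * (G_D + G_Dp)
    G_emiss_D2 = G_D2 + G_D2p + 0.5 * beta_assoc * (G_D + G_Dp)
    return G_emiss_D, G_emiss_D2

# 3 outflowing D nuclei split into 2.7 emitted D atoms and 0.15 D2 molecules
print(emitted_fluxes(G_D=1.0, G_Dp=2.0, G_D2=0.5, G_D2p=0.1))
```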
The resulting equations can be simplified under the assumptions that the time of flight of the neutrals is shorter than the turbulence timescale and that the mean free path of the neutrals is shorter than the typical turbulence scale lengths in the parallel direction, obtaining a set of two-dimensional equations for the variables \(n_{\rm D}\) and \(n_{\rm D_{2}}\), that is \[\begin{split}n_{\rm D}(\mathbf{x}_{\perp})=&\int_{\rm S}n_{\rm D}(\mathbf{x}_{\perp}^{\prime})\nu_{\rm cx,D}(\mathbf{x}_{\perp}^{\prime})K_{p\to p}^{\rm D,D^{+}}(\mathbf{x}_{\perp},\mathbf{x}_{\perp}^{\prime})dA^{\prime}\\ &+\int_{\rm S}n_{\rm D_{2}}(\mathbf{x}_{\perp}^{\prime})\nu_{\rm cx,D_{2}\text{-}D^{+}}(\mathbf{x}_{\perp}^{\prime})K_{p\to p}^{\rm D,D^{+}}(\mathbf{x}_{\perp},\mathbf{x}_{\perp}^{\prime})dA^{\prime}\\ &+\int_{\rm S}2n_{\rm D_{2}}(\mathbf{x}_{\perp}^{\prime})\nu_{\rm diss,D_{2}}(\mathbf{x}_{\perp}^{\prime})K_{p\to p}^{\rm D,diss(D_{2})}(\mathbf{x}_{\perp},\mathbf{x}_{\perp}^{\prime})dA^{\prime}\\ &+\int_{\rm S}n_{\rm D_{2}}(\mathbf{x}_{\perp}^{\prime})\nu_{\rm diss\text{-}iz,D_{2}}(\mathbf{x}_{\perp}^{\prime})K_{p\to p}^{\rm D,diss\text{-}iz(D_{2})}(\mathbf{x}_{\perp},\mathbf{x}_{\perp}^{\prime})dA^{\prime}\\ &+\int_{\partial\rm S}(1-\alpha_{\rm refl}(\mathbf{x}_{\perp,b}^{\prime}))(1-\beta_{\rm assoc})\Gamma_{\rm out,D}(\mathbf{x}_{\perp,b}^{\prime})K_{b\to p}^{\rm D,reem}(\mathbf{x}_{\perp},\mathbf{x}_{\perp,b}^{\prime})da_{\rm b}^{\prime}\\ &+n_{\rm D[rec(D^{+})]}(\mathbf{x}_{\perp})+n_{\rm D[out(D^{+})]}(\mathbf{x}_{\perp})+n_{\rm D[diss(D_{2}^{+})]}(\mathbf{x}_{\perp})\,,\end{split} \tag{31}\] and \[\begin{split}n_{\rm D_{2}}(\mathbf{x}_{\perp})=&\int_{\rm S}n_{\rm D_{2}}(\mathbf{x}_{\perp}^{\prime})\nu_{\rm cx,D_{2}}(\mathbf{x}_{\perp}^{\prime})K_{p\to p}^{\rm D_{2},D_{2}^{+}}(\mathbf{x}_{\perp},\mathbf{x}_{\perp}^{\prime})dA^{\prime}\\ &+\int_{\partial\rm S}(1-\alpha_{\rm refl}(\mathbf{x}_{\perp,b}^{\prime}))\Gamma_{\rm out,D_{2}}(\mathbf{x}_{\perp,b}^{\prime})K_{b\to p}^{\rm D_{2}}(\mathbf{x}_{\perp},\mathbf{x}_{\perp,b}^{\prime})da_{\rm b}^{\prime}\\ &+\int_{\partial\rm S}(1-\alpha_{\rm refl}(\mathbf{x}_{\perp,b}^{\prime}))\frac{\beta_{\rm assoc}}{2}\Gamma_{\rm out,D}(\mathbf{x}_{\perp,b}^{\prime})K_{b\to p}^{\rm D_{2}}(\mathbf{x}_{\perp},\mathbf{x}_{\perp,b}^{\prime})da_{\rm b}^{\prime}\\ &+\int_{\rm S}n_{\rm D}(\mathbf{x}_{\perp}^{\prime})\nu_{\rm cx,D\text{-}D_{2}^{+}}(\mathbf{x}_{\perp}^{\prime})K_{p\to p}^{\rm D_{2},D_{2}^{+}}(\mathbf{x}_{\perp},\mathbf{x}_{\perp}^{\prime})dA^{\prime}\\ &+n_{\rm D_{2}[rec(D_{2}^{+})]}(\mathbf{x}_{\perp})+n_{\rm D_{2}[out(D_{2}^{+})]}(\mathbf{x}_{\perp})+n_{\rm D_{2}[out(D^{+})]}(\mathbf{x}_{\perp})\,,\end{split} \tag{32}\] which are coupled with two equations for the outgoing neutral fluxes, \(\Gamma_{\rm out,D}\) and \(\Gamma_{\rm out,D_{2}}\), \[\begin{split}\Gamma_{\rm out,D}(\mathbf{x}_{\perp,b})=&\int_{\rm S}n_{\rm D}(\mathbf{x}_{\perp}^{\prime})\nu_{\rm cx,D}(\mathbf{x}_{\perp}^{\prime})K_{p\to b}^{\rm D,D^{+}}(\mathbf{x}_{\perp},\mathbf{x}_{\perp}^{\prime})dA^{\prime}\\ &+\int_{\rm S}n_{\rm D_{2}}(\mathbf{x}_{\perp}^{\prime})\nu_{\rm cx,D_{2}\text{-}D^{+}}(\mathbf{x}_{\perp}^{\prime})K_{p\to b}^{\rm D,D^{+}}(\mathbf{x}_{\perp},\mathbf{x}_{\perp}^{\prime})dA^{\prime}\\ &+\int_{\rm S}2n_{\rm D_{2}}(\mathbf{x}_{\perp}^{\prime})\nu_{\rm diss,D_{2}}(\mathbf{x}_{\perp}^{\prime})K_{p\to b}^{\rm D,diss(D_{2})}(\mathbf{x}_{\perp},\mathbf{x}_{\perp}^{\prime})dA^{\prime}\\ &+\int_{\rm S}n_{\rm D_{2}}(\mathbf{x}_{\perp}^{\prime})\nu_{\rm diss\text{-}iz,D_{2}}(\mathbf{x}_{\perp}^{\prime})K_{p\to b}^{\rm D,diss\text{-}iz(D_{2})}(\mathbf{x}_{\perp},\mathbf{x}_{\perp}^{\prime})dA^{\prime}\\ &+\int_{\partial\rm S}(1-\alpha_{\rm refl}(\mathbf{x}_{\perp,b}^{\prime}))(1-\beta_{\rm assoc})\Gamma_{\rm out,D}(\mathbf{x}_{\perp,b}^{\prime})K_{b\to b}^{\rm D,reem}(\mathbf{x}_{\perp},\mathbf{x}_{\perp,b}^{\prime})da_{\rm b}^{\prime}\\ &+\Gamma_{\rm D[rec(D^{+})]}(\mathbf{x}_{\perp})+\Gamma_{\rm D[out(D^{+})]}(\mathbf{x}_{\perp})+\Gamma_{\rm D[diss(D_{2}^{+})]}(\mathbf{x}_{\perp})\,,\end{split} \tag{33}\] and \[\begin{split}\Gamma_{\rm out,D_{2}}(\mathbf{x}_{\perp,b})=&\int_{\rm S}n_{\rm D_{2}}(\mathbf{x}_{\perp}^{\prime})\nu_{\rm cx,D_{2}}(\mathbf{x}_{\perp}^{\prime})K_{p\to b}^{\rm D_{2},D_{2}^{+}}(\mathbf{x}_{\perp},\mathbf{x}_{\perp}^{\prime})dA^{\prime}\\ &+\int_{\partial\rm S}(1-\alpha_{\rm refl}(\mathbf{x}_{\perp,b}^{\prime}))\Gamma_{\rm out,D_{2}}(\mathbf{x}_{\perp,b}^{\prime})K_{b\to b}^{\rm D_{2}}(\mathbf{x}_{\perp},\mathbf{x}_{\perp,b}^{\prime})da_{\rm b}^{\prime}\\ &+\int_{\partial\rm S}(1-\alpha_{\rm refl}(\mathbf{x}_{\perp,b}^{\prime}))\frac{\beta_{\rm assoc}}{2}\Gamma_{\rm out,D}(\mathbf{x}_{\perp,b}^{\prime})K_{b\to b}^{\rm D_{2}}(\mathbf{x}_{\perp},\mathbf{x}_{\perp,b}^{\prime})da_{\rm b}^{\prime}\\ &+\int_{\rm S}n_{\rm D}(\mathbf{x}_{\perp}^{\prime})\nu_{\rm cx,D\text{-}D_{2}^{+}}(\mathbf{x}_{\perp}^{\prime})K_{p\to b}^{\rm D_{2},D_{2}^{+}}(\mathbf{x}_{\perp},\mathbf{x}_{\perp}^{\prime})dA^{\prime}\\ &+\Gamma_{\rm out,D_{2}[rec(D_{2}^{+})]}(\mathbf{x}_{\perp})+\Gamma_{\rm out,D_{2}[out(D_{2}^{+})]}(\mathbf{x}_{\perp})+\Gamma_{\rm out,D_{2}[out(D^{+})]}(\mathbf{x}_{\perp})\,,\end{split} \tag{34}\] where the integrals appearing in Eqs. (31-34) are carried out over the area, \(S\), of each poloidal plane or over its boundary, \(\partial S\). In Eqs. (31-34), the terms \(K_{i\to j}\) are the kernel functions presented in Ref. [22], where the integrals in velocity space are performed. The contributions to the neutral densities and fluxes at the wall that are proportional to the ion densities and fluxes include the contribution to \(n_{\rm D}\) coming from volumetric \(\rm D^{+}\) recombination, \[n_{\rm D[rec(D^{+})]}(\mathbf{x}_{\perp})=\int_{\rm S}n_{\rm D^{+}}(\mathbf{x}_{\perp}^{\prime})\nu_{\rm rec,D^{+}}(\mathbf{x}_{\perp}^{\prime})K_{p\to p}^{\rm D,D^{+}}(\mathbf{x}_{\perp},\mathbf{x}_{\perp}^{\prime})dA^{\prime}\,, \tag{35}\] from D\({}^{+}\) recombination at the boundary, \[\begin{split}n_{\rm D[out(D^{+})]}=&\int_{\partial\rm S}\Gamma_{\rm out,D^{+}}(\mathbf{x}^{\prime}_{\perp,\rm b})[(1-\alpha_{\rm refl}(\mathbf{x}^{\prime}_{\perp,\rm b}))(1-\beta_{\rm assoc})K^{\rm D,reem}_{b\to p}(\mathbf{x}_{\perp},\mathbf{x}^{\prime}_{\perp,\rm b})\\ &+\alpha_{\rm refl}K^{\rm D,refl}_{b\to p}(\mathbf{x}_{\perp},\mathbf{x}^{\prime}_{\perp,\rm b})]da^{\prime}_{\rm b}\,,\end{split} \tag{36}\] and from D\({}_{2}^{+}\) dissociation, \[\begin{split}n_{\rm D[diss(D_{2}^{+})]}=&\int_{\rm S}n_{\rm D_{2}^{+}}(\mathbf{x}^{\prime}_{\perp})[\nu_{\rm diss,D_{2}^{+}}(\mathbf{x}^{\prime}_{\perp})K^{\rm D,diss(D_{2}^{+})}_{p\to p}(\mathbf{x}_{\perp},\mathbf{x}^{\prime}_{\perp})\\ &+2\nu_{\rm diss\text{-}rec,D_{2}^{+}}(\mathbf{x}^{\prime}_{\perp})K^{\rm D,diss\text{-}rec(D_{2}^{+})}_{p\to p}(\mathbf{x}_{\perp},\mathbf{x}^{\prime}_{\perp})]dA^{\prime}\,.\end{split} \tag{37}\] The set of Eqs. (31-34) is discretized on a Cartesian grid, \((R,Z)\), written in matrix form and then numerically solved for \(n_{\rm D}\) and \(n_{\rm D_{2}}\).
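Schematically, the discretization turns the coupled integral equations into a dense linear system for the stacked unknowns. The sketch below illustrates only the algebraic structure; the matrix size and entries are placeholders, not the actual GBS kernels:

```python
import numpy as np

# Unknowns x = (n_D, n_D2, Gamma_out,D, Gamma_out,D2) on the neutral grid obey
# x = K x + b, with K the discretized kernel operators of Eqs. (31-34) and b
# the ion-driven terms (recombination, dissociation, ion outflow to the wall).
N = 200                                    # placeholder number of unknowns
rng = np.random.default_rng(0)
K = 1e-3 * rng.random((N, N))              # stand-in for the kernel matrix
b = rng.random(N)                          # stand-in for the ion-driven sources
x = np.linalg.solve(np.eye(N) - K, b)      # densities and boundary fluxes
```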
Once the densities are known, all the moments of the neutral distribution function can be evaluated (see Ref. [22]). ### Simulation setup In this work, we consider two simulations carried out with the GBS code implementing the model described in Sec. 2. The simulation parameters are based on an experimental dataset developed for validation studies, TCV-X21 [47]. TCV-X21 is a lower single-null L-mode discharge performed at a low toroidal magnetic field, with value at the magnetic axis \(B_{0}=0.95\) T, in the forward field direction (ion-\(\nabla B\) drift direction pointing from the core toward the X-point), with plasma current \(I_{p}=165\) kA. The upstream experimental density and electron temperature at the separatrix, taken as the reference density and temperature for the simulations, are \(n_{0}=0.6\times 10^{19}\) m\({}^{-3}\) and \(T_{e0}=35\) eV. This corresponds to an ion sound Larmor radius \(\rho_{s0}\simeq 1\) mm, a sound speed \(c_{s0}\simeq 4.1\times 10^{4}\) m/s and a reference time \(t_{0}=0.02\) ms. Given the explorative nature of the present study, the computational cost of the simulation is reduced by considering a domain corresponding to, approximately, half the size of the TCV tokamak (\(R/\rho_{s0}=450\)), i.e. \(L_{R}=300\,\rho_{s0}\), \(L_{Z}=600\,\rho_{s0}\) and \(L_{\varphi}=2\pi R_{0}\simeq 2800\,\rho_{s0}\). The dimensionless simulation parameters are \(\rho_{*}^{-1}=450\), \(\tau=1\), \(\eta_{0e}=3\times 10^{-4}\), \(\eta_{0{\rm D}^{+}}=\eta_{0{\rm D}^{+}_{2}}=2\times 10^{-2}\), \(\chi_{\parallel e}=20\), \(\chi_{\parallel{\rm D}^{+}}=\chi_{\parallel{\rm D}^{+}_{2}}=1\), \(m_{i}/m_{e}=2500\), \(\beta_{e0}=2\times 10^{-6}\), and \(\nu_{0}=0.05\). The diffusion coefficients for numerical stability are set to \(D_{f}=15\), with the field \(f=\{n,T_{e},T_{D^{+}},T_{D^{+}_{2}},\Omega,U_{\parallel e},v_{\parallel{\rm D}^{+}},v_{\parallel{\rm D}^{+}_{2}}\}\), and the cross-field transport associated with those terms is verified to be at least one order of magnitude lower than the effective transport coefficients, as evaluated from the analysis of our results (see Sec. 4). The amplitude of the temperature source is chosen so that the power source, integrated over the core region, corresponds to the estimated experimental value of the power crossing the separatrix in the TCV-X21 case, \(P_{\rm sep}=120\) kW [47]. These parameters are chosen to mimic the typical conditions found in L-mode diverted discharges, as described in Ref. [48], where turbulent transport is mostly interchange driven. Recycling is not considered on the top and right walls, where no strike points are present. A constant reflection coefficient, \(\alpha_{\rm refl}=0.2\), and an association coefficient, \(\beta_{\rm assoc}=0.1\), are considered on the left and bottom walls [4]. A gas puff is located on the bottom wall, with a narrow Gaussian profile centered at the coordinate \(R=450\,\rho_{s0}\), corresponding, approximately, to one of the gas puff positions present in TCV. The neutrals are puffed from the wall at room temperature, \(T_{\rm wall}=T_{\rm GP,D_{2}}=0.03\) eV. The two simulations presented in this work have the same setup, except for the strength of the D\({}_{2}\) gas puff on the bottom wall. In the first simulation we introduce no puffing; the presence of neutrals therefore results only from plasma recycling and recombination processes. We label this simulation as _low density_.
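As a quick consistency check of the reference values quoted above, one can recompute them from \(n_{0}\), \(T_{e0}\) and \(B_{0}\); the major radius entering \(t_{0}\) is assumed here to be the nominal TCV value:

```python
import numpy as np

e, m_D = 1.602e-19, 2.0 * 1.673e-27        # elementary charge [C], deuteron mass [kg]
T_e0, B0, R0 = 35.0, 0.95, 0.88            # eV, T, m (R0: nominal TCV major radius)

c_s0 = np.sqrt(T_e0 * e / m_D)             # reference sound speed [m/s]
Omega_cD = e * B0 / m_D                    # D+ cyclotron frequency [rad/s]
rho_s0 = c_s0 / Omega_cD                   # ion sound Larmor radius [m]
t_0 = R0 / c_s0                            # reference time [s]
print(f"c_s0 = {c_s0:.2e} m/s, rho_s0 = {rho_s0*1e3:.2f} mm, t_0 = {t_0*1e3:.3f} ms")
# -> c_s0 ~ 4.1e4 m/s, rho_s0 ~ 0.9 mm, t_0 ~ 0.02 ms, as quoted above
```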
In the second simulation, we increase the neutral and plasma densities by introducing a gas puff of D\({}_{2}\), labelling it as the _high density_ simulation. The two simulations allow us to explore the dynamics at two different separatrix densities: the low-density simulation is characterized by \(n_{e,{\rm sep}}=1.62\times 10^{19}\,{\rm m}^{-3}\) and the high-density one by \(n_{e,{\rm sep}}=3.42\times 10^{19}\,{\rm m}^{-3}\), at \(Z=Z_{\rm axis}=0\). Regarding the numerical parameters of our simulations, we use a plasma grid of \(N_{R}\times N_{Z}\times N_{\phi}=150\times 300\times 64\) points, while the neutral grid is \(N_{R}^{n}\times N_{Z}^{n}\times N_{\phi}^{n}=50\times 100\times 64\). The time step for the plasma evolution is \(\Delta t\simeq 3\times 10^{-5}\,t_{0}\), while the solution of the neutral model is evaluated every \(\Delta t\simeq 3\times 10^{-2}\,t_{0}\) [23]. The initial conditions of the low-density simulation are provided by a quasi-steady-state simulation with only atomic neutral interactions [23]. A turbulent quasi-steady state in the low-density simulation is reached after approximately \(20\,t_{0}\), when the losses at the vessel balance the particle sources and the plasma and neutral quantities oscillate around constant values. The high-density simulation is then obtained by introducing the D\({}_{2}\) gas puff in the quasi-steady state of the low-density simulation. The time-averaged profiles are evaluated over an interval \(\Delta t=10\,t_{0}\) during the quasi-steady states of the simulations. In the analysis, we also present flux-tube averages that leverage the magnetic-field-aligned coordinate system introduced in Sec. 2. The poloidal coordinate \(\chi\) goes from \(\chi=0\) at the inner strike point (ISP) to \(\chi=1\) at the outer strike point (OSP). The radial coordinate is expressed as \(\rho_{\psi}=\sqrt{(\psi-\psi_{\rm axis})/(\psi_{\rm LCFS}-\psi_{\rm axis})}\), where \(\psi_{\rm LCFS}\) and \(\psi_{\rm axis}\) are the poloidal flux function values at the last closed flux surface and at the magnetic axis, so that \(\rho_{\psi}=1\) at the last closed flux surface. ## 3 The plasma and neutral turbulent dynamics In this section, we provide an overview of the simulation results presented in this work, focusing on the turbulent dynamics of the plasma and neutral species and their interactions. The density profiles of the plasma and neutral species are detailed in Sec. 3.1. In Sec. 3.2 we discuss the pressure profile, together with the analysis of the energy sink due to neutral interactions. The temperature profile is the subject of Sec. 3.3. We present the electric field appearing in our simulations in Sec. 3.4. Finally, the formation of a density shoulder is the subject of Sec. 3.5. ### Plasma and neutrals density profiles In Fig. 1, time- and toroidally-averaged profiles of the electron and molecular ion densities on the poloidal plane are shown for the low- and high-density simulations, together with a typical snapshot of their fluctuations, normalized to their average values (we denote with a tilde the fluctuating quantities and with an overline their time- and toroidally-averaged values, e.g. \(n_{\mathrm{e}}=\tilde{n}_{\mathrm{e}}+\overline{n}_{\mathrm{e}}\)). The high-density simulation presents not only an increased core density, but also an increased density in the SOL region, with a higher level of turbulent fluctuations. A similar increase of the turbulent fluctuation amplitude with density is reported in Refs. [32, 48].
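The decomposition \(n=\overline{n}+\tilde{n}\) used throughout the analysis can be sketched as follows (array shapes are synthetic, chosen only for illustration):

```python
import numpy as np

def decompose(n):
    """Split a field n(t, phi, R, Z) into its time- and toroidally-averaged
    part and the fluctuation, n = n_bar + n_tilde."""
    n_bar = n.mean(axis=(0, 1))            # average over time and toroidal angle
    n_tilde = n - n_bar                    # fluctuating part, zero mean
    return n_bar, n_tilde

# Relative fluctuation level, as plotted in Fig. 1 (synthetic data)
n = 1.0 + 0.1 * np.random.default_rng(1).standard_normal((50, 16, 30, 40))
n_bar, n_tilde = decompose(n)
print((n_tilde.std(axis=(0, 1)) / n_bar).mean())     # ~ 0.1
```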
The \(\mathrm{D}_{2}^{+}\) density is, at least, two orders of magnitude lower than the electron one and, as a consequence, \(n_{\mathrm{D}^{+}}\simeq n_{\mathrm{e}}\), as assumed by our model. The high-density simulation exhibits a strong enhancement of the plasma density in the private flux region close to the OSP, not observed at the ISP, resulting from the balance of the fluxes and the colder target existing at higher plasma density, as discussed in Sec. 4. The density of molecular ions is large in the region close to the targets, with a negligible value inside the last-closed flux surface. In Fig. 2 we show the time- and toroidally-averaged profile of the neutral densities and of the ion density sources, \(S_{\mathrm{iz,D}^{+}}\) and \(S_{\mathrm{iz,D}_{2}^{+}}\), where only direct ionizations of D and D\({}_{2}\), respectively, are taken into account.

Figure 1: Time- and toroidally-averaged profiles and typical snapshot of the normalized fluctuations of the electron density, \(n_{\mathrm{e}}\), and molecular ion density, \(n_{\mathrm{D}_{2}^{+}}\), for the low-density (top row) and high-density (bottom row) simulations.

At low density, neutrals result from recombination processes at the wall and are recycled at the target, with most of the ionizations occurring close to the targets. With the introduction of the gas puff, molecular neutrals penetrate deeper into the tokamak volume and the ionization front enters the edge and core regions. At the same time, the atomic neutral density increases in the high-density simulation in the whole SOL volume. This is due both to the increase of the recombination processes and to the decrease of the ionization processes the neutrals undergo in the SOL. Focusing on recombination processes, we observe that atomic neutrals are produced through recombination, dissociation of D\({}_{2}\) or D\({}_{2}^{+}\) molecules and molecular activated recombination (MAR) processes, as Table 1 shows. By performing a series of simulations where we artificially remove one of these reactions at a time, we identify MAR reactions as the main source of \(n_{\rm D}\) in our high-density simulation. Indeed, they account for 40% of the produced neutrals. This result is in agreement with experimental findings, where MAR in high-density discharges is estimated to be dominant compared to other recombination channels [10, 7]. Regarding the ionization processes, we note that their decrease in the target regions is a consequence of the strong decrease of the local temperature to values smaller than 3 eV (see Sec. 3.3).

Figure 2: Time- and toroidally-averaged poloidal atomic neutral density, \(n_{\rm D}\), molecular neutral density, \(n_{\rm D_{2}}\), D\({}^{+}\) ion density source, \(S_{n_{\rm D^{+}},\rm{iz}}\), and D\({}_{2}^{+}\) ion density source, \(S_{n_{\rm D^{+}_{2}},\rm{iz}}\), for the low-density (top row) and high-density (bottom row) simulations.

### Power losses and pressure drop

A detached scenario is characterized by a significant pressure drop between the upstream outer mid-plane (OMP) region and the target [5], often observed together with the increase of radiative losses due to a set of interactions with neutrals, which are important in the SOL region up to the X-point [1, 14]. The pressure drop is also the result of momentum loss mechanisms, e.g. due to ion-neutral charge-exchange reactions [4, 49]. In order to investigate the pressure drop in our simulations, we first consider the energy losses due to plasma-neutral interactions. In our model, they are obtained by combining the density and temperature sources in Eqs. (1-9), and they appear in Eq. (15).
The losses associated with the neutral-plasma reactions considered in our model, evaluated separately in order to estimate their relative importance, are shown in Fig. 3 along a flux tube close to the separatrix, \(1\leq\rho_{\psi}\leq 1.08\), as a function of the poloidal coordinate \(\chi\). In the low-density simulation, ionization processes are relevant only in the target region and energy losses are present only below the X-point (\(\chi_{\rm Xpt,HFS}=0.05\) and \(\chi_{\rm Xpt,LFS}=0.86\), where HFS and LFS denote the high- and low-field sides), where both ionization and charge-exchange losses are important due to the significant neutral density. On the other hand, the high-density simulation presents strong energy losses also above the X-point, where the ionization sink peaks. We point out that the integral of the energy losses above the X-point in the high-density simulation is twenty times as high as in the low-density one, mainly due to the high \(n_{\rm D}\) in the SOL resulting, as already discussed, from the molecular interactions [7, 12].

Figure 3: Time- and toroidally-averaged energy sink due to plasma-neutral interactions for the low-density (left) and high-density (right) simulations, along a flux tube close to the separatrix, \(1\leq\rho_{\psi}\leq 1.08\), as a function of the poloidal coordinate \(\chi\). The black dashed lines denote the X-point coordinates. MAI stands for Molecular Activated Ionization, CX-EX for the sum of charge-exchange reactions, and EIR for Electron-Ion Recombination.

Focusing on the divertor legs, we note that both atomic and molecular ionization losses are practically absent close to both targets, a feature already observed in detachment experiments [1]. On the outer divertor leg, \(0.85\leq\chi\leq 0.95\), the main energy sink is due to radiative losses caused by D\({}_{2}\) and D\({}_{2}^{+}\) dissociation, together with charge-exchange reactions, which dominate closer to the target. On the other hand, along the inner divertor leg, for \(\chi<0.08\), we observe that charge-exchange reactions are the main loss mechanism over a wider region than for the outer leg. We now turn to the pressure drop appearing in our simulations. Figure 4 shows the time- and toroidally-averaged total pressure and D\({}^{+}\) ion parallel velocity along the same flux tube as in Fig. 3. We evaluate the total pressure as \[p_{\rm tot}=n_{\rm e}T_{\rm e}+n_{\rm D^{+}}\left(T_{\rm D^{+}}+m_{\rm D^{+}}v_{\parallel,\rm D^{+}}^{2}\right) \tag{38}\] since the \(n_{\rm D^{+}_{2}}\) as well as the electron dynamic pressure contributions are negligible. In both simulations the D\({}^{+}\) fluid presents a stagnation point close to the OMP, \(\chi=0.7\), and the magnitude of the velocity increases toward the two targets, as observed in previous simulations [31]. However, the high-density simulation presents lower velocities at both divertor targets, as expected from the large number of charge-exchange reactions and the low temperature (see Sec. 3.3). Both pressure profiles present a maximum around the OMP, where the density and temperature are higher. In the high-density simulation, the total pressure drop is larger than in the low-density case. Comparing Fig. 3 with Fig. 4, it is possible to observe that the power loss peaks at \(\chi=0.18\) and \(\chi=0.80\) (see Fig. 3), where a strong pressure drop occurs, indicating that the plasma-neutral interactions described above play a crucial role in determining the pressure profile in our simulations.
Figure 4: Averaged ion parallel velocity (left) and total plasma pressure (right) along a flux tube close to the separatrix, \(1\leq\rho_{\psi}\leq 1.1\), as a function of the poloidal coordinate \(\chi\). The black dashed lines denote the X-point coordinates.

### Plasma and neutral temperature

In addition to significant power losses, detachment scenarios are characterized by low temperatures, in particular in the divertor region [4], resulting from a significant rate of neutral-plasma reactions. Indeed, the relative importance of the atomic reactions is mainly determined by the plasma temperature profile [22]. The temperatures of all the species present in our simulations are shown in Fig. 5. In both simulations, the temperatures of the molecular species are lower than those of the atomic species, while the D\({}^{+}\) and electron temperatures are very similar. At low density, the plasma temperature decreases because of the ionization processes occurring close to the target, causing a steep temperature gradient. Since the temperature remains above 3 eV, recombination and dissociation reactions are negligible and neutrals are emitted mainly from the wall in this simulation. A fraction, \(\alpha_{\rm refl}=0.2\), is emitted at the incoming ion temperature, while the rest is released at the wall temperature, \(T_{\rm wall}=0.03\) eV, explaining the value of \(T_{\rm D}\). On the other hand, in the high-density simulation, the temperature of the charged species is sufficiently high (\(T_{\rm e}>3\) eV) for neutral ionization to occur only above the X-point, while this is not the case closer to the target, a condition that is denoted as power starvation [50]. In turn, neutrals are produced in the divertor volume through dissociation and recombination processes, since the temperature is lower than 3 eV (see Table 2), and not only at the wall, as in the low-density simulation. Due to the asymmetries in the density profiles of the molecules, ultimately determined by the gas puff position, the temperature of the molecular species at the target is asymmetric between the ISP and the OSP. The plasma energy losses due to charge-exchange are dominant at the targets (see Fig. 3), lowering the plasma temperature. In particular, the presence of the D\({}_{2}\) puff at the outer target leads to a higher D density, resulting in increased charge-exchange processes and in \(T_{\rm D^{+}}\simeq T_{\rm D}\).

Figure 5: Time- and toroidally-averaged profiles of all the species temperatures for the low-density (left) and high-density (right) simulations, in a flux tube close to the separatrix, \(1\leq\rho_{\psi}\leq 1.1\), as a function of the poloidal coordinate \(\chi\). The black dashed line denotes the coordinates of the X-point.
The increase of density leads to higher \(\phi\) values in the SOL region at the LFS, except close to the targets, where the potential decreases. This results in the presence of an electric field pointing toward both targets at both strike point regions in the high-density simulation. This is relevant to explain the transport mechanisms at play in the high-density simulation, as described in Sec. 4. We study the origin of the electric field by analysing the generalized Ohm's law, Eq. (4), which defines the relationship between parallel gradients of potential, electron temperature, pressure and parallel current, that is \[E_{\parallel}=-\nabla_{\parallel}\phi=\frac{\nabla_{\parallel}p_{e}}{n_{e}e}+ 0.71\frac{\nabla_{\parallel}T_{e}}{e}-\nu j_{\parallel}\,, \tag{39}\] Figure 6: Time- and toroidally-averaged plasma potential in the low-density (left) and high-density (right) simulation. having neglected electron inertia. In Fig. 7 the time- and toroidally-averaged contributes to \(E_{\parallel}\) appearing in Eq. (39) are shown along the radial direction in a flux tube close to the separatrix. In both simulations the main contribution to the parallel electric field is given by the term \(\nu j_{\parallel}\). In the high-density case the relative importance of this term is increased, especially close to the target where the temperature is lower, being \(\nu\propto T_{\rm e}^{-3/2}\) (see Eq. (13)). Experimentally, it is observed that in discharges with relatively high temperature at the target, \(T_{\rm e}\geq 20\) eV, the resistive term do not influence the plasma potential at the OMP [55]. Our results show that collisionality determines the potential along the flux tube when the plasma temperature is low. ### Mid-plane plasma profiles: density shoulder and turbulent transport Experimental operation at high density, achieved through the increase of gas throughput, reveal the tendency to develop flatter density profiles generally associated with an increased level of turbulence, a phenomenon know as _density shoulder_[33, 56]. Previous numerical investigation using GBS, which do not include molecular interaction terms, show an increase of turbulence level and the flattening of the pressure profile with the increase of fuelling. In those simulations, the density increase results from the increase of atomic neutrals interactions [31], mimicked by increasing the plasma resistivity when those interactions are not included [32, 48]. We investigate the density shoulder formation in the present simulations. Density and pressure profiles for the two simulations considered in this work are shown in Fig. 8, normalized to their value at the separatrix. We identify two decay lengths, one in the near SOL, \(1\leq\rho_{\psi}\leq 1.1\), and one in the far SOL, in agreement with several experiments [57, 33] and simulations [48]. The increase of density yields an increase of the near Figure 7: Time- and toroidally-averaged parallel electric field and its components appearing in the generalized Ohm’s law Eq. (39), in a flux tube close to the separatrix, \(1\leq\rho_{\psi}\leq 1.08\), as a function of the poloidal coordinate. The black dashed line denotes the coordinates of the X-point. SOL decay lengths of the density and pressure. In agreement with the simulations that include only atomic contribution [31], also in the present simulations, the density shoulder appears in combination with a strong reduction of temperature and parallel velocity at the target. 
In order to estimate the perpendicular transport, we consider in Fig. 9 the radial profiles at the OMP of the averaged \(E\times B\) flux, \(\overline{\Gamma}_{E\times B}\), of the effective transport coefficient, \(D_{E\times B,\mathrm{eff}}=\overline{\Gamma}_{E\times B}/|\overline{\nabla}n_ {e}|\), and of the effective velocity, \(v_{E\times B,\mathrm{eff}}=\overline{\Gamma}_{E\times B}/\overline{n}_{e}\). These include both the time- and toroidally-averaged steady state and the fluctuating flux components of the \(E\times B\) flux, \(\overline{\Gamma}_{E\times B}=\Gamma_{\overline{E}\times\overline{B}}+\Gamma _{\overline{E}\times\overline{B}}\), with \(\Gamma_{\overline{E}\times\overline{B}}=\overline{n}_{\mathrm{e}}(\overline{ E}\times\overline{B})/B^{2}\) and \(\Gamma_{\overline{E}\times\overline{B}}=\overline{\tilde{n}_{\mathrm{e}}( \widetilde{E}\times\bar{B})}/B^{2}\). We note that the \(E\times B\) flux is significantly larger than the diamagnetic flux, neglected in the present analysis. Focusing on the SOL, the fluctuating component accounts for half the total flux in the low-density simulation, while its relative importance increases at high density, up to 70% of the total flux. In the SOL of the high-density simulation, not only \(\Gamma_{E\times B}\) is larger compared to the low-density case, as expected from the higher density values, but also the effective diffusion coefficient \(D_{E\times B,\mathrm{eff}}\) is larger. In addition, the effective velocity, \(v_{E\times B,\mathrm{eff}}\), is larger in the high-density simulation for \(\rho_{\psi}>1.15\). These results imply that the higher flux in the far SOL is not only an effect of the increased density, in contrast to the results of simulations that include only atomic deuterium in a simplified magnetic configuration [31]. Both experimental results and simulations show that the far SOL turbulent flux in L-mode tokamak discharges is mostly the result of the motion of coherent filamentary structures, denoted as blobs. In GBS simulations, blobs are identified with an algorithm developed and used for the analysis of previous GBS results [58, 32], which was recently Figure 8: Time- and toroidally-averaged electron density and total plasma pressure at the OMP. The dashed lines show the linear fits that identify the near SOL the far SOL decay lengths. extended to detect their three-dimensional structure. The algorithm finds the regions where the density fluctuations are 2.5 times above the local standard deviation and tracks them in time. A fluctuation is identified as a blob if it is detected over an area of, at least, \(20\rho_{s0}^{2}\) on a poloidal plane and it has a toroidal extension above \(\pi R_{0}/5\). The blob detection algorithm fits the blob density perturbation in the poloidal plane with a gaussian function. From the blob center of mass motion, identified as the center of the fitting gaussian function, we retrieve the time-average components of the blob velocity in the poloidal plane, \(v_{b,\psi}(R,Z)\) and \(v_{b,\chi}(R,Z)\). Our analysis covers a time interval sufficiently large to ensure the statistical convergence of the blob properties. In agreement with Ref. [48] and also in agreement with experimental results [34], blobs in our simulations are typically in the resistive ballooning regime, where blob velocity increases with their size and with the SOL resistivity [59, 31]. In Fig. 
10 the radial and poloidal components of the blob velocity, \(v_{\psi}\) and \(v_{\chi}\), are plotted at the OMP, as a function of the distance from the separatrix. The velocity \(v_{\psi}\) is normalised to the flux expansion \(f_{x}\), since the radial velocity of a field-aligned structure is expected to be constant over a flux surface [60]. At low density, both the ratio \(v_{\psi}/f_{x}\) and \(v_{\chi}\) increase with \(\rho_{\psi}\) in the near SOL and flatten in the far SOL, while in the high-density simulation the blob velocity increases through the SOL. These trends are in qualitative agreement with experimental results, showing also the same order of magnitude [33, 34, 60]. As expected from Refs. [60, 61], the magnitude of the blob radial velocity is the same as the effective \(v_{E\times B,\rm eff}\) (see Fig. 9). To compare the blob velocity dependence on \(\chi\), we present the poloidal profile of the blob velocity components in Fig. 11, from the OMP to the X-point, averaged over \(\rho_{\psi}\geq 1.1\). We find higher radial velocity in the entire SOL region for higher density, while poloidal velocities are the same in the two simulations up to the X-point. Both in the low- and high-density simulation, \(v_{\psi}\) and \(v_{\chi}\) decrease for increasing \(\chi\), from the OMP to the X-point. Experimental measurements of the blobs velocities in TCV and Alcator C-mod discharges, made at different distance from the X-point, show the same tends as our simulations [62, 60]. To conclude, our blob tracking analysis shows an increase in radial velocity with increasing fuelling, which leads to the larger turbulent \(\Gamma_{E\times B}\) and to the larger effective perpendicular transport shown in Fig. 9. The increased perpendicular transport, together with the decreased parallel flux, yields a larger \(\Gamma_{\perp}/\Gamma_{\parallel}\) ratio, going from \(\Gamma_{\perp}/\Gamma_{\parallel}\simeq 0.05\), in the low-density simulation, to \(\Gamma_{\perp}/\Gamma_{\parallel}\simeq 0.1\) at higher density. The increase of this ratio is ultimately responsible for the density shoulder formation [31, 63]. Compared to single-ion simulations, we observe that the introduction of molecular interactions lowers the SOL plasma temperature, leading to higher resistivity and faster blobs [32], ultimately increasing the perpendicular transport and the ratio \(\Gamma_{\perp}/\Gamma_{\parallel}\). Figure 11: Radial, \(\psi\), (left) and poloidal, \(\chi\), (right) components of the blob center of mass velocity, averaged over all blobs, in a flux tube in the far SOL, \(\rho_{\psi}\geq 1.1\). The values are expressed as a function of the poloidal coordinate \(\chi\), from the OMP, \(\chi_{\rm OMP}=0.64\) to the X-point \(\chi_{\rm Xpt}=0.87\). Figure 10: Radial, \(\psi\), (left) and poloidal, \(\chi\), (right) components of the blob center of mass velocity. The radial component is divided by the flux-expansion \(f_{x}\). The positive direction of the poloidal coordinate goes from the ISP to the OSP. The velocity is the average over all blobs, at the OMP region. ## 4 Fluxes to the divertor targets and detachment In this section we present the analysis of the particle and heat fluxes to the divertor targets, showing that detachment conditions are achieved at the inner target of the high-density simulation. Detachment is characterized by reduced ion and heat fluxes at the divertor targets compared to attached discharges [5]. 
We start by showing the ion flux profiles at the target and in the divertor volume in Sec. 4.1, followed by the evaluation of the Degree of Detachment (DOD) in Sec. 4.2 and by the analysis of the target heat flux in Sec. 4.3. ### Ion particle flux at the target and particle balance When the density is ramped up in a tokamak discharge, the divertor moves across different recycling conditions, from attached to high-recycling and then to partial and, finally, full detachment conditions [5, 14]. The saturation of the target ion flux identifies the onset of detachment. This occurs when, as the plasma density increases, the peak ion flux at the target no longer increases and the reduction of the ion flux integrated over the target area is observed. The onset of detachment at the ISP and the OSP in lower-single null discharges often occurs at a different level of core density [13], with the differences between the two legs depending on the toroidal magnetic field direction, pointing out to a possible role of the \(E\times B\) drift [14]. In fact, simulations of the different phases of detachment, carried out with SOLPS-ITER, show that the introduction of drifts improves the comparison with experimental measurements [16]. In Fig. 12, we show the profile of the particle flux to the wall in our simulations. The region that surrounds the ISP at the left wall and, similarly, a region of the bottom wall around the OSP are considered. The ion flux at the target is evaluated as the sum of the parallel flow and the drift motion in the direction perpendicular to the target, that is \[\Gamma_{\mathrm{D}^{+},j}=n_{\mathrm{D}^{+}}(v_{\parallel\mathrm{D}^{+},j}+v _{\perp\mathrm{D}^{+},j})=n_{\mathrm{D}^{+}}(v_{\parallel\mathrm{D}^{+}}b_{j} +v_{E\times B,\mathrm{D}^{+},j}+v_{di,j})\,, \tag{40}\] where \(b_{j}\) is the \(j\) component of the unit vector along the direction of the magnetic field, with \(j=R\) or \(Z\), for the ISP and OSP, respectively. Starting the analysis from the low-density simulation, we note that the larger contribution to the flux is given by the parallel flux at both targets. The ion flux at the ISP is approximately symmetric around its maximum, which is located in the SOL (\(\rho_{\psi}>1\)). On the other hand, it is possible to identify two peaks of the ion flux at the OSP, one in the SOL and a second one in the private flux region (\(\rho_{\psi}<1\)). The parallel flux is responsible for the peak in the SOL, while the \(E\times B\) flux is dominant for \(\rho_{\psi}<1\), transporting density from the ISP to the OSP. This is consistent with SOLPS-ITER simulations showing that the inclusion of \(E\times B\) drift in the plasma dynamics can lead to the formation of a hollow profile of the ion flux to the target, as well as a higher density at the ISP than at the OSP in forward field configuration and vice-versa [64, 65]. The density increase affects the ion particle flux differently at the two targets. At the ISP, the integral of the flux decreases by, approximately, 50% with respect to the low-density simulation, and the peak of the flux is located further from the separatrix, deeper into the SOL. The decrease of the ion particle flux at the ISP is the consequence of the reduction of the parallel velocity, caused by the decrease of the electron and ion temperatures. This is also observed in previous single-component GBS simulations with the increase of the fuelling rate [31]. 
In contrast, the ion particle flux increases with the density at the OSP, both in the SOL and in the private flux regions, showing a single peak located inside the separatrix. This is due to a lower reduction of the parallel velocity at the OSP than at the ISP, with respect to the low-density simulation. In fact, the parallel velocity is proportional to the ion temperature (see Fig. 5), which is lowered by an increase of the density due to the ionization reactions occuring at the LFS, as shown in Fig. 2. The differences in the location of the peak of the ion particle flux between the low- and high-density simulations is explained by the different \(E\times B\) drift present in the two simulations. In Fig. 13 we present the vector plot of the contributions to the ion particle flux on the poloidal plane in the two simulations, with the colormap representing the flux module. In the low-density simulation, the flux is dominated in the SOL by the parallel flow and the \(E\times B\) drift is comparable to it only inside the private flux region close to the OSP. While the parallel flux peaks close to the strike point, the \(E\times B\) drift transports plasma from the OSP to the ISP. In the high-density simulation, the contribution of the \(E\times B\) drift increases, as expected from Fig. 6. The drift creates a convective cell of circulating plasma, transporting ions from above the X-point to the far SOL, and from the OSP to the X-point. The larger value of the \(\Gamma_{E\times B,\mathrm{D}^{+}}\) flux, with respect to the parallel flux in the high-density simulation, increases the density inside the separatrix at the OSP (\(\rho_{\psi}=0.98\)), while at the ISP the \(E\times B\) drift moves the peak to \(\rho_{\psi}=1.12\) (see Fig. 12). Figure 12: Time- and toroidally-averagd ion particle flux at the target, for both strike points, as a function of the normalized poloidal flux function. The decrease of the particle flux at the ISP in the high-density simulation is mainly caused by the reduction of its parallel component and by the decrease of the plasma sources, while the role of the \(E\times B\) drift is small. The asymmetries between the ion fluxes at the targets are, ultimately, generated by the asymmetries in the plasma-neutral interactions (see Fig. 3) and are strengthened by the effect of the \(E\times B\) drift [54, 14, 13], which is a consequence of the temperature profile set by molecular reactions. ### Degree of Detachment The two point model, derived to relate the evolution of the upstream profiles to the target density and temperature, establish a quadratic dependence of the ion flux at the target from plasma density upstream, \(\Gamma_{\rm D^{+}}\simeq Cn_{\rm upD^{+}}^{2}\)[4]. This model, validated in several devices (e.g. JET [5] or ASDEX [14]) is valid for density values below the detachment onset. Based on the two-point model result, the divertor recycling state is often characterized by the Degree Of Detachment, defined as \(\rm DOD=C\frac{n_{\rm upD^{+}}^{2}}{\Gamma_{\rm D^{+}}}\)[5]. If \(\rm DOD>1\), the measured flux at the target is lower than expected from the two-point model and the plasma is said to be detached. We note that the calibration constant Figure 13: Vector plot of the time- and toroidally-averaged components of the ion flux, projected on the poloidal plane, in the divertor volume, for the low-density (top row) and high-density (bottom row) simulation. The colormap represents the module of the flux. 
depends on the power crossing the separatrix and the connection length of the specific flux tube. In Fig. 14 we show the DOD profile of the high-density simulation, at both targets, considering flux tubes at different \(\rho_{\psi}\). The two simulations we consider have the same input power in the SOL and the same magnetic geometry, therefore we evaluate \(C\) from the low-density simulation and use this value to determine if the high-density simulation is in a detached state. This methodology is equivalent to considering that the simulation at lower density is not detached, an hypothesis that gives us a lower limit on the DOD value. The density upstream is evaluated as the average density at the separatrix at \(Z=0\). At the ISP we observe \(\mathrm{DOD}>1\), across the entire SOL region, as expected from the observed decrease of the ion flux. At the OSP we also observe \(\mathrm{DOD}>1\), in particular at increasing distance from the separatrix, even if the ion flux is larger than in the low-density simulation. We can conclude that the high-density simulation presents a detached ISP with features that are compatible with the experimental conditions observed at density values larger than the ones necessary for the detachment onset. On the other hand, the OSP is in a partially detached state, where the particle flux reduction is not observed [49]. ### Target heat flux We evaluate the sum of the \(\mathrm{D}^{+}\) ion and electron heat flux considering the contribution from conduction and convection due the parallel and drift fluxes, that is \[\begin{split} q_{\mathrm{tot},j}=&\frac{3}{2}T_{ \mathrm{D}^{+}}\Gamma_{\mathrm{D}^{+},j}-(\chi_{\perp,\mathrm{D}^{+}}\nabla_{ \perp,j}T_{\mathrm{D}^{+}}+b_{j}\,\chi_{\parallel,\mathrm{D}^{+}}\nabla_{ \parallel}T_{\mathrm{D}^{+}})\\ &+\frac{3}{2}T_{\mathrm{e}}\Gamma_{\mathrm{e},j}-(\chi_{\perp, \mathrm{e}}\nabla_{\perp,j}T_{\mathrm{e}}+b_{j}\,\chi_{\parallel,\mathrm{e}} \nabla_{\parallel}T_{\mathrm{e}})\,,\end{split} \tag{41}\] Figure 14: Degree of Detachment, \(\mathrm{DOD}=Cn_{\mathrm{D}^{+},\mathrm{up}}^{2}/\Gamma_{\mathrm{D}^{+}}\), for flux tube at different locations, in the high-density simulation. where \(\Gamma_{\rm D^{+},j}\) is defined in Eq. (40) and an analogous definition is used for \(\Gamma_{\rm e,j}\), the diffusion coefficients \(\chi_{\perp}\) and \(\chi_{\parallel}\) are those appearing in Eqs. (7-8) and \(b_{j}\) is the component of the magnetic field unit vector, with \(j=R\) or \(Z\). The \(\rm D^{+}_{2}\) contribution to the flux is neglected, since \(n_{\rm D^{+}_{2}}\ll n_{\rm D^{+}}\). The time-averaged profiles of the heat flux for the two targets are shown in Fig. 15. In the low-density case, the heat flux is mainly determined by conduction and parallel electron convection, equally contributing with their sum and accounting for 80% of the total heat flux. As a consequence, the heat flux peak is located at the strike points where the parallel ion particle flux peaks, both at the ISP and at the OSP. The heat flux is lower at both targets in the high-density simulation with respect to the low-density one, showing a flat profile at the ISP, where the pressure is lower (see Fig. 4). The contribution of \(\rm D^{+}\) ions, largely dominated by convection, increases up to 40% of the total heat flux in the high-density simulation. As observed for the particle flux, the heat flux decrease is stronger for the ISP, due to the stronger reduction of parallel convection toward the target. 
We point out that our simulations retrieve the experimental observations of heat flux decrease with the simultaneous increase of upstream radiative and momentum losses (see Fig. 3) at the onset of detachment [1, 4]. ## 5 Conclusions In this work the first multi-component simulations of plasma turbulence coupled to kinetic neutral dynamics are presented in a diverted tokamak configuration. The simulations are performed by exploiting the multi-component model described in Ref. [22] and considering the magnetic equilibrium of a realistic TCV discharge, that is the TCV-X21 configuration [47]. The self-consistent treatment of the interactions between five species (electrons, \(\rm D^{+}\), \(\rm D^{+}_{2}\), \(\rm D\) and \(\rm D_{2}\)) is simulated. The model takes into account the main collisional processes between plasma and neutrals, including ionization, Figure 15: Time- and toroidally-averaged heat flux on the target, for both strike points, as a function of the normalized poloidal flux function. recombination, elastic collisions, charge-exchange and molecular dissociation. We present the results from two simulations performed at different fuelling rates, obtained by changing the strength of a D\({}_{2}\) gas puff. While GBS simulations that do not include the molecular dynamics retrieve important features associated with increased fuelling [31], the introduction of molecular dynamics improves the understanding of the processes at play in high-density L-mode tokamak discharges. For instance, the relevant density of D\({}_{2}\) creates new channels of D production, mainly MAR, producing atomic deuterium at a relatively high temperature, \(T_{\rm D}\simeq 3\) eV. The increase of the neutral density yields an increased radiated power through ionization reactions, leading to power starvation in the divertor region and moving the ionization front from the targets to the region above the X-point. The high neutral density is also responsible for the increase of momentum losses, identified by the increase of charge-exchange reactions along both legs of our high-density simulations, which leads to the decrease of the ion parallel velocity. For sufficiently low plasma temperature, \(T_{e}<3\) eV, molecular dissociations and charge-exchange reactions between D\({}^{+}\) and D become the main plasma energy sink, leading the ion and electron temperatures to values close to the neutral temperature, 0.03 eV\(<T_{\rm D}<3\) eV. Because of the low temperature values and associated momentum losses, the parallel ion velocity is reduced in the high-density with respect to the low-density simulation, as observed in previous GBS simulations [31]. This yields a reduction of total heat flux at the targets, lowering both heat convection and heat conduction. In particular, the increase in fuelling causes a strong reduction of particle flux at the ISP, compatible to typical experimental observations in the detachment regime. Indeed, to our knowledge, the simulations presented in this work are the first simulations of plasma detachment that include a self-consistent treatment of plasma turbulence and neutral interactions. The analysis of the fluxes shows that the decrease of the particle flux to the wall in the high-density simulation is associated with a decreased ionization source, due to a reduced plasma temperature in the SOL. 
The asymmetries between the ISP and OSP are explained by the different local plasma temperature and molecular neutral density, in turn determined by the magnetic configuration and by the position of the gas puff, combined with the effect of the \(E\times B\) flux. In addition, the reduced plasma temperature leads to an increase of the plasma resistivity. This generates strong electric fields in the SOL. Indeed, the equilibrium profile of the parallel electric field \(E_{\parallel}\) follows the generalized Ohm's law, Eq. (39), where the contributions of the electron temperature and pressure gradients are negligible compared to the resistive term, \(\nu j_{\parallel}\). The profiles of both density and pressure at the OMP show that the increase in fuelling leads to the formation of a density shoulder and an increase of the near-SOL decay length for both quantities. We observe that the density shoulder is the result of an increase of the perpendicular transport together with a decrease of the parallel transport. Leveraging blob-tracking routines developed in past studies and recently improved, we perform a detailed investigation of filamentary transport in the SOL, comparing blob velocities with the effective velocity \(\overline{\Gamma}_{E\times B}/\overline{n}_{e}\). We observe an increase of the blob radial velocity with increased fuelling, well reproduced by the increase of the radial \(v_{E\times B}\) due to the stronger electric field in the far SOL. It is interesting to note that the increase in radial velocity was not observed in simulations where fuelling was increased without including the role of molecules and where the plasma temperature in the far SOL was higher [31]. From the analysis of our simulations we retrieve several qualitative similarities with experimental results. For instance, the increase of deuterium puffing leads to a decrease in the heat flux at the targets and a decrease in the particle flux at the inner target, up to detachment conditions [4, 5]. The asymmetry between the two targets results from the combination of the local D\({}_{2}\) and D\({}_{2}^{+}\) density [13] and the stronger \(E\times B\) drift [14]. Our results on filamentary transport reproduce the increase of radial velocity at higher density [34, 60], where faster blobs appear where a strong electric field develops in the SOL, determined by plasma resistivity at low temperature. The simulations presented herein were partially carried out on the CINECA Marconi supercomputer under the neutralGBS project, partially at the Swiss National Supercomputing Center (CSCS) under the project IDs s1028 and s1170 and partially at SuperMUC-NG thanks to a PRACE award. This work, supported in part by the Swiss National Science Foundation, has been carried out within the framework of the EUROfusion Consortium, partially funded by the European Union via the Euratom Research and Training Programme (Grant Agreement No 101052200 -- EUROfusion). The Swiss contribution to this work has been funded by the Swiss State Secretariat for Education, Research and Innovation (SERI). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union, the European Commission or SERI. Neither the European Union nor the European Commission nor SERI can be held responsible for them.
2307.01953
Toward more frugal models for functional cerebral networks automatic recognition with resting-state fMRI
We consider a machine learning task where models based on classical convolutional neural networks have shown good performance. We investigate different encoding techniques, first as supervoxels and then as graphs, to reduce the complexity of the model while tracking the loss of performance. This approach is illustrated on a recognition task of resting-state functional networks for patients with brain tumors. Graphs encoding supervoxels preserve the activation characteristics of functional brain networks from images and reduce the number of model parameters by a factor of 26 while maintaining CNN model performance.
Lukman Ismaila, Pejman Rasti, Jean-Michel Lemée, David Rousseau
2023-07-04T23:06:57Z
http://arxiv.org/abs/2307.01953v1
# Toward more frugal models for functional cerebral networks ###### Abstract We consider a machine learning task where models based on classical convolutional neural networks have shown good performance. We investigate different encoding techniques, first as supervoxels and then as graphs, to reduce the complexity of the model while tracking the loss of performance. This approach is illustrated on a recognition task of resting-state functional networks for patients with brain tumors. Graphs encoding supervoxels preserve the activation characteristics of functional brain networks from images and reduce the number of model parameters by a factor of 26 while maintaining CNN model performance. ## 1 Introduction Convolutional neural networks (CNN) are powerful tools to perform computer vision tasks. CNNs are, however, very demanding in terms of energy, data and annotation, due to the large number of parameters to be tuned during their training. These limitations are especially important in medical imaging, where the constitution of large cohorts of unhealthy patients can be a bottleneck, as frequently observed for rare diseases like brain tumors. Recently, we have shown the possibility to circumvent this limitation by the use of transfer learning from self-supervised training on healthy data to unhealthy data [1]. We used small data in our experiments, and this approach opens the possibility of scaling up when a larger model is trained on additional acquired data. This was obtained for the automatic recognition of functional cerebral networks via resting-state functional magnetic resonance imaging (rs-fMRI) [2] for patients with brain tumors. The CNN architecture proposed by Ismaila _et al._ for the classification of functional brain networks from 3D fMRI images requires a large number of trainable parameters despite the small data size [2], which makes the model complex and prone to overfitting. In this work, we test possible ways to simplify deep learning models by reducing the overall parameter size. To this purpose, we propose to compare a basic CNN method with the approach depicted in Fig. 1, based on a recent work by Gousia _et al._ which highlighted the benefits of graph encoding in optimizing CNN model parameters, especially in medical imaging [3]. We investigate various ways of encoding the rs-fMRI 3D volume data in more compact fashions and systematically compare our observations with the performance obtained in [2]. This effort only represents an initial attempt toward more efficient encoding of our brain volume images. ## 2 Database fMRI brain network activation image data of 81 healthy subjects and 55 unhealthy patients were collected. Regular volunteers provided the healthy data, while patients with brain tumors, for whom a binary mask indicates the region of the lesion in the brain, constituted the unhealthy data. The analysis was done on separate components, which creates brain maps of the regions with synchronous blood oxygen level dependent (BOLD) signal activity. In the data acquisition stage, we extracted the intrinsic connectivity networks (ICNs) by using methods that combine the information of both the temporal and spatial dimensions, such as independent component analysis. The extracted signals represent the neuro-anatomical basis for the functional networks in the brain [4].
The statistical parametric mapping (SPM) anatomy toolbox for Matlab was used to generate the 3D brain volume images from the initial spatio-temporal fMRI signals. Among the 55 ICNs processed for each patient, 7 of these signals were recognized manually by experts to be biological networks of the brain, namely the Default Mode Network (DMN), Language Network (LANG), Right Fronto-parietal Control Network (rFPCN), Left Fronto-parietal Control Network (lFPCN), Salience Network (SAL), Dorsal Attention Network (DAN) and Ventral Attention Network (VAN). The annotated images were used in two versions: full images (connectivity maps) and corresponding thresholded images. ## 3 Spatial dimension reduction One may wonder if the entire 3D volume in gray levels is fully informative for automatic recognition of the functional cerebral networks. Several dimension reduction approaches can be envisioned. From the acquired brain volumes of resting-state fMRI images (\(42\,\mathrm{px}\times 51\,\mathrm{px}\times 34\) channels), we normalized the pixel intensity range to 0-1 and computed several reduced versions of these raw data, as depicted in Fig. 2. First, one can reduce the number of spatial dimensions via a projection. We produced a 2D gray level image by performing a _Mean_ operation on pixel intensity across the axial (A) plane, as shown in Fig. 2. Secondly, to understand whether the intensity of the activation map holds discriminative information, we created 2D binary images by performing an _OR_ operation with respect to the sagittal, coronal and axial (SCA) planes respectively, which were further stacked together to provide the SCA binary stack image. Also, we performed another _OR_ operation across the axial plane to obtain a 3D binary volume image, which overall resulted in 4 variants of generated images, as illustrated in Fig. 2. Lastly, we tested if the full resolution of voxels is necessary for the classification of the functional networks, which are formed by large structures rather than fine details. To this purpose, segmentation of the gray level activation map was performed using the SLIC algorithm [5, 6]. We processed the 2D segmented labels to obtain a superpixels image, while the 3D segmented labels provided the supervoxels image, as shown in Fig. 4. Furthermore, we averaged (smoothed) the pixel intensities within each segment of our superpixels and supervoxels images. This step allows us to evaluate the integrity of the functional brain network features, which was done by training a CNN model for the classification of the 7 distinct functional brain networks using the generated superpixels/supervoxels images. \begin{table} \begin{tabular}{|l|c|c|c|} \hline **Data** & **Train-Test** & **Accuracy** & **Parameters** \\ \hline 3D gray level & 315-70 & 0.75 \(\pm\) 0.01 & 2,356,807 \\ \hline 3D binary & 315-70 & 0.66 \(\pm\) 0.02 & 2,356,807 \\ \hline 2D gray level & 315-70 & 0.68 \(\pm\) 0.01 & 2,337,799 \\ \hline 2D binary & 315-70 & 0.63 \(\pm\) 0.01 & 2,337,799 \\ \hline SCA-binary stack & 315-70 & 0.68 \(\pm\) 0.02 & 2,011,271 \\ \hline \end{tabular} \end{table} Table 1: CNN based fMRI brain network classification with unhealthy data. Figure 1: Visual abstract of our method. We consider as baseline the performance, in terms of both number of parameters and accuracy, of a CNN applied on raw rs-fMRI activation maps for functional cerebral network automatic recognition. We compare this performance with neural networks applied on compacted versions of the images. Figure 2: fMRI image dimension reduction process.
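As an illustration of the reductions described in this section, the following is a minimal Python sketch using scikit-image's SLIC implementation; the axis convention, the threshold and the SLIC parameters (`n_segments`, `compactness`) are illustrative assumptions, and `channel_axis=None` follows recent scikit-image versions (older versions use `multichannel=False`).

```python
import numpy as np
from skimage.segmentation import slic

# One rs-fMRI activation map normalized to [0, 1]; a random stand-in for
# the 42 x 51 x 34 volumes described above.
vol = np.random.default_rng(0).random((42, 51, 34))

# 2D gray-level image: Mean projection across the axial plane
# (the axis choice is an assumption about the array layout).
mean_axial = vol.mean(axis=2)

# Binary variants: an OR reduction (any voxel above a threshold) along the
# sagittal, coronal and axial directions; the threshold is illustrative.
binary = vol > 0.5
sag, cor, axi = (binary.any(axis=k) for k in (0, 1, 2))

# Supervoxels: SLIC on the gray-level volume, then replace every segment
# by its mean intensity to obtain the "smoothed" supervoxel image.
labels = slic(vol, n_segments=500, compactness=0.1, channel_axis=None)
flat = labels.ravel()
counts = np.bincount(flat)
seg_mean = np.bincount(flat, weights=vol.ravel()) / np.maximum(counts, 1)
supervoxels = seg_mean[labels]
```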
When using the dimension reduction from 3D to 2D or from gray level to binary images, we observe a performance drop, as reported in Tab. 1. This suggests that there is information in the gray level distribution and in the 3D shape of the network which is not preserved by the simple spatial dimension reductions tested. By contrast, the values in Tab. 2 represent the functional brain network classification results with a CNN model using pixels, superpixels and supervoxels data respectively. Interestingly, the loss of performance is very limited when one reduces the gray levels to the average value of the pixels inside a supervoxel or even a superpixel image. Therefore, despite the spatial dimension reduction tested, the reduction of the number of parameters in the models is so far very limited or negligible. To produce this reduction of the model, we propose to encode the most promising dimension reduction technique (supervoxels) in a compact way, as described in the next section. ## 4 Graph encoding To further benefit from the spatial dimension reduction of the previous section, we investigate the possibility to reduce the complexity of the associated neural network models with limited reduction of performance on the functional cerebral network recognition. To this purpose, we consider encoding our supervoxelized images into graphs. Commonly in graphs, interacting nodes are connected by edges whose weights can be defined by either temporal connections or anatomical junctions; graphs are naturally good at capturing the relational organization between entities, which makes them a great option for representing the 3D capture of voxelwise signals mapped to a specific region of the brain [7]. Therefore, a possibly efficient representation of these fMRI network activations in images can be tested using a graph relation network, which connects nodes of related regions via graph edges. To obtain a graph representation of our supervoxels images, we connected the segmented neighboring regions through an edge and denoted the center of each region as a graph node; segment-wise attributes were encoded as node spatial embeddings. This step was repeated until all neighboring nodes were traversed (see Fig. 5). We implemented this approach using the region adjacency graph technique [8], which simply represents each region of the segmentation as a graph node and the link between two touching regions as an edge, using the provided labels of the segmented regions [9] (a minimal sketch of this construction is given below). From the relative spatial coordinates of each superpixel, extracted via the cartesian function, we computed the node position difference (\(pos[i]-pos[j]\)) as edge attribute via a k-NN graph transformation. The number of supervoxels was fixed empirically based on the typical size of the activation spots. \begin{table} \begin{tabular}{|l|c|c|c|} \hline **Data** & **Train-Test** & **Accuracy** & **Parameters** \\ \hline 3D gray level & 315-70 & \(0.75\pm 0.01\) & 2,356,807 \\ \hline Superpixels image & 315-70 & \(0.69\pm 0.02\) & 2,356,807 \\ \hline Supervoxels image & 315-70 & \(0.73\pm 0.02\) & 2,356,807 \\ \hline \end{tabular} \end{table} Table 2: CNN classification of functional brain networks using superpixels/supervoxels images generated in the segmentation stage of the graph encoding process, with unhealthy subjects as input data. Figure 4: fMRI (LANG network) image spatial transformation. Figure 5: Graph encoding process from superpixels/supervoxels images. Figure 3: Pixel image segmentation into superpixels and supervoxels.
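The following is a minimal Python sketch (numpy + networkx) of this region-adjacency-graph encoding; it is not the paper's implementation, which relies on existing RAG utilities [8, 9] and a k-NN graph transformation, but it makes the node, edge and attribute construction explicit.

```python
import numpy as np
import networkx as nx

def region_adjacency_graph(labels, image):
    """One node per supervoxel (centroid and mean intensity as attributes),
    one edge per pair of touching supervoxels, with the relative node
    position stored on the edge as in the text (pos[i] - pos[j])."""
    g = nx.Graph()
    for lab in np.unique(labels):
        mask = labels == lab
        g.add_node(int(lab),
                   pos=np.argwhere(mask).mean(axis=0),   # segment centroid
                   mean=float(image[mask].mean()))
    # Two segments touch if their labels differ between neighbouring
    # voxels along some axis.
    for axis in range(labels.ndim):
        a = np.moveaxis(labels, axis, 0)
        diff = a[:-1] != a[1:]
        g.add_edges_from(zip(a[:-1][diff].tolist(), a[1:][diff].tolist()))
    # Relative positions as edge attributes (sign is arbitrary on an
    # undirected graph).
    for i, j in g.edges:
        g.edges[i, j]["rel_pos"] = g.nodes[i]["pos"] - g.nodes[j]["pos"]
    return g

# e.g. on the output of the previous sketch:  G = region_adjacency_graph(labels, vol)
```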
The graphs resulting from the encoding stage were observed to be structurally indistinguishable from the connectivity point of view. The contrastive information is expected to stand in the distribution of edge values, which differs from one structural network map to another. We implemented our method using SplineCNN, a graph neural network which uses a novel type of spline-based convolutional layer for learning [10]. This state-of-the-art GNN is suitable for image-based graph classification tasks because it allows the capture of local patterns using the spatial relationship between graph nodes, before performing global graph pooling. We trained our model with 2 convolutional layers and 2 fully connected output layers, with 7 classes in the output layer and a softmax activation. Best results were obtained by training with a 2-step learning rate schedule of \(1e-3\) for epochs \(0-200\) and \(1e-5\) for epochs \(200-500\), with early stopping. For a fair comparison with the best result obtained with the CNN model in [11], we performed transfer learning during the training of the CNN and GNN models using an 80% - 10% - 10% ratio for the train-validation-test data split, as well as early stopping with patience set to 10. The performance provided in Tab. 3 shows the recorded results of the fMRI functional network classification using this transfer learning strategy. Brute transfer indicates the strategy of training directly on healthy data and testing on unhealthy data, for both CNN and GNN models. In this cohort, results were compared with values from training and testing on unhealthy data using the CNN and GNN models, which provided the \(1^{st}\) baseline and \(2^{nd}\) baseline values of \(0.75\pm 0.01\) and \(0.64\pm 0.03\) respectively, while \(0.78\pm 0.01\) and \(0.70\pm 0.01\) were recorded in the transfer learning approach with CNN and GNN respectively. As a consequence, we demonstrate the possibility to obtain a compression by a factor of 26 of the number of model parameters after supervoxelization and graph encoding, with only an \(8\%\) reduction in accuracy. ## 5 Conclusion In this study, we investigated ways to reduce the complexity of end-to-end machine learning models based on convolutional neural networks for the automatic recognition of functional cerebral networks via resting-state fMRI data. A compaction of the activation maps into superpixels or supervoxels shows limited impact on the classification performance. We emphasize the influence of our 3D multi-channel images on the number of model parameters, which motivated the exploration of a dimension reduction technique before introducing the graph encoding technique. The spatial dimension reduction alone was found to have a minimal influence on the model parameter count. However, this stage was an important step toward a more efficient data encoding (graph structure), which was later shown to significantly reduce the number of model parameters. Our initial encoding effort produces a compression by a factor of \(26\times\), with an associated reduction in performance of only \(8\%\). The effort to reduce the complexity of the models was concentrated on the encoding approach of our fMRI data. It would naturally be interesting to couple such an effort with an investigation of the architecture of the models [12, 13].
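As a sketch of the classifier just described, assuming PyTorch Geometric: the layer widths, kernel size and ELU activations below are illustrative choices, not the authors' exact configuration; the paper only specifies 2 convolutional layers, 2 fully connected layers and a 7-class softmax output.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import SplineConv, global_mean_pool

class FunctionalNetworkGNN(torch.nn.Module):
    """2 spline convolutions + 2 fully connected layers with a 7-class
    output; edge pseudo-coordinates (dim=3, relative 3D node positions
    rescaled to [0, 1]) are assumed to be provided as edge_attr."""
    def __init__(self, in_dim=1, hidden=32, num_classes=7):
        super().__init__()
        self.conv1 = SplineConv(in_dim, hidden, dim=3, kernel_size=5)
        self.conv2 = SplineConv(hidden, hidden, dim=3, kernel_size=5)
        self.lin1 = torch.nn.Linear(hidden, hidden)
        self.lin2 = torch.nn.Linear(hidden, num_classes)

    def forward(self, x, edge_index, edge_attr, batch):
        x = F.elu(self.conv1(x, edge_index, edge_attr))
        x = F.elu(self.conv2(x, edge_index, edge_attr))
        x = global_mean_pool(x, batch)          # global graph pooling
        return F.log_softmax(self.lin2(F.elu(self.lin1(x))), dim=-1)
```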
2303.12566
Computing quadratic points on modular curves $X_0(N)$
In this paper we improve on existing methods to compute quadratic points on modular curves and apply them to successfully find all the quadratic points on all modular curves $X_0(N)$ of genus up to $8$, and genus up to $10$ with $N$ prime, for which they were previously unknown. The values of $N$ we consider are contained in the set \[ \mathcal{L}=\{58, 68, 74, 76, 80, 85, 97, 98, 100, 103, 107, 109, 113, 121, 127 \}.\] We obtain that all the non-cuspidal quadratic points on $X_0(N)$ for $N\in \mathcal{L}$ are CM points, except for one pair of Galois conjugate points on $X_0(103)$ defined over $\mathbb{Q}(\sqrt{2885})$. We also compute the $j$-invariants of the elliptic curves parametrised by these points, and for the CM points determine their geometric endomorphism rings.
Nikola Adžaga, Timo Keller, Philippe Michaud-Jacobs, Filip Najman, Ekin Ozman, Borna Vukorepa
2023-03-22T13:49:04Z
http://arxiv.org/abs/2303.12566v2
# Computing quadratic points on modular curves \(X_{0}(N)\) ###### Abstract. In this paper we improve on existing methods to compute quadratic points on modular curves and apply them to successfully find all the quadratic points on all modular curves \(X_{0}(N)\) of genus up to \(8\), and genus up to \(10\) with \(N\) prime, for which they were previously unknown. The values of \(N\) we consider are contained in the set \[\mathcal{L}=\{58,68,74,76,80,85,97,98,100,103,107,109,113,121,127\}.\] We obtain that all the non-cuspidal quadratic points on \(X_{0}(N)\) for \(N\in\mathcal{L}\) are CM points, except for one pair of Galois conjugate points on \(X_{0}(103)\) defined over \(\mathbb{Q}(\sqrt{2885})\). We also compute the \(j\)-invariants of the elliptic curves parametrised by these points, and for the CM points determine their geometric endomorphism rings. Key words and phrases: Modular curves, quadratic points, elliptic curves, symmetric Chabauty, Mordell-Weil sieve, Jacobians 2020 Mathematics Subject Classification: 11G05, 14G05, 11G18. T. K. was supported by the Deutsche Forschungsgemeinschaft (DFG), Projektnummer STO 299/18-1, AOBJ: 667349 while working on this article. P. M. is supported by an EPSRC studentship EP/R513374/1 and has previously used the surname Michaud-Rodgers. F. N. and B. V. are supported by QuantiXLie Centre of Excellence, a project co-financed by the Croatian Government and European Union through the European Regional Development Fund - the Competitiveness and Cohesion Operational Programme (Grant KK.01.1.1.01.0004). E. O. is supported by TUBITAK Project 122F413. **Conjecture 1.1** (Quadratic isogenies conjecture).: _There exists an integer \(C\) such that if \(K\) is a quadratic field and \(N>C\) is an integer, then any \(P\in X_{0}(N)(K)\) is either a cusp or a CM-point._ We emphasize that the constant \(C\) in this conjecture does not depend on the quadratic field \(K\). Conjecture 1.1 implies the conjecture of Elkies which states that the set of \(N\) such that \(X_{0}^{+}(N)(\mathbb{Q})\) contains points that are neither CM nor cusps is finite (see [16]). While there are no general bounds as in Conjecture 1.1, there has been much more success in classifying all the quadratic points on \(X_{0}(N)\) for fixed values of \(N\). Bruin and Najman described all the quadratic points on hyperelliptic \(X_{0}(N)\) such that the Jacobian \(J_{0}(N)\) has rank \(0\) over \(\mathbb{Q}\) [12]. Ozman and Siksek then determined all the quadratic points on the non-hyperelliptic \(X_{0}(N)\) with \(J_{0}(N)\) having rank \(0\) over \(\mathbb{Q}\) and with genus \(\leqslant 5\) [34]. Box determined the quadratic points on all the \(X_{0}(N)\) with genus \(\leqslant 5\) and with \(J_{0}(N)\) having positive rank over \(\mathbb{Q}\) [10]. Najman and Vukorepa described all the quadratic points on the bielliptic \(X_{0}(N)\) (for which this had not already been done in previous work) [33]. We provide a more detailed overview of all known results in Section 3.1. In this paper we complete the determination of all the quadratic points on the modular curves \(X_{0}(N)\) of genus up to \(8\), and for \(X_{0}(N)\) with \(N\) prime of genus up to \(10\). The motivation for this is threefold. Firstly, the results (i.e. the tables with explicit quadratic points) are useful as they have applications to Diophantine equations.
In order to apply the so-called 'Modular Approach' over a given real quadratic field \(K\) (see for instance [17] for details), one requires the irreducibility of the mod \(p\) representation of a Frey elliptic curve defined over \(K\). This Frey elliptic curve often has a \(2\)- or \(3\)-isogeny defined over \(K\) too. Therefore, if the mod \(p\) representation is reducible, then the Frey curve gives a point on \(X_{0}(2p)(K)\) or \(X_{0}(3p)(K)\). Thus having a complete understanding of quadratic points is useful in establishing irreducibility for small values of \(p\). These types of results have been useful for instance in [17, 24, 29]. Secondly, another motivation for these computations is to give evidence for Conjecture 1.1 by showing that as \(N\) (and hence the genus of \(X_{0}(N)\)) gets larger, the non-cuspidal non-CM quadratic points become rarer. Finally, a further motivation was to implement state-of-the-art techniques for determining quadratic points in a general and efficient way. The modular curves we study provide a natural testing ground for this. Let \[\mathcal{L}:=\{58,68,74,76,80,85,97,98,100,103,107,109,113,121,127\}. \tag{1}\] Our main result is the complete determination of the non-cuspidal quadratic points on \(X_{0}(N)\) for \(N\in\mathcal{L}\). **Theorem 1.2**.: _Let \(N\in\mathcal{L}\). The finitely many non-cuspidal quadratic points on the curve \(X_{0}(N)\) are displayed in the tables in Section 4._ We note that every quadratic point on \(X_{0}(N)\) for \(N\in\mathcal{L}\) is either a cusp or a CM point, apart from a pair of quadratic points defined over the field \(\mathbb{Q}(\sqrt{2885})\) on the curve \(X_{0}(103)\). For all \(N\in\mathcal{L}\) the genus of \(X_{0}(N)\) is between \(6\) and \(10\). Together with the previous results of [12, 34, 10, 7, 33, 6, 38] this completes the classification of quadratic points on the modular curves \(X_{0}(N)\) of genus up to \(8\), and the classification of quadratic points on \(X_{0}(N)\) of genus up to \(10\) with \(N\) prime. We use three methods to determine the quadratic points on these curves: * The 'going down' method, * The 'rank \(0\)' method, * The 'Atkin-Lehner sieve' method. The 'going down' method can be applied in certain cases when the quadratic points on \(X_{0}(M)\), for some proper divisor \(M\) of \(N\), have been previously determined. We can often reduce the problem to determining the rational points on several Atkin-Lehner quotients of several modular curves, and checking whether finitely many quadratic points on \(X_{0}(M)\) lift to quadratic points on \(X_{0}(N)\). More details about the 'going down' method are given in Section 3.2. The 'rank 0' method, which is explained in more detail in Section 3.3, can be used when the Mordell-Weil group \(J_{0}(N)(\mathbb{Q})\) has rank 0. It uses a type of Mordell-Weil sieve to determine the quadratic points on \(X_{0}(N)\). The 'Atkin-Lehner sieve' method is the most involved of the three methods. It uses a Mordell-Weil sieve involving an Atkin-Lehner involution on the curve \(X_{0}(N)\), together with a symmetric Chabauty criterion. This method is based on ideas of Siksek [36], which were later further developed in [33, 10]. A successful application of the sieve reduces the problem to considering fixed points of an Atkin-Lehner involution and the rational points on a given Atkin-Lehner quotient. A more detailed description of this method can be found in Section 3.4.
Although the methods we use are similar to previously used methods, a major reason why we are able to extend these methods to consider curves of higher genus than were previously studied is thanks to the new techniques we introduce (in Section 2) to work with models and maps of curves \(X_{0}(N)\) and their quotients. In particular, we work with models of \(X_{0}(N)\) on which the action of the Atkin-Lehner involutions is simultaneously diagonalised, allowing us to easily compute quotient maps and see the relationships between quadratic points. We also improve on existing methods for computing equations for the \(j\)-map, as well as introduce a fast method for testing nonsingularity at a given prime. We also note that our algorithms offer significant computational speed-ups when applied to certain previously computed levels (see Remark 3.9). The Magma [9] code used to support and verify the computations in this paper is available at [https://github.com/TimoKellerMath/QuadraticPoints](https://github.com/TimoKellerMath/QuadraticPoints) ### Acknowledgements We would like to thank the organisers of the March 2022 _Modular curves workshop_ at MIT (hybrid) for bringing the authors together and providing a starting point for this project. We would also like to thank Jennifer Balakrishnan, Shiva Chidambaram, Pip Goodman, David Holmes, Jeremy Rouse, Samir Siksek, Michael Stoll, and Damiano Testa for useful discussions. ## 2. Equations for \(X_{0}(N)\) and associated maps One of the main barriers to extending the work of classifying quadratic points on the curves \(X_{0}(N)\) arises from the computational difficulties of working with curves of high genus. In order to use each of our three methods to compute quadratic points, it will be imperative to work with suitable models of the curves \(X_{0}(N)\) and their quotients. In this section, we describe how to obtain such models, maps to Atkin-Lehner quotients, and equations for the \(j\)-map. We also describe a method for verifying whether a prime is of good or bad reduction for a given model. The methods and techniques we present in this section are those required for the computations in this paper, but only make up a subset of the totality of the code that is available in the accompanying Magma files. The code also contains functions that can be used to compute maps to curves \(X_{0}(M)\) with \(M\mid N\), modular parametrisation maps, quotients by subgroups of the Atkin-Lehner group, and more. ### Models of \(X_{0}(N)\) Let \(N\) be such that \(X_{0}(N)\) is non-hyperelliptic of genus \(g\geqslant 2\). A smooth model for the curve \(X_{0}(N)\) in \(\mathbb{P}^{g-1}\) may be obtained as the image of the canonical embedding on a basis of cusp forms in \(S_{2}(N)\). This (now standard) process for obtaining models is described in detail in [18, pp. 17-38], and the Magma code we used to do this is adapted from [34]. Although any basis of cusp forms for \(S_{2}(N)\) may be used to obtain a model of \(X_{0}(N)\), choosing certain bases can produce better models for our purposes. We work with _diagonalised bases_ of cusp forms, which give rise to _diagonalised models_ for \(X_{0}(N)\). We now describe what these are. For \(M\mid N\) such that \(\gcd(M,N/M)=1\), we denote by \(w_{M}\) the corresponding Atkin-Lehner involution. The set of all Atkin-Lehner involutions forms an abelian \(2\)-group, which we denote \(W\).
We fix a basis \(\{h_{1},\ldots,h_{g}\}\) for \(S_{2}(N)\) and we may then represent the elements of \(W\) as matrices acting on \(S_{2}(N)\) with respect to this basis. The set of all matrices in \(W\) is simultaneously diagonalisable, meaning there exists a matrix \(T\) such that \(TwT^{-1}\) is a diagonal matrix for each \(w\in W\). Applying this change of basis matrix to the basis \(\{h_{1},\ldots,h_{g}\}\) produces a new basis \(\{f_{1},\ldots,f_{g}\}\) of \(S_{2}(N)\), for which the Atkin-Lehner involutions now act diagonally. That is, for each \(w\in W\) and \(1\leqslant i\leqslant g\), we have \(w(f_{i})=\delta_{w,i}f_{i}\), where \(\delta_{w,i}=\pm 1\). After obtaining a diagonalised basis, we may then obtain a model for \(X_{0}(N)\) in \(\mathbb{P}^{g-1}_{x_{1},\ldots,x_{g}}\) via the image of the canonical embedding. The Atkin-Lehner involutions on this model then act as they do on the \(f_{i}\), namely \(w(x_{1}:\cdots:x_{g})=(\delta_{w,1}x_{1}:\cdots:\delta_{w,g}x_{g})\). There are three main reasons for working with diagonalised models. First and foremost, we will make use of the maps from \(X_{0}(N)\) to its Atkin-Lehner quotients, and these maps will be straightforward to compute and work with, as we see in the following subsection. Secondly, we found that working with a diagonalised basis allowed us to quickly compute the image of the canonical embedding and produce equations for \(X_{0}(N)\) with small coefficients. Finally, we found that the diagonalised models we produced usually had good reduction at small primes not dividing \(2N\), which is important in the application of the sieving methods we use. We note that a diagonalised model of a non-split Cartan curve is used in [30, pp. 254-255], and that diagonalised bases of cusp forms for \(N=p\) prime are used in [33]. In each of these cases, there is a single involution. ### Maps to Atkin-Lehner quotients Let \(\{f_{1},\ldots,f_{g}\}\) be a diagonalised basis for \(S_{2}(N)\) and assume that we have obtained the corresponding diagonalised model for \(X_{0}(N)\) in \(\mathbb{P}^{g-1}_{x_{1},\ldots,x_{g}}\) on which each Atkin-Lehner involution \(w\in W\) acts as \(w(x_{i})=\delta_{w,i}x_{i}\), with \(\delta_{w,i}=\pm 1\). For each \(w\in W\), we wish to compute a model for the quotient curve \(X_{0}(N)/w\), as well as a map from \(X_{0}(N)\) to this quotient. For each \(N\) and \(w\) we considered, the genus of \(X_{0}(N)/w\) was \(\geqslant 2\). _Case 1: The quotient \(X_{0}(N)/w\) is non-hyperelliptic._ In this case, since the genus of \(X_{0}(N)/w\) is \(\geqslant 3\), the curve \(X_{0}(N)/w\) may be canonically embedded. We choose the indices \(i\) for which \(w(f_{i})=f_{i}\), which we denote by \(i_{1},\ldots,i_{t}\), and compute the image of the canonical embedding on the cusp forms \(\{f_{i_{1}},\ldots,f_{i_{t}}\}\). This gives a model for \(X_{0}(N)/w\) in \(\mathbb{P}^{t-1}\), and the map from \(X_{0}(N)\) to \(X_{0}(N)/w\) is simply given by the projection map \((x_{1}:\cdots:x_{g})\mapsto(x_{i_{1}}:\cdots:x_{i_{t}})\). _Case 2: The quotient \(X_{0}(N)/w\) is hyperelliptic._ In this case the curve \(X_{0}(N)/w\) cannot be canonically embedded. Instead, we choose a set of indices \(1\leqslant i_{1},\ldots,i_{t}\leqslant g\) for which either \(\delta_{w,i_{j}}=1\) for \(1\leqslant j\leqslant t\), or \(\delta_{w,i_{j}}=-1\) for \(1\leqslant j\leqslant t\). In this way, the coordinates \(x_{i_{1}},\ldots,x_{i_{t}}\) form a subset of coordinates on which \(w\) acts trivially in projective space.
We then project onto these coordinates and consider the image of this projection map. If this image is a curve of the expected genus (or if the projection map is of degree \(2\) onto its image), then we have obtained a projective model for \(X_{0}(N)/w\) in \(\mathbb{P}^{t-1}\). If this is not the case, then we have in fact obtained a model for a (non-trivial) quotient of \(X_{0}(N)/w\), in which case we can try to repeat this process with a different set of indices \(1\leqslant i_{1},\ldots,i_{t}\leqslant g\). If we succeed in obtaining a model for \(X_{0}(N)/w\), we may then apply a transformation to take this model to a standard model for this hyperelliptic curve in weighted projective space. We note that choosing a larger set of admissible indices \(\{i_{1},\ldots,i_{t}\}\) increases the likelihood of success, but also gives a more complicated quotient map to the standard hyperelliptic model. The method presented in Case 2 succeeded for all values of \(N\) and Atkin-Lehner quotients we considered. However, if no suitable set of indices can be found, then it is also possible to use Magma's inbuilt CurveQuotient function (which we found to be slower and produced more complicated maps in the cases we tested). ### Equations for the \(j\)-map Assume we have a model for \(X_{0}(N)\) obtained as the image of the canonical embedding on the cusp forms \(\{f_{1},\ldots,f_{g}\}\in S_{2}(N)\). We describe how to obtain equations for the \(j\)-map, \(j\colon X_{0}(N)\to X_{0}(1)\cong\mathbb{P}^{1}\). Obtaining these equations will allow us to compute the cusps on the models of our curves \(X_{0}(N)\), compute the \(j\)-values of any quadratic points we compute, and finally compute quadratic points with a given \(j\)-invariant obtained using the 'going down' method. The methods we present here are in fact applicable in a wider setting. In particular, they can be used to compute modular parametrisation maps, as well as maps to modular curves \(X_{0}(M)\) with \(M\mid N\). We start by following the method outlined in [34, pp. 2464-2466]. As a modular function, we have the \(q\)-expansion \[j(q)=\frac{1}{q}+744+196884q+21493760q^{2}+\cdots.\] Using linear algebra, we compute homogeneous polynomials \(F,G\in\mathbb{Q}[X_{1},\ldots,X_{g}]\) of the same degree \(r\), such that \[\frac{F(f_{1}(q),\ldots,f_{g}(q))}{G(f_{1}(q),\ldots,f_{g}(q))}=j(q) \tag{2}\] up to some precision, say \(O(q^{5N})\). We then need to verify that this equality holds up to arbitrary precision. At this point, one would usually attempt to verify this by checking the equality up to the Sturm bound, but since \(j(q)\) is not a modular form (i.e. not holomorphic), this is not possible. One option is to use the fact that \(j(q)=E_{4}(q)^{3}/\Delta(q)\) and consider \(F\Delta-GE_{4}^{3}\), which is a modular form of weight \(12+\deg(F)\). The Sturm bound will be quite large in this case, and this method will not generalise. Instead, applying the argument in [34, p. 2466], it is enough to show that the equality (2) holds up to precision \(O(q^{d_{F/G}+d_{j}+1})\). Here, \(d_{F/G}\) and \(d_{j}\) are the degrees of \(F/G\) and \(j\) respectively as rational functions on \(X_{0}(N)\). The degree \(d_{j}\) can be computed solely in terms of \(N\). In [34, p. 2466], the degree \(d_{F/G}\) is explicitly computed, and once it has been verified that \(d_{F/G}=d_{j}\), it is enough to check the equality (2) up to precision \(O(q^{2d_{j}+1})\). 
However, computing \(d_{F/G}\) is slow, and computationally infeasible for the curves of high genus that we need to work with. Instead, we will find an upper bound on \(d_{F/G}\). **Lemma 2.1**.: _Let \(F/G\) be a rational function on a canonically embedded curve \(X\subset\mathbb{P}^{g-1}\), with \(F\) and \(G\) homogeneous polynomials of degree \(r\). Then_ \[\deg(F/G)\leqslant(2g-2)\cdot r.\] Proof.: We have that \[\deg(F/G)=\deg(\operatorname{div}_{0}(F/G))\leqslant\deg(\operatorname{div}(F)),\] where \(\operatorname{div}(F)\) and \(\operatorname{div}_{0}(F)\) denote the divisor of \(F\) and the divisor of zeros of \(F\) respectively. Then \[\operatorname{div}(F)=\operatorname{div}(X\cap\{F=0\}).\] This is the intersection of a curve and a hypersurface. By Bezout's theorem (as stated in [35, pp. 167-168] for example), the degree of this intersection divisor is \(\deg(X)\cdot\deg(\{F=0\})=(2g-2)\cdot r\). Thanks to this lemma and using the formula \(d_{j}=N\prod_{p\mid N}\left(1+\frac{1}{p}\right)\), it suffices to check (2) up to precision \(O(q^{m})\), where \[m=((2g-2)\cdot r)+1+N\cdot\prod_{p\mid N}\Big(1+\frac{1}{p}\Big).\] The usual way of finding suitable homogeneous polynomials \(F\) and \(G\) of the same degree \(r\) is to test values \(r=2,3,4,\dots\) until a degree that works is reached. Since \(r\) can be somewhat large, this can be very slow. In order to start at a suitable value of \(r\), we use the following lemma, communicated to us by Jeremy Rouse. We note that this lemma aids us in choosing a value of \(r\) that will work from the outset, but its correctness is in fact irrelevant for our computations. **Lemma 2.2**.: _Let \(X\) be a curve of genus \(g\geqslant 2\) and let \(\varphi\colon X\to\mathbb{P}^{1}\) be a map of degree \(d\). Define \(r\) to be the smallest positive integer larger than \(\frac{d}{2(g-1)}+\frac{1}{2}\). Then \(\varphi\) may be expressed as a ratio of two holomorphic differential \(r\)-forms on \(X\)._ In our set-up, any \(r\)-fold product of the cusp forms \(f_{i}\) (which is a modular form of weight \(2r\)) corresponds to a holomorphic differential \(r\)-form on \(X_{0}(N)\), and so we may seek polynomials \(F\) and \(G\) of degree \(r\), as defined in this lemma. Proof.: Write \(\operatorname{div}(\varphi)=D_{0}-D\), where \(D_{0}\) and \(D\) are effective divisors of degree \(d\) (the divisors of zeros and poles of \(\varphi\) respectively). We aim to show that there exists a holomorphic differential \(r\)-form, \(h\), such that \(f=\varphi h\) is also a holomorphic differential \(r\)-form. Then \(\varphi=f/h\) as required. It will suffice to find a form \(h\) such that \(\operatorname{div}_{0}(h)\geqslant D\). We define (for any divisor \(D^{\prime}\) on \(X\)), \[\mathcal{L}^{(r)}(D^{\prime})=\{\text{meromorphic differential $r$-forms, $\mu$ }|\,\operatorname{div}(\mu)\geqslant-D^{\prime}\}.\] It will be enough to prove that \(\dim\left(\mathcal{L}^{(r)}(-D)\right)>0\). Let \(K\) be a canonical divisor on \(X\). Then by [31, p. 238], we have an isomorphism of vector spaces \(\mathcal{L}^{(r)}(-D)\cong\mathcal{L}(-D+rK).\) By Riemann-Roch, \[\dim\left(\mathcal{L}(-D+rK)\right)\geqslant\deg(-D+rK)-g+1=-d+r(2g-2)-g+1.\] This is positive if and only if \(r>\frac{g+d-1}{2(g-1)}=\frac{d}{2(g-1)}+\frac{1}{2}\), as required. We note that the value of \(r\) obtained by applying this lemma is not necessarily the minimal \(r\) one can choose.
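Since the bound only depends on \(N\), \(g\) and \(r\), it is easy to tabulate. The following Python sketch (using sympy only for factorization) computes the degree \(r\) given by Lemma 2.2 together with the resulting precision; it is only an illustration of the arithmetic, not part of the accompanying Magma code.

```python
from math import floor
from sympy import primefactors

def degree_of_j(N):
    # deg(j) on X_0(N) equals the index of Gamma_0(N) in SL_2(Z) (mod +-1),
    # i.e. N * prod_{p | N} (1 + 1/p), which is always an integer.
    d = N
    for p in primefactors(N):
        d = d * (p + 1) // p
    return d

def precision_bound(N, g):
    """Degree r from Lemma 2.2 and the precision O(q^m) that suffices
    to verify equation (2) for X_0(N) of genus g."""
    d = degree_of_j(N)
    r = floor(d / (2 * (g - 1)) + 0.5) + 1  # smallest integer > d/(2(g-1)) + 1/2
    m = (2 * g - 2) * r + 1 + d
    return r, m

print(precision_bound(103, 8))  # X_0(103) has genus 8
```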
In the cases we tested, however, we found that this value of \(r\) was usually the minimum possible, or very close to it. ### Testing nonsingularity In order to apply the 'rank 0' and 'Atkin-Lehner sieve' methods we use for computing quadratic points, it is crucial to be able to verify whether or not, for a given prime \(p\), the reduction of a model for \(X_{0}(N)\) mod \(p\) is singular. If the model is singular mod \(p\), then Magma's inbuilt IsSingular command will determine this right away. However, if the model is nonsingular mod \(p\), then for curves of genus \(\geqslant 9\) we found it was very slow to check this directly. Instead we use the following lemma to verify nonsingularity. The idea of using this method was suggested to us by David Holmes. **Lemma 2.3**.: _Let \(X\) be a nonsingular projective model for a geometrically irreducible curve \(Y\) over \(\mathbb{Q}\) and let \(p\) be a prime. Denote by \(\tilde{X}\) the reduction of \(X\) mod \(p\) and suppose that \(\tilde{X}\) is an integral (i.e. reduced and irreducible) curve. If \(\tilde{X}\) has a nonsingular \(\mathbb{F}_{p}\)-point and the (geometric) genus of \(\tilde{X}\) equals the genus of \(Y\), then \(X\) has good reduction at \(p\)._ Proof.: Since \(\tilde{X}\) is integral and has a nonsingular \(\mathbb{F}_{p}\)-point, it is geometrically integral. The arithmetic genus of \(\tilde{X}\) matches the arithmetic genus of \(X\), which is the genus of \(Y\) (since \(X\) is a nonsingular model for \(Y\)). It follows that the arithmetic genus and geometric genus of \(\tilde{X}\) are equal. We now apply [37, Lemma 0CE4] to the normalisation of \(\tilde{X}\) to conclude that \(\tilde{X}\) is nonsingular. We note that verifying that the (geometric) genus of \(\tilde{X}\) equals the genus of \(X_{0}(N)\) is a fast computation, since Magma works with the function field of the curve to do this and there is a formula for the genus of \(X_{0}(N)\). ## 3. Computing quadratic points In this section we start by providing an overview of the known results on quadratic points on \(X_{0}(N)\). We then introduce the three methods we use to study quadratic points on \(X_{0}(N)\), and apply them to prove Theorem 1.2. ### Overview of previously studied \(X_{0}(N)\) and methods We start by providing an overview of the known results on quadratic points on \(X_{0}(N)\). We say a point \(Q\) is a _quadratic point_ on \(X_{0}(N)\) if \(Q\in X_{0}(N)(K)\setminus X_{0}(N)(\mathbb{Q})\) for a quadratic field \(K\). We will usually consider a quadratic point together with its Galois conjugate, \(Q^{\sigma}\), where \(\sigma\) denotes the non-trivial element of \(\operatorname{Gal}(K/\mathbb{Q})\). A pair of quadratic points gives rise to a rational point on the symmetric square of \(X_{0}(N)\), which we write as an effective degree \(2\) divisor \(Q+Q^{\sigma}\in X_{0}(N)^{(2)}(\mathbb{Q})\). We recall that a smooth projective curve \(X/\mathbb{Q}\) of genus \(\geqslant 2\) has infinitely many quadratic points (as we range over all quadratic fields) if and only if it is hyperelliptic or if it is bielliptic with a degree \(2\) morphism \(X\to E\) where \(E/\mathbb{Q}\) is an elliptic curve of positive rank over \(\mathbb{Q}\) [20]. In these cases, when we say that the quadratic points have been classified, we mean that any quadratic points not arising as part of an infinite (geometric) family have been computed.
So far, thanks to the work of numerous people across many papers, the quadratic points have been classified on many modular curves \(X_{0}(N)\) with genus \(g(X_{0}(N))\geqslant 2\). We list the results in chronological order: (i) The hyperelliptic \(X_{0}(N)\) with \(\operatorname{rk}J_{0}(N)(\mathbb{Q})=0\), see [12]: This occurs for \[N\in\{22,23,26,28,29,30,31,33,35,39,40,41,46,47,48,50,59,71\}.\] (ii) The non-hyperelliptic \(X_{0}(N)\) with \(g(X_{0}(N))\leqslant 5\) and \(\operatorname{rk}J_{0}(N)(\mathbb{Q})=0\), see [34]. This occurs for \[N\in\{34,38,42,44,45,51,52,54,55,56,63,64,72,75,81\}.\] (iii) The \(X_{0}(N)\) with \(g(X_{0}(N))\leqslant 5\) and \(\operatorname{rk}J_{0}(N)(\mathbb{Q})>0\), see [10]. This occurs for \[N\in\{37,43,53,61,57,65,67,73\}.\] (iv) The bielliptic \(X_{0}(N)\) which have not been already dealt with in (i)-(iii), see [33]. These values are \[N\in\{60,62,69,79,83,89,92,94,95,101,119,131\}.\] (v) Some \(X_{0}(N)\) that were interesting for other reasons: the case \(N=77\) was solved in [6], \(N=91\) in [38] and the cases \(N=125\) and \(169\) in [7]. Two broad methods are used to obtain these results. The first is by applying some kind of Mordell-Weil sieve (with different variations according to the properties of the curve \(X_{0}(N)\)), and the second is the 'going down' method mentioned in the introduction. In order to prove Theorem 1.2 we will use three distinct methods, two of which (namely 'rank \(0\)' and the 'Atkin-Lehner sieve') make use of a Mordell-Weil sieve. We split the set \(\mathcal{L}\) (defined in (1)) into three distinct sets, presented in Table 1, according to which method we use. We note that more than one of the methods we use could, in principle, be applied for certain levels \(N\in\mathcal{L}_{1}\cup\mathcal{L}_{2}\). We also note that our methods can be used to compute the set of quadratic points for many more levels than we have presented here, although we felt that \(15\) levels was a suitable number with which to display our techniques. The interested reader may consult the accompanying Magma files where we also ran our code for several other levels up to genus \(12\). ### Going down When the quadratic points on \(X_{0}(M)\) have already been classified, it is often beneficial to use this classification to obtain all the quadratic points on \(X_{0}(N)\) for \(N=dM\), where \(d>1\) is an integer. We have two cases: 1) when \(X_{0}(M)\) has finitely many quadratic points, which have all been found, and 2) when \(X_{0}(M)\) has infinitely many quadratic points, and all of them have been described, in the sense that all but finitely many are pullbacks of rational points on some quotient, while the remaining finitely many that are not pullbacks have been explicitly listed. The case 1) is of course easier: one need only check whether each of the elliptic curves corresponding to a quadratic point on \(X_{0}(M)\) gives rise to a quadratic point on \(X_{0}(N)\). The cases \(M=34\) and \(38\), with \(d=2\) in each case, fall into this category. In case 2) it is necessary for \(M\) and \(d\) to be coprime as we will use [33, Proposition 2.2]. If this is satisfied, then the problem reduces to determining the rational points on several quotients of modular curves. The case \((d,M)=(2,29)\) falls into this category. We aim to prove the following proposition. **Proposition 3.1**.: _Let \(N\in\mathcal{L}_{1}=\{58,68,76\}\). The finitely many non-cuspidal quadratic points on the curve \(X_{0}(N)\) are displayed in the tables in Section 4.
Note that all non-cuspidal quadratic points have complex multiplication._ Proof.: Let \(N\in\{58,68,76\}\), let \(M=N/2\), and let \(P\in X_{0}(N)(K)\) be a quadratic point. The image of \(P\) in \(X_{0}(M)(K)\) has the same \(j\)-invariant as \(P\). For \(N=68\) and \(76\), the finitely many quadratic points on \(X_{0}(M)\), together with their \(j\)-invariants, have been classified in [34]. This provides us with a finite list of possible pairs \((j,L)\) of \(j\)-invariants and quadratic fields such that \((j(P),K)=(j,L)\). In the case \(M=34\), there are two pairs of non-CM points (denoted \(P_{5}\) and \(P_{6}\) in Table 8.1 of [34]). It can be seen that the corresponding elliptic curves do not admit a cyclic \(4\)-isogeny, as otherwise this isogeny class (over \(L\)) would need to contain at least \(6\) curves with a \(34\)-isogeny. For each remaining pair \((j,L)\), we check using a model for \(X_{0}(N)\) and equations for the \(j\)-map whether there exist any points in \(X_{0}(N)(L)\) with \(j\)-invariant \(j\). Next, we consider the case \(N=58\). By [12], we know that any elliptic curve with a \(29\)-isogeny over a quadratic field either corresponds to one of the finitely many points with exceptional \(j\)-invariants (listed in [12, Table 5]), or is a \(\mathbb{Q}\)-curve of degree \(29\) which in addition has a \(2\)-isogeny. We first check that none of the elliptic curves with the exceptional \(j\)-invariants has a point of order \(2\), and so the image of \(P\) in \(X_{0}(M)(K)\) must correspond to a \(\mathbb{Q}\)-curve of degree \(29\) with a \(2\)-isogeny. We apply [33, Proposition 2.2] and conclude that \(P\) corresponds to a rational point on either \(X_{0}(58)/w_{29}\) or \(X_{0}^{+}(116)\), or that \(P\) is a CM point. Applying the classical Chabauty-Coleman method using Magma's inbuilt function to compute the \(\mathbb{Q}\)-rational points on the genus \(2\) curve \(X_{0}(58)/w_{29}\) (whose Jacobian has rank \(1\) over \(\mathbb{Q}\)), we obtain that it has precisely \(8\) rational points. We compute the pullbacks of rational points on \(X_{0}(58)/w_{29}\) and obtain the \(4\) rational cusps and \(6\) pairs of quadratic points on \(X_{0}(58)\), all of which correspond to CM curves. These are the points \(P_{1},\ldots,P_{6}\) in Table 2. Next, the non-cuspidal rational points on \(X_{0}^{+}(116)\) correspond to CM points by [32, Theorem 0.1], and so \(P\) must be a CM point. From the data associated with the paper [13], we obtain five possible pairs \((j,L)\) of \(j\)-invariants \(j\) and quadratic fields \(L\) for the pair \((j(P),K)\). Each of these pairs in fact already arises as the \(j\)-invariant and field of definition of one of the points \(P_{1},\dots,P_{6}\) found in the previous paragraph. It remains to check whether there are additional points on \(X_{0}(58)\), different from the ones we have already discovered, corresponding to the pairs \((j,L)\) we have already found. It turns out that there are, and we discover two additional points, the points \(P_{7}\) and \(P_{8}\) in Table 2, corresponding to \((j,L)=\left(-3375,\mathbb{Q}(\sqrt{-7})\right)\). \begin{table} \begin{tabular}{l l} \hline \hline Method & levels \(N\) \\ \hline Going down & \(\mathcal{L}_{1}:=\{58,68,76\}\) \\ Rank \(0\) & \(\mathcal{L}_{2}:=\{80,98,100\}\) \\ Atkin–Lehner sieve & \(\mathcal{L}_{3}:=\{74,85,97,103,107,109,113,121,127\}\) \\ \hline \hline \end{tabular} \end{table} Table 1. Overview of the methods we use.
We note that for the cases 68 and 76, it would also be possible in the proof above to construct the map from \(X_{0}(N)\) to \(X_{0}(M)\) and directly pull back points. ### Rank \(0\) A method for computing the quadratic points on non-hyperelliptic curves \(X_{0}(N)\) whose Jacobian \(J_{0}(N)\) has rank \(0\) over \(\mathbb{Q}\) is developed in [34]. We call this the 'rank \(0\)' method. In this section, we use this method to prove the following result. **Proposition 3.2**.: _Let \(N\in\mathcal{L}_{2}=\{80,98,100\}\). The finitely many non-cuspidal quadratic points on the curve \(X_{0}(N)\) are displayed in the Tables in Section 4. Note that all non-cuspidal points have complex multiplication._ We note that the curve \(X_{0}(80)\) has two pairs of cuspidal quadratic points defined over the field \(\mathbb{Q}(\sqrt{-1})\). In [34], the 'rank \(0\)' method is used on curves up to genus \(5\). Although we use the same method, the code of [34] could not be used to extend the computations to curves of larger genus in a reasonable time. The reason we are able to work with higher genus curves is thanks to the models of \(X_{0}(N)\) and equations for the \(j\)-maps that we use (described in Section 2), as well as our method for verifying nonsingularity at a given prime (see Lemma 2.3). In order to use the 'rank \(0\)' method, we must verify that \(J_{0}(N)(\mathbb{Q})\) has rank \(0\). Using the results of [19] and [25] (see [2, p. 4] for further details), we were able to compute the rank of \(J_{0}(N)(\mathbb{Q})\) for every level \(N\in\mathcal{L}\). The set \(\mathcal{L}_{2}=\{80,98,100\}\) consists of the levels \(N\in\mathcal{L}\) such that the rank of \(J_{0}(N)(\mathbb{Q})\) is \(0\), and for which we could not use the 'going down' method. We briefly outline the method used in [34], which uses a type of Mordell-Weil sieve to prove that a given list of quadratic points on \(X_{0}(N)\) is complete. Let \(P_{0}\in X_{0}(N)(\mathbb{Q})\) denote a rational cusp and let \(\iota\colon X_{0}(N)^{(2)}(\mathbb{Q})\hookrightarrow J_{0}(N)(\mathbb{Q})\) denote the Abel-Jacobi map with basepoint \(2P_{0}\), which is injective since \(X_{0}(N)\) is non-hyperelliptic. Suppose that \(D=Q+Q^{\sigma}\in X_{0}(N)^{(2)}(\mathbb{Q})\) is a hypothetical unknown quadratic point. We compute the _rational cuspidal divisor class group_\(C_{0}(N)(\mathbb{Q})\) of \(X_{0}(N)\), which is defined as the subgroup of \(J_{0}(N)\) generated by the linear equivalence classes of the degree \(0\)\(\mathbb{Q}\)-rational cuspidal divisors (divisors supported only on the cusps). By bounding the index of \(C_{0}(N)(\mathbb{Q})\) in \(J_{0}(N)(\mathbb{Q})\), we see that \(I\cdot J_{0}(N)(\mathbb{Q})\subseteq C_{0}(N)(\mathbb{Q})\) for a positive integer \(I\). For any level \(N\), by the Manin-Drinfeld theorem, we have that \(C_{0}(N)(\mathbb{Q})\subseteq J_{0}(N)(\mathbb{Q})_{\text{tors}}\), and the generalised Ogg conjecture asserts that this inclusion is in fact an equality (see [34, p. 2463]). We also note that for any prime level \(N\), thanks to the work of Mazur [27, Theorem 1], we know that \(C_{0}(N)(\mathbb{Q})=J_{0}(N)(\mathbb{Q})_{\text{tors}}\) and that \(J_{0}(N)(\mathbb{Q})_{\text{tors}}\) is generated by the difference of the two rational cusps. The point \(D\) satisfies \([D-2P_{0}]=I\cdot[D^{\prime}]\) for some \([D^{\prime}]\in J_{0}(N)(\mathbb{Q})\). We then employ a Mordell-Weil sieve, hoping to eliminate all possibilities for \(D^{\prime}\), and therefore achieving a contradiction. 
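The combinatorial core of the sieve just described can be summarized by the following schematic Python sketch; all four arguments are stand-ins for the actual Magma computations (enumerating the candidate classes \([D^{\prime}]\), reducing divisor classes mod \(p\), and computing the image of \(X_{0}(N)^{(2)}(\mathbb{F}_{p})\) in \(J_{0}(N)(\mathbb{F}_{p})\)), so this is only an illustration of the logic, not an implementation.

```python
def mordell_weil_sieve(candidates, primes, reduce_mod_p, image_mod_p):
    """Schematic Mordell-Weil sieve: a candidate class survives only if,
    for every auxiliary prime p, its reduction lands in the image of
    X_0(N)^(2)(F_p) inside J_0(N)(F_p)."""
    surviving = set(candidates)
    for p in primes:
        image = image_mod_p(p)            # a finite subset of J_0(N)(F_p)
        surviving = {c for c in surviving if reduce_mod_p(c, p) in image}
        if not surviving:                 # contradiction reached: no
            break                         # unknown quadratic point exists
    return surviving
```

Each prime contributes an independent constraint, so intersecting the surviving sets over a handful of well-chosen primes is typically enough to eliminate every candidate and reach the desired contradiction.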
In order to prove Proposition 3.2, we will make use of the following lemma. **Lemma 3.3**.: _Let \(C_{0}(N)(\mathbb{Q})\) denote the rational cuspidal divisor class group of \(X_{0}(N)\). For \(N\in\{98,100\}\) we have \(J_{0}(N)(\mathbb{Q})=C_{0}(N)(\mathbb{Q})\). In particular, the generalised Ogg conjecture holds for these values of \(N\). The structure of these groups is displayed in Tables 9 and 10 in Section 4._ _For \(N=80\), we have that \(4\cdot J_{0}(N)(\mathbb{Q})\subseteq C_{0}(N)(\mathbb{Q})\). The structure of the group \(C_{0}(N)(\mathbb{Q})\) and the possibilities for the quotient group \(J_{0}(N)(\mathbb{Q})/C_{0}(N)(\mathbb{Q})\) are displayed in Table 6 of Section 4._ Proof.: We first compute the group \(C_{0}(N)(\mathbb{Q})\subseteq J_{0}(N)(\mathbb{Q})\) using the code of [34]. In the cases \(N=98\) and \(100\) we simply compute an upper bound on \(\#J_{0}(N)(\mathbb{Q})\) by reducing modulo odd primes \(p\nmid N\). This allowed us to prove that \(J_{0}(N)(\mathbb{Q})=C_{0}(N)(\mathbb{Q})\). In the case \(N=80\) we proceed similarly, but instead compute a (finite) supergroup of \(J_{0}(N)(\mathbb{Q})\) by considering the group structure of \(J_{0}(N)(\mathbb{F}_{p})\) for some odd primes \(p\). This allowed us to prove that \(J_{0}(N)(\mathbb{Q})/C_{0}(N)(\mathbb{Q})\) is isomorphic to a subgroup of \((\mathbb{Z}/2\mathbb{Z})^{2}\). Proof of Proposition 3.2.: We applied the method described above. Thanks to Lemma 3.3, we may set \(I=4\) when \(N=80\) and set \(I=1\) for \(N\in\{98,100\}\). The method was successful in each case. ### Atkin-Lehner sieve In this section we aim to prove the following result. **Proposition 3.4**.: _Let \(N\in\mathcal{L}_{3}=\{74,85,97,103,107,109,113,121,127\}\). The finitely many non-cuspidal quadratic points on the curve \(X_{0}(N)\) are displayed in the tables in Section 4._ We will prove this proposition in two main stages: 1. Apply a Mordell-Weil sieve with respect to a suitably chosen Atkin-Lehner involution \(w_{d}\) to determine all quadratic points on \(X_{0}(N)\) that either do not arise as pullbacks of rational points on \(X_{0}(N)/w_{d}\), or do not arise as fixed points of \(w_{d}\). 2. Compute the rational points on \(X_{0}(N)/w_{d}\) and any fixed points of \(w_{d}\) (defined over quadratic fields) on \(X_{0}(N)\). The Mordell-Weil sieve we apply is an adaptation of the sieve employed by Najman and Vukorepa for \(X_{0}(131)\) in [33, p. 16], which in turn builds on the results of Box [10] and Siksek [36]. For further background on the Mordell-Weil sieve in general, we refer to [11]. Starting with a model for \(X_{0}(N)\), the sieve uses the following additional inputs: * An Atkin-Lehner involution \(w_{d}\) such that the ranks of \(J_{0}(N)(\mathbb{Q})\) and \((1+w_{d})(J_{0}(N))(\mathbb{Q})\) are equal. * Generators of \(J_{0}(N)(\mathbb{Q})_{\mathrm{tors}}\). * A set of odd primes \(\mathcal{P}\) of good reduction for our model of \(X_{0}(N)\). * A (possibly empty) set of quadratic points on \(X_{0}(N)\) that do not arise as pullbacks of rational points on \(X_{0}(N)/w_{d}\). We write \(X=X_{0}(N)\) and work with rational points on the symmetric square \(X^{(2)}\) of this curve (as described in Section 3.1). Let \(D_{\infty}\) be a sum of two rational cusps such that \(w_{d}\) acts trivially on \(D_{\infty}\) (e.g. \(D_{\infty}=\infty+w_{d}(\infty)\)). We use the divisor \(D_{\infty}\) as the basepoint of the Abel-Jacobi map \(\iota\colon X^{(2)}\hookrightarrow J(X)\), which is injective since \(X\) is non-hyperelliptic. 
Since the ranks of \(J(X)(\mathbb{Q})\) and \((1+w_{d})(J(X))(\mathbb{Q})\) are equal, we see that \((1-w_{d})(J(X)(\mathbb{Q}))\subseteq J(X)(\mathbb{Q})_{\mathrm{tors}}\). Let \(p\) be a prime of good reduction for \(X\). The following commutative diagram describes the set-up of the sieve: \[\begin{array}{ccccc}X^{(2)}(\mathbb{Q})&\xrightarrow{\ \iota\ }&J(X)(\mathbb{Q})&\xrightarrow{\ 1-w_{d}\ }&J(X)(\mathbb{Q})_{\mathrm{tors}}\\ \downarrow\operatorname{red}_{p}&&\downarrow\operatorname{red}_{p}&&\downarrow\operatorname{red}_{p}\\ X^{(2)}(\mathbb{F}_{p})&\xrightarrow{\ \tilde{\iota}\ }&J(X)(\mathbb{F}_{p})&\xrightarrow{\ 1-\tilde{w}_{d}\ }&J(X)(\mathbb{F}_{p})\end{array}\] Here, \(\operatorname{red}_{p}\) denotes reduction mod \(p\) (which we note is injective on \(J(X)(\mathbb{Q})_{\mathrm{tors}}\)), and \(\tilde{\iota}\) and \(\tilde{w}_{d}\) are the reductions mod \(p\) of \(\iota\) and \(w_{d}\) respectively. Given a hypothetical unknown quadratic point \(Q\in X^{(2)}(\mathbb{Q})\) that does not map to a rational point on \(X_{0}(N)/w_{d}\), we first compute a set \(S_{Q}\subseteq X^{(2)}(\mathbb{F}_{p})\) of possibilities for \(\operatorname{red}_{p}(Q)\). In order to construct the set \(S_{Q}\), we consider each point \(\operatorname{red}_{p}(R)\in X^{(2)}(\mathbb{F}_{p})\) that is the reduction of a known non-pullback point \(R\in X^{(2)}(\mathbb{Q})\) with respect to \(w_{d}\). We attempt to prove that \(\operatorname{red}_{p}(Q)\neq\operatorname{red}_{p}(R)\) by applying a symmetric Chabauty criterion as stated in [10, Theorem 2.1]. By the commutativity of the diagram, we then have that \[((1-w_{d})\circ\iota)(Q)\in W_{p}:=\operatorname{red}_{p}^{-1}\bigl{(}((1-\tilde{w}_{d})\circ\tilde{\iota})(S_{Q})\bigr{)}\subseteq J(X)(\mathbb{Q})_{\operatorname{tors}}.\] The set \(W_{p}\) is explicitly computable, and although it is obtained by investigating matters modulo \(p\), the information it encodes is independent of the prime \(p\). We aim to find a set of odd primes \(\mathcal{P}\) of good reduction for our model such that \[\bigcap_{p\in\mathcal{P}}W_{p}=[0]\in J(X)(\mathbb{Q})_{\operatorname{tors}}.\] If this is the case, then \[((1-w_{d})\circ\iota)(Q)=(1-w_{d})[Q-D_{\infty}]=[0].\] It follows that \([Q-D_{\infty}]=w_{d}([Q-D_{\infty}])\) and hence, since \(w_{d}\) acts trivially on \(D_{\infty}\), we have that \([Q-w_{d}(Q)]=0\). Since \(X_{0}(N)\) is non-hyperelliptic, it follows that \(Q=w_{d}(Q)\) (as points in \(X^{(2)}(\mathbb{Q})\)). Hence, \(Q\) either arises from a pair of quadratic points, each of which is a fixed point of \(w_{d}\), or \(Q\) is the pullback of a rational point on \(X_{0}(N)/w_{d}\) with respect to the quotient map \(X_{0}(N)\to X_{0}(N)/w_{d}\). This completes Stage (I) of the method. We note that instead of working with \(J(X)(\mathbb{Q})_{\operatorname{tors}}\), knowledge of generators for any group \(G\) satisfying \((1-w_{d})(J(X))(\mathbb{Q})\subseteq G\subseteq J(X)(\mathbb{Q})_{\operatorname{tors}}\) would allow for a similar application of the sieve. **Remark 3.5**.: We make some technical remarks about the application of the symmetric Chabauty criterion ([10, Theorem 2.1]) in the sieve. * The diagonalised models we work with, combined with the fact that the ranks over \(\mathbb{Q}\) of the Jacobians of the curves \(X\) and \(X/w_{d}\) are equal, allow us to easily compute the appropriate annihilating differentials needed to apply this criterion. * We only apply the symmetric Chabauty criterion to points in \(X^{(2)}(\mathbb{F}_{p})\) that are the reductions of known non-pullback points. Although we could also apply it to the reductions of known pullback points, the criterion is guaranteed to fail for such points given our choice of annihilating differentials (which we compute as in [10, Proposition 3.5]). 
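The sieving step itself is simple bookkeeping once the sets \(W_{p}\) have been computed. The following Python fragment is an illustrative sketch only (the computations in this paper are carried out in Magma); the classes of \(J(X)(\mathbb{Q})_{\mathrm{tors}}\) are encoded as tuples in \(\mathbb{Z}/n_{1}\mathbb{Z}\times\mathbb{Z}/n_{2}\mathbb{Z}\), and the survivor sets below are hypothetical data chosen to mimic the shape of the \(X_{0}(74)\) computation in Section 3.4.1.

```python
def sieve(W_by_prime):
    """Intersect the survivor sets W_p over all primes p used."""
    surviving = None
    for p in sorted(W_by_prime):
        W_p = W_by_prime[p]
        surviving = set(W_p) if surviving is None else surviving & W_p
    return surviving

# Hypothetical survivor sets; the identity class [0] is encoded as (0, 0).
W = {3: {(0, 0), (1, 5), (2, 100)}, 5: {(0, 0), (2, 17)}}
assert sieve(W) == {(0, 0)}  # only [0] survives, so the sieve succeeds
```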
However, even if we were able to compute additional annihilating differentials and prove that \(\operatorname{red}_{p}(Q)\neq\operatorname{red}_{p}(R)\) for each known pullback point \(R\in X^{(2)}(\mathbb{Q})\), the set \(W_{p}\) will still contain the element \([0]\) unless _every_ point in \(X^{(2)}(\mathbb{F}_{p})\) on which \(w_{d}\) acts trivially happens to be the reduction of a known pullback point. This is highly unlikely to occur. * We do not make use of the _relative_ symmetric Chabauty criterion ([10, Theorem 2.4]) in the sieve, as it is also very unlikely to provide any additional information (for the same reason as above). **Remark 3.6**.: We take the opportunity to discuss the advantages and disadvantages of this sieve compared to the sieves used in [10, pp. 327-328] and [30, pp. 258-260]. The main advantage of the sieve we use is that it is not necessary to work with a finite index subgroup of the Jacobian. This reduces computation time and simplifies the sieving algorithm. It also allows for a greater choice of primes to be used in the sieve when the rank of the Jacobian is \(\geqslant 3\) (cf. [30, pp. 258-260]). Another advantage is that the sieve does not explicitly work with the quotient curve \(X_{0}(N)/w_{d}\). There are certain disadvantages to the sieve we use. First of all, the sieve relies on the existence of a quotient curve of equal rank. Secondly, we also require knowledge of generators for \(J(X)(\mathbb{Q})_{\operatorname{tors}}\) (or generators of a group \(G\) satisfying \((1-w_{d})(J(X))(\mathbb{Q})\subseteq G\subseteq J(X)(\mathbb{Q})_{\operatorname{tors}}\), as mentioned above). This is perhaps the main barrier to extending our computations to all curves \(X_{0}(N)\) up to, say, genus \(12\). One of the key inputs into the Mordell-Weil sieve is a list of quadratic points that do not arise as pullbacks of rational points on \(X_{0}(N)/w_{d}\) and are not fixed points of \(w_{d}\) (if such points exist and are not input, then the sieve is guaranteed to fail). In order to find these points, we first search for rational points (up to some height bound) on each Atkin-Lehner quotient, and pull these back to \(X_{0}(N)\). This is straightforward thanks to the diagonalised model for \(X_{0}(N)\) that we work with. For each level we considered, this turned out to be sufficient to find all the necessary points. However, if this were not the case, we note that we can also search for quadratic points by intersecting a model for our curve with hyperplanes, as described in [10, p. 30]. We significantly improved upon the running time of this method by noting that the coordinates of a quadratic point span a \(\mathbb{Q}\)-vector space of dimension at most two: in particular, already the first three coordinates of a quadratic point must satisfy a \(\mathbb{Z}\)-linear relation. This allows us to look only at hyperplanes of the form \(a_{1}x_{1}+a_{2}x_{2}+a_{3}x_{3}=0\), which is notably faster than going through all the hyperplanes of the form \(a_{1}x_{1}+\cdots+a_{g}x_{g}=0\) (since our curves have \(g\geqslant 6\)); a rough count illustrating the saving is sketched below. This method carries over to searches for cubic and other low-degree points. In order to prove Proposition 3.4, we will first prove two lemmas. We recall from Section 3.3 that \(C_{0}(N)(\mathbb{Q})\) denotes the rational cuspidal divisor class group of \(X_{0}(N)\). Lemma 3.7 below is analogous to the cases \(N=98\) and \(100\) of Lemma 3.3 and is proved in the same way. 
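To illustrate the saving from restricting the hyperplane search to the first three coordinates, the following toy count compares the two search spaces (the height bound is a hypothetical choice made here purely for illustration):

```python
B, g = 5, 8  # hypothetical coefficient height bound; g = 8 is the genus of X_0(74)
full = (2 * B + 1) ** g - 1    # tuples (a_1, ..., a_g), excluding zero
short = (2 * B + 1) ** 3 - 1   # tuples supported on the first three coordinates
print(short, full, full // short)  # the last number is the saving factor
```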
**Lemma 3.7**.: _Let \(C_{0}(N)(\mathbb{Q})\) denote the rational cuspidal divisor class group of \(X_{0}(N)\). Then for \(N\in\{74,85,121\}\) we have \(J_{0}(N)(\mathbb{Q})_{\mathrm{tors}}=C_{0}(N)(\mathbb{Q})\). In particular, the generalised Ogg conjecture holds for these values of \(N\). The structure of these groups is displayed in the tables in Section 4._ **Lemma 3.8**.: _The curve \(X_{0}(74)/w_{37}\) has precisely \(9\) rational points and the curve \(X_{0}(85)/w_{85}\) has precisely \(8\) rational points._ Proof.: We start by considering the curve \(X_{0}(85)/w_{85}\). In [8, p. 107], the finitely many rational points on the full Atkin-Lehner quotient curve \(X_{0}(85)/(w_{85},w_{17})\) are determined. Since this curve is a quotient of \(X_{0}(85)/w_{85}\), it is straightforward to verify that \(X_{0}(85)/w_{85}\) has precisely \(8\) rational points. The curve \(X_{0}(74)/w_{37}\) is non-hyperelliptic and the rank of its Jacobian over \(\mathbb{Q}\) is smaller than its genus. We successfully applied the classical Chabauty method to determine all the rational points on this curve. We did this using the Magma implementation of classical Chabauty due to Balakrishnan and Tuitman [5]. Some additional details of this computation are provided in Section 3.4.1. Proof of Proposition 3.4.: We apply the sieve described above. We set \(d=N\), unless \(N=74\), in which case we set \(d=37\). We first verify in each case that the ranks of the Jacobians over \(\mathbb{Q}\) of the curves \(X_{0}(N)\) and \(X_{0}(N)/w_{d}\) are equal by checking that \((1-w_{d})J_{0}(N)(\mathbb{Q})\) has rank \(0\). Next, we use Lemma 3.7, combined with the fact that \(C_{0}(N)(\mathbb{Q})=J_{0}(N)(\mathbb{Q})_{\mathrm{tors}}\) when \(N\) is prime, to obtain generators for \(J_{0}(N)(\mathbb{Q})_{\mathrm{tors}}\). We now apply the sieve. In each case we were successful using primes in the range \(3\leqslant p\leqslant 11\) of good reduction for the curve. To complete the proof, we proceed with Stage (II) of the method. We first compute any fixed points of \(w_{d}\) defined over quadratic fields, which is a straightforward computation in Magma. Next, when \(N=d\) and \(N\neq 85\), the rational points on \(X_{0}^{+}(N)=X_{0}(N)/w_{d}\) have been computed for each \(N\) we are interested in across a series of papers [4, 2, 3]. The remaining cases are \((N,d)=(74,37)\) and \((85,85)\), which are covered by Lemma 3.8. We then simply pull back the rational points on \(X_{0}(N)/w_{d}\) to \(X_{0}(N)\) to obtain the remaining quadratic points on \(X_{0}(N)\). Propositions 3.1, 3.2, and 3.4 combine to prove Theorem 1.2. **Remark 3.9**.: The running times for each level \(N\in\mathcal{L}_{3}\) are displayed in the accompanying Magma files. The running times ranged from 23 seconds (for \(N=109\)) to 11 minutes (for \(N=85\)). Our implementation significantly reduces the computation time for some previously computed levels. For example, it took under 4 seconds to recover the classification of quadratic points on \(X_{0}(53)\) obtained in [10], whilst the original running time using the code of [10] is 33 minutes. We note that the running time of our algorithm could possibly be improved further by only using the Place command in Magma when working over finite fields (and not using it over \(\mathbb{Q}\) as we do in certain instances). #### 3.4.1. Example: quadratic points on \(X_{0}(74)\) We provide some details of the computations for \(X_{0}(74)\), which is perhaps the most interesting curve we worked with. 
We note that despite the fact that the quadratic points on \(X_{0}(37)\) have been classified [10], their complicated structure means that we cannot compute the quadratic points on \(X_{0}(74)\) using the 'going down' method. The curve \(X_{0}(74)\) has genus 8, and the quotient curve \(X_{0}(74)/w_{37}\) is non-hyperelliptic of genus 4. We start by computing a diagonalised model for \(X_{0}(74)\) together with a map to the quotient curve \(X_{0}(74)/w_{37}\) using the methods described in Section 2. We verify that the rank of \((1-w_{37})J_{0}(74)(\mathbb{Q})\) is 0, using the techniques of Section 3.3, and as it will be needed afterwards, we also verify that the rank of the Jacobian of \(X_{0}(74)/w_{37}\) over \(\mathbb{Q}\) is 2. We then verify the computation in Lemma 3.7 that \(C_{0}(74)(\mathbb{Q})\cong\mathbb{Z}/3\mathbb{Z}\times\mathbb{Z}/171\mathbb{Z}\cong J_{0}(74)(\mathbb{Q})_{\text{tors}}\). We check the first isomorphism using the Magma code of [34], adapted to work with our diagonalised model. Next, we find that \[\#J_{0}(74)(\mathbb{F}_{3})=3^{3}\cdot 7^{2}\cdot 19\quad\text{and}\quad\#J_{0}(74)(\mathbb{F}_{5})=2^{8}\cdot 3^{3}\cdot 13\cdot 19.\] This proves that \(J_{0}(74)(\mathbb{Q})_{\text{tors}}\cong\mathbb{Z}/3\mathbb{Z}\times\mathbb{Z}/171\mathbb{Z}\). Next, we search for quadratic points on \(X_{0}(74)\) by pulling back rational points on the three quotient curves \(X_{0}(74)/w_{m}\) for \(m\in\{2,37,74\}\). We obtained two pairs of quadratic points (the points \(P_{1}\) and \(P_{2}\) in Table 4 together with their Galois conjugates) that do not arise as pullbacks of rational points on \(X_{0}(74)/w_{37}\). Then, applying the sieve, we found that \(\#W_{3}=13\), and that \(W_{3}\cap W_{5}=[0]\), meaning that the sieving process was successful using the primes 3 and 5. Next, we continue with Stage (II) of the method. We compute the fixed points of \(w_{37}\) on \(X_{0}(74)\) and find that there is a single quadratic point in this fixed locus, the point \(P_{11}\) in Table 4. Finally, it remains to verify the computation in Lemma 3.8 that the curve \(X_{0}(74)/w_{37}\) has precisely 9 rational points. We do this using the code of Balakrishnan and Tuitman [5] on a plane model of this curve found by a function from [1]. We use the effective_chabauty function from [5] with \(p=11\) to show that the \(\mathbb{Q}\)-rational points on \(X_{0}(74)/w_{37}\) consist of exactly 9 points. In order to use this function, we first check that the differences of the 9 low-height rational points on this quotient curve generate a finite index subgroup of its Jacobian (which we recall has rank 2). By considering the differences of pairs of rational points, and by working modulo 3 and 5, we were able to find two independent rational points of infinite order on the Jacobian. ## 4. Tables In the following tables we have included the non-cuspidal quadratic point data (up to Galois conjugation) for the curves \(X_{0}(N)\) for \(N\in\mathcal{L}\). For each quadratic point we have displayed its field of definition, its \(j\)-invariant, and the corresponding CM discriminant when applicable. We have then displayed the action of the Atkin-Lehner involutions on each point. In the cases for which we computed the structure of \(J_{0}(N)(\mathbb{Q})\), this information is also included. 
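As an independent sanity check on the torsion bound used in Section 3.4.1 above (illustrative only; the paper's computations are in Magma): since reduction modulo an odd prime of good reduction is injective on the rational torsion subgroup, the order of \(J_{0}(74)(\mathbb{Q})_{\text{tors}}\) divides the gcd of the two group orders computed there.

```python
from math import gcd

order_f3 = 3**3 * 7**2 * 19       # #J_0(74)(F_3), as computed above
order_f5 = 2**8 * 3**3 * 13 * 19  # #J_0(74)(F_5)

bound = gcd(order_f3, order_f5)   # the torsion order divides this
print(bound, bound == 3 * 171)    # 513 True: attained by Z/3 x Z/171
```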
In addition to the data presented in these tables (which is independent of our chosen models), for each curve \(X_{0}(N)\), the projective models we used, equations for the Atkin-Lehner involutions, and the coordinates of each quadratic point are available in the accompanying Magma files. \begin{table} \begin{tabular}{c c c c} \hline \hline Point & Field & \(j\)-invariant & CM \\ \hline \(P_{1}\) & \(\mathbb{Q}(\sqrt{-1})\) & \(1728\) & \(-4\) \\ \(P_{2}\) & \(\mathbb{Q}(\sqrt{-1})\) & \(287496\) & \(-16\) \\ \(P_{3}\) & \(\mathbb{Q}(\sqrt{-7})\) & \(-3375\) & \(-7\) \\ \(P_{4}\) & \(\mathbb{Q}(\sqrt{-7})\) & \(16581375\) & \(-28\) \\ \(P_{5}\) & \(\mathbb{Q}(\sqrt{-1})\) & \(1728\) & \(-4\) \\ \(P_{6}\) & \(\mathbb{Q}(\sqrt{29})\) & \(-56147767009798464000\sqrt{29}+302364978924945672000\) & \(-232\) \\ \(P_{7}\) & \(\mathbb{Q}(\sqrt{-7})\) & \(-3375\) & \(-7\) \\ \(P_{8}\) & \(\mathbb{Q}(\sqrt{-7})\) & \(-3375\) & \(-7\) \\ \hline \hline \end{tabular} \end{table} Table 2. All non-cuspidal quadratic points on \(\boldsymbol{X_{0}(58)}\) \begin{table} \begin{tabular}{c c c c} \hline \hline Point & Field & \(j\)-invariant & CM \\ \hline \(P_{1}\) & \(\mathbb{Q}(\sqrt{-1})\) & \(1728\) & \(-4\) \\ \(P_{2}\) & \(\mathbb{Q}(\sqrt{-1})\) & \(287496\) & \(-16\) \\ \(P_{3}\) & \(\mathbb{Q}(\sqrt{-1})\) & \(287496\) & \(-16\) \\ \hline \hline \end{tabular} \end{table} Table 3. All non-cuspidal quadratic points on \(\boldsymbol{X_{0}(68)}\) \begin{table} \begin{tabular}{c c c c} \hline Point & Field & \(j\)-invariant & CM \\ \hline \(P_{1}\) & \(\mathbb{Q}(\sqrt{-7})\) & \(-3375\) & \(-7\) \\ \(P_{2}\) & \(\mathbb{Q}(\sqrt{-7})\) & \(-3375\) & \(-7\) \\ \(P_{3}\) & \(\mathbb{Q}(\sqrt{-7})\) & \(-3375\) & \(-7\) \\ \(P_{4}\) & \(\mathbb{Q}(\sqrt{-7})\) & \(16581375\) & \(-28\) \\ \(P_{5}\) & \(\mathbb{Q}(\sqrt{-1})\) & \(1728\) & \(-4\) \\ \(P_{6}\) & \(\mathbb{Q}(\sqrt{-1})\) & \(1728\) & \(-4\) \\ \(P_{7}\) & \(\mathbb{Q}(\sqrt{-1})\) & \(287496\) & \(-16\) \\ \(P_{8}\) & \(\mathbb{Q}(\sqrt{-3})\) & \(54000\) & \(-12\) \\ \(P_{9}\) & \(\mathbb{Q}(\sqrt{-3})\) & \(0\) & \(-3\) \\ \(P_{10}\) & \(\mathbb{Q}(\sqrt{37})\) & \(-3260047059360000\sqrt{37}+19830091900536000\) & \(-148\) \\ \hline \end{tabular} \end{table} Table 4. All non-cuspidal quadratic points on \(\boldsymbol{X_{0}(74)}\) \begin{table} \begin{tabular}{l c c c} Genus: 7 & \(C_{0}(80)(\mathbb{Q})\cong\mathbb{Z}/12\mathbb{Z}\times\mathbb{Z}/24\mathbb{Z} \times\mathbb{Z}/24\mathbb{Z}\) \\ \(J_{0}(80)(\mathbb{Q})/C_{0}(80)(\mathbb{Q})\cong 0,\mathbb{Z}/2\mathbb{Z}\) or \((\mathbb{Z}/2\mathbb{Z})^{2}\) \\ No non-cuspidal quadratic points. \\ \end{tabular} \end{table} Table 6. All non-cuspidal quadratic points on \(\boldsymbol{X_{0}(80)}\) \begin{table} \begin{tabular}{c c c c} \hline \hline Point & Field & \(j\)-invariant & CM \\ \hline \(P_{1}\) & \(\mathbb{Q}(\sqrt{-3})\) & \(54000\) & \(-12\) \\ \(P_{2}\) & \(\mathbb{Q}(\sqrt{-3})\) & \(54000\) & \(-12\) \\ \hline \hline \end{tabular} \end{table} Table 5. 
All non-cuspidal quadratic points on \(\boldsymbol{X_{0}(76)}\) \begin{table} \begin{tabular}{c c c c} \hline Point & Field & \(j\)-invariant & CM \\ \hline \(P_{1}\) & \(\mathbb{Q}(\sqrt{-19})\) & \(-884736\) & \(-19\) \\ \(P_{2}\) & \(\mathbb{Q}(\sqrt{-19})\) & \(-884736\) & \(-19\) \\ \(P_{3}\) & \(\mathbb{Q}(\sqrt{-1})\) & \(1728\) & \(-4\) \\ \(P_{4}\) & \(\mathbb{Q}(\sqrt{-1})\) & \(1728\) & \(-4\) \\ \(P_{5}\) & \(\mathbb{Q}(\sqrt{-1})\) & \(287496\) & \(-16\) \\ \(P_{6}\) & \(\mathbb{Q}(\sqrt{-1})\) & \(287496\) & \(-16\) \\ \hline \end{tabular} \end{table} Table 7. All non-cuspidal quadratic points on \(\boldsymbol{X_{0}(85)}\) \begin{table} \begin{tabular}{c c c c} \multicolumn{3}{c}{Genus: 7} \\ & \(J_{0}(97)(\mathbb{Q})\cong\mathbb{Z}^{3}\times\mathbb{Z}/8\mathbb{Z}\) & \\ \hline Point & Field & \(j\)-invariant & CM \\ \hline \(P_{1}\) & \(\mathbb{Q}(\sqrt{-3})\) & \(54000\) & \(-12\) \\ \(P_{2}\) & \(\mathbb{Q}(\sqrt{-163})\) & \(-262537412640768000\) & \(-163\) \\ \(P_{3}\) & \(\mathbb{Q}(\sqrt{-1})\) & \(1728\) & \(-4\) \\ \(P_{4}\) & \(\mathbb{Q}(\sqrt{-2})\) & \(8000\) & \(-8\) \\ \(P_{5}\) & \(\mathbb{Q}(\sqrt{-43})\) & \(-884736000\) & \(-43\) \\ \(P_{6}\) & \(\mathbb{Q}(\sqrt{-11})\) & \(-32768\) & \(-11\) \\ \(P_{7}\) & \(\mathbb{Q}(\sqrt{-3})\) & \(0\) & \(-3\) \\ \(P_{8}\) & \(\mathbb{Q}(\sqrt{-1})\) & \(287496\) & \(-16\) \\ \(P_{9}\) & \(\mathbb{Q}(\sqrt{-3})\) & \(-12288000\) & \(-27\) \\ \hline \end{tabular} \end{table} Table 8. All non-cuspidal quadratic points on \(\boldsymbol{X_{0}(97)}\) \begin{table} \begin{tabular}{c c c c} Genus: 7 & & & \\ \(J_{0}(100)(\mathbb{Q})\cong\mathbb{Z}/3\mathbb{Z}\times\mathbb{Z}/30\mathbb{Z} \times\mathbb{Z}/30\mathbb{Z}\) & & \\ \end{tabular} \begin{tabular}{c c c c} \hline Point & Field & \(j\)-invariant & CM \\ \hline \(P_{1}\) & \(\mathbb{Q}(\sqrt{-1})\) & \(1728\) & \(-4\) \\ \(P_{2}\) & \(\mathbb{Q}(\sqrt{-1})\) & \(287496\) & \(-16\) \\ \(P_{3}\) & \(\mathbb{Q}(\sqrt{-1})\) & \(287496\) & \(-16\) \\ \hline \end{tabular} \end{table} Table 10. All non-cuspidal quadratic points on \(\boldsymbol{X_{0}(100)}\) \begin{table} \begin{tabular}{c c c c} \hline Point & Field & \(j\)-invariant & CM \\ \hline \(P_{1}\) & \(\mathbb{Q}(\sqrt{-3})\) & \(54000\) & \(-12\) \\ \(P_{2}\) & \(\mathbb{Q}(\sqrt{-3})\) & \(0\) & \(-3\) \\ \hline \end{tabular} \end{table} Table 9. All non-cuspidal quadratic points on \(\boldsymbol{X_{0}(98)}\) \begin{table} \begin{tabular}{c c c c} & Genus: 9 & \\ \(J_{0}(107)(\mathbb{Q})\cong\mathbb{Z}^{2}\times\mathbb{Z}/53\mathbb{Z}\) & \\ \hline \hline Point & Field & \(j\)-invariant & CM \\ \hline \(P_{1}\) & \(\mathbb{Q}(\sqrt{-7})\) & \(-3375\) & \(-7\) \\ \(P_{2}\) & \(\mathbb{Q}(\sqrt{-7})\) & \(16581375\) & \(-28\) \\ \(P_{3}\) & \(\mathbb{Q}(\sqrt{-2})\) & \(8000\) & \(-8\) \\ \(P_{4}\) & \(\mathbb{Q}(\sqrt{-43})\) & \(-884736000\) & \(-43\) \\ \(P_{5}\) & \(\mathbb{Q}(\sqrt{-67})\) & \(-147197952000\) & \(-67\) \\ \hline \end{tabular} \end{table} Table 12. 
All non-cuspidal quadratic points on \(\boldsymbol{X_{0}(107)}\) \begin{table} \begin{tabular}{c c c c} & Genus: 8 & \\ & \(J_{0}(103)(\mathbb{Q})\cong\mathbb{Z}^{2}\times\mathbb{Z}/17\mathbb{Z}\) & \\ \hline \hline Point & Field & \(j\)-invariant & CM \\ \hline \(P_{1}\) & \(\mathbb{Q}(\sqrt{-3})\) & \(54000\) & \(-12\) \\ \(P_{2}\) & \(\mathbb{Q}(\sqrt{-3})\) & \(-12288000\) & \(-27\) \\ \(P_{3}\) & \(\mathbb{Q}(\sqrt{-3})\) & \(0\) & \(-3\) \\ \(P_{4}\) & \(\mathbb{Q}(\sqrt{-11})\) & \(-32768\) & \(-11\) \\ \(P_{5}\) & \(\mathbb{Q}(\sqrt{-67})\) & \(-147197952000\) & \(-67\) \\ \(P_{6}\) & \(\mathbb{Q}(\sqrt{-43})\) & \(-884736000\) & \(-43\) \\ \(P_{7}\) & \(\mathbb{Q}(\sqrt{2885})\) & \(\begin{array}{c}-669908635472124980731701532753920\sqrt{2885}\\ +35982263935929364331785036841779200\end{array}\) & NO \\ \hline \end{tabular} \end{table} Table 11. All non-cuspidal quadratic points on \(\boldsymbol{X_{0}(103)}\) \begin{table} \begin{tabular}{c c c c} Genus: & \(9\) & \\ \(J_{0}(113)(\mathbb{Q})\cong\mathbb{Z}^{3}\times\mathbb{Z}/28\mathbb{Z}\) & \\ \hline Point & Field & \(j\)-invariant & CM \\ \hline \(P_{1}\) & \(\mathbb{Q}(\sqrt{-1})\) & \(287496\) & \(-16\) \\ \(P_{2}\) & \(\mathbb{Q}(\sqrt{-1})\) & \(1728\) & \(-4\) \\ \(P_{3}\) & \(\mathbb{Q}(\sqrt{-7})\) & \(16581375\) & \(-28\) \\ \(P_{4}\) & \(\mathbb{Q}(\sqrt{-7})\) & \(-3375\) & \(-7\) \\ \(P_{5}\) & \(\mathbb{Q}(\sqrt{-11})\) & \(-32768\) & \(-11\) \\ \(P_{6}\) & \(\mathbb{Q}(\sqrt{-163})\) & \(-262537412640768000\) & \(-163\) \\ \(P_{7}\) & \(\mathbb{Q}(\sqrt{-2})\) & \(8000\) & \(-8\) \\ \hline \end{tabular} \end{table} Table 14. All non-cuspidal quadratic points on \(\boldsymbol{X_{0}(113)}\) \begin{table} \begin{tabular}{c c c c} Genus: & \(8\) & \\ \(J_{0}(109)(\mathbb{Q})\cong\mathbb{Z}^{3}\times\mathbb{Z}/9\mathbb{Z}\) & \\ \hline Point & Field & \(j\)-invariant & CM \\ \hline \(P_{1}\) & \(\mathbb{Q}(\sqrt{-43})\) & \(-884736000\) & \(-43\) \\ \(P_{2}\) & \(\mathbb{Q}(\sqrt{-3})\) & \(54000\) & \(-12\) \\ \(P_{3}\) & \(\mathbb{Q}(\sqrt{-1})\) & \(1728\) & \(-4\) \\ \(P_{4}\) & \(\mathbb{Q}(\sqrt{-7})\) & \(16581375\) & \(-28\) \\ \(P_{5}\) & \(\mathbb{Q}(\sqrt{-3})\) & \(-12288000\) & \(-27\) \\ \(P_{6}\) & \(\mathbb{Q}(\sqrt{-7})\) & \(-3375\) & \(-7\) \\ \(P_{7}\) & \(\mathbb{Q}(\sqrt{-3})\) & \(0\) & \(-3\) \\ \(P_{8}\) & \(\mathbb{Q}(\sqrt{-1})\) & \(287496\) & \(-16\) \\ \hline \end{tabular} \end{table} Table 13. All non-cuspidal quadratic points on \(\boldsymbol{X_{0}(109)}\) \begin{table} \begin{tabular}{c c c c} Genus: 10 & & \\ \(J_{0}(127)(\mathbb{Q})\cong\mathbb{Z}^{3}\times\mathbb{Z}/21\mathbb{Z}\) & & \\ \hline Point & Field & \(j\)-invariant & CM \\ \hline \(P_{1}\) & \(\mathbb{Q}(\sqrt{-67})\) & \(-147197952000\) & \(-67\) \\ \(P_{2}\) & \(\mathbb{Q}(\sqrt{-3})\) & \(54000\) & \(-12\) \\ \(P_{3}\) & \(\mathbb{Q}(\sqrt{-3})\) & \(-12288000\) & \(-27\) \\ \(P_{4}\) & \(\mathbb{Q}(\sqrt{-7})\) & \(16581375\) & \(-28\) \\ \(P_{5}\) & \(\mathbb{Q}(\sqrt{-7})\) & \(-3375\) & \(-7\) \\ \(P_{6}\) & \(\mathbb{Q}(\sqrt{-3})\) & \(0\) & \(-3\) \\ \(P_{7}\) & \(\mathbb{Q}(\sqrt{-43})\) & \(-884736000\) & \(-43\) \\ \hline \end{tabular} \end{table} Table 16. 
All non-cuspidal quadratic points on \(\boldsymbol{X_{0}(127)}\) \begin{table} \begin{tabular}{c c c c} Genus: 6 & & \\ \(J_{0}(121)(\mathbb{Q})\cong\mathbb{Z}\times\mathbb{Z}/5\mathbb{Z}\times\mathbb{Z}/5\mathbb{Z}\) & & \\ \hline Point & Field & \(j\)-invariant & CM \\ \hline \(P_{1}\) & \(\mathbb{Q}(\sqrt{-19})\) & \(-884736\) & \(-19\) \\ \(P_{2}\) & \(\mathbb{Q}(\sqrt{-43})\) & \(-884736000\) & \(-43\) \\ \(P_{3}\) & \(\mathbb{Q}(\sqrt{-2})\) & \(8000\) & \(-8\) \\ \(P_{4}\) & \(\mathbb{Q}(\sqrt{-7})\) & \(-3375\) & \(-7\) \\ \(P_{5}\) & \(\mathbb{Q}(\sqrt{-7})\) & \(16581375\) & \(-28\) \\ \hline \end{tabular} \end{table} Table 15. All non-cuspidal quadratic points on \(\boldsymbol{X_{0}(121)}\)
2308.10003
Efficient Multi-View Inverse Rendering Using a Hybrid Differentiable Rendering Method
Recovering the shape and appearance of real-world objects from natural 2D images is a long-standing and challenging inverse rendering problem. In this paper, we introduce a novel hybrid differentiable rendering method to efficiently reconstruct the 3D geometry and reflectance of a scene from multi-view images captured by conventional hand-held cameras. Our method follows an analysis-by-synthesis approach and consists of two phases. In the initialization phase, we use traditional SfM and MVS methods to reconstruct a virtual scene roughly matching the real scene. Then in the optimization phase, we adopt a hybrid approach to refine the geometry and reflectance, where the geometry is first optimized using an approximate differentiable rendering method, and the reflectance is optimized afterward using a physically-based differentiable rendering method. Our hybrid approach combines the efficiency of approximate methods with the high-quality results of physically-based methods. Extensive experiments on synthetic and real data demonstrate that our method can produce reconstructions with similar or higher quality than state-of-the-art methods while being more efficient.
Xiangyang Zhu, Yiling Pan, Bailin Deng, Bin Wang
2023-08-19T12:48:10Z
http://arxiv.org/abs/2308.10003v1
# Efficient Multi-View Inverse Rendering Using a Hybrid Differentiable Rendering Method ###### Abstract Recovering the shape and appearance of real-world objects from natural 2D images is a long-standing and challenging inverse rendering problem. In this paper, we introduce a novel hybrid differentiable rendering method to efficiently reconstruct the 3D geometry and reflectance of a scene from multi-view images captured by conventional hand-held cameras. Our method follows an analysis-by-synthesis approach and consists of two phases. In the initialization phase, we use traditional SfM and MVS methods to reconstruct a virtual scene roughly matching the real scene. Then in the optimization phase, we adopt a hybrid approach to refine the geometry and reflectance, where the geometry is first optimized using an approximate differentiable rendering method, and the reflectance is optimized afterward using a physically-based differentiable rendering method. Our hybrid approach combines the efficiency of approximate methods with the high-quality results of physically-based methods. Extensive experiments on synthetic and real data demonstrate that our method can produce reconstructions with similar or higher quality than state-of-the-art methods while being more efficient. ## 1 Introduction Inverse rendering, which recovers 3D geometry, reflection properties and illumination from 2D images, is a long-standing and challenging problem in computer graphics and vision [15]. Benefiting from the advances in deep neural networks, many deep learning-based methods learn to obtain the materials and plane normals of near-flat objects in a data-driven way [12, 13, 14]. However, these methods struggle to deal with complex geometry, resulting in limited application in practice. Some other deep learning-based methods use 3D geometric representations such as signed distance fields [13, 12], tetrahedral meshes [16], and occupancy functions [15, 1, 17, 18], to handle more complex geometries. However, such geometry representations may require post-processing in order to be used in traditional graphics pipelines [11, 16], which may cause material information loss and affect the rendering image quality. The emergence of differentiable rendering techniques [13] means that image loss can be back-propagated along the rendering pipeline to solve inverse rendering problems, which promotes the development of multi-view inverse rendering tasks using triangular meshes as the geometry representation. Approximate differentiable rendering methods [1, 13, 14, 15] utilize simplified rendering processes and can solve the inverse rendering problem efficiently, but the simple material models they use usually lead to poor visual effects. On the contrary, physically-based differentiable rendering methods [12, 13] can reconstruct high-quality physically-based rendering (PBR) materials in the path tracing manner, but their realistic results come at a high computational cost. Although some methods have been proposed to accelerate the reconstruction process [12], they often impose strong assumptions on the lighting conditions to narrow down the parameter search space, which hinders their wide application. In this paper, we propose a hybrid differentiable rendering method to reconstruct triangular meshes and PBR materials from multi-view real-world images captured by conventional hand-held cameras (_e.g._ mobile phones) with uncontrolled lighting conditions. Our method consists of two phases. 
In the initialization phase, we use traditional methods to reconstruct a rough triangular mesh for the scene. Then in the optimization phase, we take a hybrid approach to optimize the scene geometry and PBR materials, where we first optimize the geometry using an approximate method, followed by the PBR materials using a physically-based method. Our novel formulation benefits from both the efficiency of approximate methods and the high quality of physically-based methods. Extensive experiments show that our method achieves significantly faster training and rendering speeds than state-of-the-art methods, while achieving results of comparable or better quality. In summary, our contributions include: * We propose a novel hybrid differentiable rendering optimization method based on triangular meshes, which utilizes an approximate differentiable rendering method to optimize the geometry, and a physically-based differentiable rendering method to recover the PBR materials. * The proposed pipeline is user-friendly and can work end-to-end. Furthermore, the optimized scene parameters can be easily deployed on mainstream commercial graphics engines without the need for conversion, making the method applicable to a wide range of applications such as virtual reality and augmented reality. * We conduct extensive experiments on both synthetic and real-world data, which demonstrate that our method outperforms or achieves comparable results to the state-of-the-art methods while being significantly more efficient. ## 2 Related Work Shape Reconstruction.Reconstructing object geometry is a long-standing problem in computer vision and graphics. In traditional methods, the Structure-from-Motion (SfM) method [17] is first applied to generate sparse feature points and find their correspondences, which are used to generate a sparse point cloud and a rough mesh. Then, the Multi-View Stereo (MVS) method [16] is leveraged to generate dense pixel-level matching. Finally, a dense mesh with vertex color is generated by the Poisson reconstruction method [15]. A few recent learning-based methods assume that the target object mesh is homeomorphic to the sphere. The method in [21] uses an image feature network (2D CNN) to extract perceptual features from the input image, and a cascaded mesh deformation network (GCN) to progressively deform an ellipsoid mesh into the desired 3D model. These methods can only reconstruct rough geometry, which is insufficient for downstream tasks such as VR and AR. Reflectance Reconstruction.The reflectance model, _i.e._, the spatially-varying bidirectional reflectance distribution function (SVBRDF), describes how light interacts with surfaces in the scene. Traditional SVBRDF reconstruction methods [13, 14, 15, 16] rely on dense input images measured using auxiliary equipment, _e.g._, a gantry. Some other works [15, 16, 17, 18] focus on exploiting the structure of SVBRDF parameter spaces to reduce the number of required input images. Additionally, some data-driven works [12, 19, 20] have been introduced recently to produce plausible SVBRDF estimations for near-flat objects using a small number of input images. Despite their ease of use, these methods struggle to handle more complex objects. Differentiable Rendering.Differentiable methods are reviewed in detail in [11]. Here we focus on methods closely related to our work. 
Traditional rendering pipelines, _e.g._, rasterization and ray tracing, are not differentiable due to some discrete steps, which means that they cannot be optimized by gradient descent in the way neural networks are. The emergence of differentiable rendering has changed all of this. Several approximation-based methods have been proposed recently. [15, 16, 17] compute approximated gradients to optimize scene parameters. Besides, [12, 18] replace the z-buffer-based triangle selection of a vanilla rasterization process with a probabilistic formulation. Unfortunately, the results are limited by the inaccuracies introduced by these approximations. On the contrary, Monte Carlo edge-sampling based methods [12, 19] provide unbiased gradient estimates capable of producing plausible results, but these methods are resource-consuming because of their path-tracing module, which hinders their wider adoption. Differentiable Rendering based Multi-view Inverse Rendering.The emergence of differentiable rendering boosts the development of inverse rendering techniques. Several prior works leverage differentiable rendering methods to solve the inverse problem. Wu et al. [15] propose a completely unsupervised method for face reconstruction from just one face image using an approximate differentiable method, which does not rely on existing 3D morphable face models. Luan et al. [18] leverage Monte Carlo edge-sampling based methods to reconstruct fine geometry and plausible material. However, they assume that the camera and light are collocated. In addition, a rough geometry scanned by professional equipment is also needed. Although these settings narrow down the solution space, they also reduce the generality of the method. The method proposed by Li et al. [20] reduces the requirements for the input: it takes as input multi-view images of wild scenes, reconstructs the initial geometry through MVS, and then uses a general Monte Carlo path-tracing differentiable renderer to optimize the material, illumination, and geometry. Although it can achieve good results, it is resource- and time-consuming. Figure 1: (a) Our method takes as input a set of images obtained by conventional hand-held cameras from several viewpoints and gets a rough initial model (b) by traditional methods. Then, we perform a novel analysis-by-synthesis optimization to refine the model's shape and reflectance separately, yielding a high-quality 3D model. In (c) and (d), we show a re-rendering of the result under a novel viewpoint and environmental lighting. In addition, we can edit the material (e). ## 3 Our Method ### Overview Our inverse rendering pipeline, shown in Figure 2, takes as input a set of RGB images captured by conventional hand-held cameras from multiple viewpoints. Using traditional methods (SfM and MVS), our method first reconstructs a virtual scene (in the form of a triangle mesh with vertex color) that roughly matches the real scene in the initialization phase. Afterward, in the optimization phase, our method first uses an approximate differentiable rendering method to optimize the geometry, then uses a physically-based differentiable rendering method to optimize the reflectance. Both methods iteratively reduce the image loss between rendered images and the ground truth for the same viewpoint. Finally, the optimized scene parameters can be used for a variety of applications such as novel view synthesis and relighting. 
Conceptually, our pipeline can be formulated as an analysis-by-synthesis problem \[\Theta^{*}=\operatorname*{arg\,min}_{\Theta}\mathcal{L}(\mathcal{I}(\Theta),\Theta;\tilde{\mathcal{I}}),\] where \(\Theta\) represents the scene parameters that need to be estimated, including geometry and reflectance parameters; \(\mathcal{L}\) is a loss function to be minimized; \(\tilde{\mathcal{I}}\) is the ground truth image, and \(\mathcal{I}(\Theta)\) is the image rendered by our hybrid differentiable rendering method using the same camera parameters as \(\tilde{\mathcal{I}}\). In the following, we present the details of each phase. ### Initialization Phase Our reconstruction pipeline is designed to work in complex, unstructured scenes where the input images carry no additional information, such as depth. To tackle these challenges, we leverage traditional methods to obtain an initial geometry. Specifically, as shown in Figure 2(a), we use the SfM method from [10] to generate sparse feature points in each image. These points are then used to reconstruct a sparse point cloud and a rough mesh by matching the feature points across images. Next, we use the MVS reconstruction method from [10] to generate dense pixel-level matching. Finally, a dense mesh is obtained from Poisson surface reconstruction [15], with vertex colors derived from the input images. In terms of lighting, we use an environment map as our lighting source. The resolution of the environment map is \(512\times 128\). The environment map is a learnable parameter that is inferred during the material optimization phase. Initially, we assume that the light is white and set the three channels of our environment map to \((0.5,0.5,0.5)\). ### Optimization Phase In the optimization phase, starting from the initial geometry obtained in the previous step, we adopt a hybrid differentiable rendering method to further optimize the geometry and reflectance. Specifically, we utilize an approximate differentiable rendering method to optimize the geometry, and a physically-based differentiable rendering method to optimize the reflectance. Our hybrid approach is motivated by the following observations in experiments: * Although the geometry reconstructed by SfM and MVS in the initialization phase provides a good approximation of the scene, there can be some defects in the boundary regions. * Approximate differentiable rendering methods dedicated to calculating the gradient of the geometry can be very efficient, but the visual quality of their results may be lower due to their simplified material models. * Physically-based methods can accurately reconstruct complex materials, but their high computational costs and resource demands may impede their efficiency. Figure 2: Overview of our inverse rendering pipeline. Our method takes as input a set of RGB images of an object obtained by conventional hand-held cameras from several viewpoints with uncontrolled lighting conditions. We then use SfM and MVS methods to reconstruct a virtual scene roughly matching the real scene in the initialization phase. In the optimization phase, we first optimize the geometry using the approximate differentiable rendering method, then optimize the reflectance using the physically-based differentiable rendering method. Both steps work iteratively by reducing the loss between the rendered images and the ground truth for the same camera pose. 
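To make the optimization loop concrete, the following is a minimal sketch of the geometry stage, assuming PyTorch3D's standard mesh regularizers and a hypothetical `render_silhouette(mesh, cam)` helper (e.g. a soft-rasterization silhouette renderer); the loss weights and learning rate follow the values reported in the experiments section, and the per-view iteration scheduling is simplified. This is an illustrative sketch, not the authors' released implementation.

```python
import torch
from pytorch3d.loss import (mesh_edge_loss, mesh_laplacian_smoothing,
                            mesh_normal_consistency)

def silhouette_iou_loss(sil_pred, sil_gt):
    # L_sil = 1 - ||gt (x) pred||_1 / ||gt (+) pred - gt (x) pred||_1
    inter = (sil_gt * sil_pred).abs().sum()
    union = (sil_gt + sil_pred - sil_gt * sil_pred).abs().sum()
    return 1.0 - inter / union

def geometry_stage(mesh0, views, render_silhouette, iters=200, lr=1e-3,
                   w_sil=1.0, w_lap=1.0, w_edge=1.0, w_normal=0.01):
    # Optimize per-vertex offsets on top of the initial SfM/MVS mesh.
    offsets = torch.zeros_like(mesh0.verts_packed(), requires_grad=True)
    opt = torch.optim.Adam([offsets], lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        mesh = mesh0.offset_verts(offsets)
        loss = sum(w_sil * silhouette_iou_loss(render_silhouette(mesh, cam), sil)
                   for cam, sil in views)
        loss = loss + w_lap * mesh_laplacian_smoothing(mesh)
        loss = loss + w_edge * mesh_edge_loss(mesh)
        loss = loss + w_normal * mesh_normal_consistency(mesh)
        loss.backward()
        opt.step()
    return mesh0.offset_verts(offsets.detach())
```

Optimizing vertex offsets rather than absolute positions keeps the initial SfM/MVS mesh as a fixed reference for the regularizers.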
Specifically, starting from the triangle mesh obtained in the initialization phase, we further optimize its vertex positions \(\theta_{\mathrm{p}}\) which define the geometry, as well as two 2D texture maps that contain the diffuse albedo \(\theta_{\mathrm{d}}\) and specular albedo \(\theta_{\mathrm{s}}\) respectively and define the SVBRDF. In this way, our scene parameters can be represented as \(\mathbf{\Theta}=(\theta_{\mathrm{p}},\theta_{\mathrm{d}},\theta_{\mathrm{s}})\). We first optimize the vertex positions using an approximate differentiable rendering method. Afterward, we optimize the diffuse albedo and specular albedo using a physically-based differentiable rendering method. Details of the optimization are presented below. Geometry Optimization.To optimize the geometry, we adopt a differentiable model similar to Soft Rasterizer (Liu _et al._, 2019) to compute a silhouette image for each input image using the same camera parameters (see the supplementary materials for details of the computation), to indicate the occupancy from the same view directions as the input images. Then we minimize the following loss function to derive the vertex positions \(\theta_{\mathrm{p}}\): \[\mathcal{L}_{\text{geo}}=\lambda_{\text{sil}}\mathcal{L}_{\text{sil}}+ \lambda_{\text{lap}}\mathcal{L}_{\text{lap}}+\lambda_{\text{normal}}\mathcal{ L}_{\text{normal}}+\lambda_{\text{edge}}\mathcal{L}_{\text{edge}}\;, \tag{1}\] where \(\mathcal{L}_{\text{sil}}\) is a silhouette loss that indicates the consistency between the computed silhouette images \(\mathcal{I}_{\text{sil}}(\theta_{\mathrm{p}})\) and the ground-truth ones \(\tilde{\mathcal{I}}_{\text{sil}}\) derived from the input (Liu _et al._, 2019): \[\mathcal{L}_{\text{sil}}=1-\left\|\tilde{\mathcal{I}}_{\text{sil}}\otimes \mathcal{I}_{\text{sil}}\right\|_{1}/\left\|\tilde{\mathcal{I}}_{\text{sil}} \oplus\mathcal{I}_{\text{sil}}-\tilde{\mathcal{I}}_{\text{sil}}\otimes \mathcal{I}_{\text{sil}}\right\|_{1},\] with \(\otimes\) and \(\oplus\) being the element-wise product and sum operators, respectively. The other terms in \(\mathcal{L}_{\text{geo}}\) are regularizers. Among them, \(\mathcal{L}_{\text{lap}}=\|\mathbf{L}\mathbf{V}\|^{2}\) is a mesh Laplacian loss \(\mathcal{L}_{\text{lap}}\) of a mesh with \(n\) vertices, \(\mathbf{V}\) is the \(n\times 3\) coordinates matrix, and \(\mathbf{L}\in\mathbb{R}^{n\times n}\) is the Laplacian matrix of the mesh (See (Nealen _et al._, 2006) for details). \(\mathcal{L}_{\text{normal}}\,=\sum_{i,j}\left[1-\left(\mathbf{n}_{i}\cdot\mathbf{n}_{ j}\right)\right]^{2}\) is a normal consistency loss to make the normals of adjacent faces to vary slowly, where the sum is over all triangle pairs \((i,j)\) sharing a common edge, and \(\mathbf{n}_{i}\) and \(\mathbf{n}_{j}\) are the face normals of the two specific triangles. \(\mathcal{L}_{\text{edge}}\,=\sqrt{\sum_{i}e_{i}^{2}}\) is an edge length loss to avoid long edges that can cause ill-shaped triangles, where \(e_{i}\) denotes the length of the \(i\)-th edge. Reflectance Optimization.Our reflectance optimization is based on the rendering equation proposed in (Kajiya, 1986). For a surface point \(x\) with surface normal \(n\), let \(L_{i}(\omega_{i};x)\) be the incident light intensity at location \(x\) along the direction \(\omega_{i}\), and SVBRDF \(f_{r}(\omega_{o},\omega_{i};x)\) be the reflectance coefficient of the material at location \(x\) for incident light direction \(\omega_{i}\) and viewing direction \(\omega_{o}\). 
Then the observed light intensity \(L_{o}(\omega_{o};x)\) is an integral over the upper hemisphere \(\Omega\): \[L_{o}(\omega_{o};x)=\int_{\Omega}L_{i}(\omega_{i};x)\,f_{r}(\omega_{o},\omega_{i};x)\,(\omega_{i}\cdot n)\,\mathrm{d}\omega_{i}.\] We leverage a modified Cook-Torrance (CT) model (Cook and Torrance, 1982) based on (Zeltner _et al._, 2021) as our reflectance model, which models a rough surface with diffuse and specular reflection and no refraction. Mathematically, our reflectance model can be described as follows: \[f_{r}\left(\omega_{o},\omega_{i};x\right)=\theta_{d}(x)+\theta_{s}\left(\omega_{o},\omega_{i};x\right),\] where \(\theta_{d}\) and \(\theta_{s}\) are the diffuse and specular reflectance: \[\theta_{d}(x)=\rho_{d}(x)/\pi,\] \[\theta_{s}\left(\omega_{o},\omega_{i};x\right)=\rho_{s}(x)\frac{D(h,\alpha)G\left(\omega_{o},\omega_{i},n(x)\right)}{\left(n(x)\cdot\omega_{i}\right)\left(n(x)\cdot\omega_{o}\right)\pi}.\] Specifically, \(\rho_{d}\) is the diffuse albedo, \(\rho_{s}\) is the specular albedo, and \(h\) is the halfway vector. \(D(h,\alpha)\) is the microfacet distribution function, for which we use GGX (Walter _et al._, 2007). \(\alpha\) and \(n(x)\) denote the surface's roughness and normal, respectively. \(G\) is a shadowing-masking function. Like other works, we also ignore the Fresnel effect, which cannot be observed in our scenes. To optimize the reflectance parameters \(\theta_{\mathrm{r}}=(\theta_{\mathrm{d}},\theta_{\mathrm{s}})\), we minimize the following reflectance loss: \[\mathcal{L}_{\text{ref}}=\lambda_{\text{rgb}}\mathcal{L}_{\text{rgb}}(\mathcal{I}_{\text{rgb}}(\theta_{\mathrm{r}});\tilde{\mathcal{I}}_{\text{rgb}})+\lambda_{\text{reg}}\mathcal{R}(\theta_{\mathrm{r}}), \tag{2}\] where \(\mathcal{L}_{\text{rgb}}=\left\|\tilde{\mathcal{I}}_{\text{rgb}}-\mathcal{I}_{\text{rgb}}(\theta_{\mathrm{r}})\right\|_{1}\) measures the \(\ell_{1}\) norm of the difference between the rendered color image \(\mathcal{I}_{\text{rgb}}(\theta_{\mathrm{r}})\) and the ground truth \(\tilde{\mathcal{I}}_{\text{rgb}}\). \(\mathcal{R}(\theta_{\mathrm{r}})\) is a regularizer for the diffuse albedo \(\theta_{\mathrm{d}}\) and specular albedo \(\theta_{\mathrm{s}}\). Similar to (Schmitt _et al._, 2020), we assume that nearby pixels with similar diffuse albedos have similar specular albedos. So we choose \[\mathcal{R}(\theta_{\mathrm{r}})=\sum\nolimits_{\mathbf{p}}\left\|\theta_{\mathrm{s}}[\mathbf{p}]-\left(\sum\nolimits_{\mathbf{q}}\theta_{\mathrm{s}}[\mathbf{q}]k_{\mathbf{p},\mathbf{q}}\right)/\left(\sum\nolimits_{\mathbf{q}}k_{\mathbf{p},\mathbf{q}}\right)\right\|_{1},\] where \(k_{\mathbf{p},\mathbf{q}}=\exp\left(-\frac{\|\mathbf{p}-\mathbf{q}\|_{2}^{2}}{2\sigma_{1}^{2}}-\frac{(\theta_{\mathrm{d}}[\mathbf{p}]-\theta_{\mathrm{d}}[\mathbf{q}])^{2}}{2\sigma_{2}^{2}}\right)\) and \(\mathbf{p},\mathbf{q}\) are two specific mesh vertices. ## 4 Experiments We perform a quantitative and qualitative evaluation of our method using extensive experiments on both synthetic and real data, and compare it with state-of-the-art methods. Implementation Details.We implement the geometry reconstruction module in the initialization phase using COLMAP (Schonberger _et al._, 2016; Schonberger and Frahm, 2016), a general-purpose Structure-from-Motion and Multi-View Stereo pipeline. 
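Returning briefly to the reflectance model above, here is a minimal numerical sketch of its evaluation, using the GGX distribution and, as one common choice (the specific form of \(G\) is not spelled out above, so this is an assumption), a separable Smith shadowing-masking term:

```python
import numpy as np

def ggx_D(n_dot_h, alpha):
    """GGX microfacet normal distribution D(h, alpha)."""
    a2 = alpha * alpha
    denom = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0
    return a2 / (np.pi * denom * denom)

def smith_G1(n_dot_v, alpha):
    """One-sided Smith masking term for GGX (assumed form of G)."""
    a2 = alpha * alpha
    return 2.0 * n_dot_v / (n_dot_v + np.sqrt(a2 + (1.0 - a2) * n_dot_v**2))

def f_r(rho_d, rho_s, alpha, n, w_i, w_o):
    """Evaluate f_r = rho_d/pi + rho_s * D*G / (pi (n.wi)(n.wo))."""
    h = (w_i + w_o) / np.linalg.norm(w_i + w_o)  # halfway vector
    ndi, ndo, ndh = n @ w_i, n @ w_o, n @ h
    D = ggx_D(ndh, alpha)
    G = smith_G1(ndi, alpha) * smith_G1(ndo, alpha)
    return rho_d / np.pi + rho_s * D * G / (np.pi * ndi * ndo)
```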
In the optimization phase, we implement the approximate differentiable rendering based on PyTorch3D (Ravi _et al._, 2020), and the physically-based differentiable rendering based on Mitsuba 2 (Nimier-David _et al._, 2019). The rest of our optimization pipeline is implemented with PyTorch using the Adam optimizer. For geometry optimization, we use the weights \((\lambda_{\text{sil}},\lambda_{\text{lap}},\lambda_{\text{edge}},\lambda_{\text{normal}})=(1.0,1.0,1.0,0.01)\) for Eq. 1, and \(0.001\) for the learning rate. The ground-truth silhouette images required for Eq. 1 are obtained either from rendering (for synthetic data) or using a background removal tool (for real data). For reflectance optimization, we use the weights \((\lambda_{\text{rgb}},\lambda_{\text{reg}})=(0.1,1.0)\) for Eq. 2, and \(0.0001\) for the learning rate. To avoid excessive memory consumption in the physically-based rendering module, we set the maximum depth of ray-tracing bounces to \(3\), the downsampling factor of raw images to \(4\), and the number of samples per pixel (spp) to \(4\) in the iterative optimization phase. We train and test our method on one NVIDIA RTX 3090 GPU. For geometry optimization, our implementation uses \(200\) iterations per view and takes about \(0.5\) hours per example. For the reflectance optimization phase, our implementation uses \(400\) iterations per view and takes \(1.5\)\(\sim\)\(2.5\) hours per example. **Synthetic Data.** Our synthetic data is created using meshes and textures from [22] and the Internet, covering different materials and geometries. Concretely, for each object, we render 400 images with colored environmental lighting using graphics engines, 300 for training and the remaining 100 for novel view synthesis testing, whose viewpoints are evenly distributed inside the upper hemisphere. In addition, we also render the same object with two other environment lighting maps as the ground truth for the subsequent relighting tests. To quantitatively evaluate our method, we use three image quality metrics--LPIPS [23], SSIM and PSNR--to compare the rendering results with the corresponding ground truth images. Figure 3 shows examples of results from our method and their ground truth, as well as their evaluation metric values. Both the quantitative metrics and the qualitative visualization show that our novel views and relighting results match the ground truth closely. Fig. 6 shows the diffuse and specular albedo for the lemon model. **Real Data.** We evaluate our method on multiple real-world images from the DTU datasets [1], where the objects are glossy and the illumination is static across different views. We use two objects from the dataset, _shiny scan114 buddha_ and _scan110 ghost_, and discard photos with strong shadows. Our inverse rendering results are shown in Figure 4. We can see that our pipeline generates photo-realistic novel views, and results of relighting and material editing. **Comparison with State-of-the-art Methods.** As far as we are aware, there is no prior work tackling exactly the same problem as our work: reconstructing triangle meshes and PBR materials from multi-view real-world object images with uncontrolled lighting conditions. The methods closest to our method are [10, 11], but no source code or data has been released. 
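As a reference for how the three metrics above can be computed, the following is an illustrative sketch using common open-source packages (not necessarily the authors' evaluation code):

```python
import lpips
import torch
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

lpips_fn = lpips.LPIPS(net="alex")  # LPIPS expects NCHW tensors in [-1, 1]

def evaluate(pred, gt):
    """pred, gt: float numpy images in [0, 1] of shape (H, W, 3)."""
    psnr = peak_signal_noise_ratio(gt, pred, data_range=1.0)
    # channel_axis requires scikit-image >= 0.19
    ssim = structural_similarity(gt, pred, channel_axis=2, data_range=1.0)
    to_t = lambda im: torch.from_numpy(im).float().permute(2, 0, 1)[None] * 2 - 1
    lp = lpips_fn(to_t(pred), to_t(gt)).item()
    return lp, ssim, psnr
```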
It is worth noting that our method addresses the shortcomings of [11] in dealing with geometry, while overcoming the limitation of [10], which requires the camera and the point light to be at the same position and the images to be taken in dark scenes. We compare our method to the most related neural rendering approaches, including PhySG [23], NeRF [13] and IDR [27], in terms of novel view synthesis. These approaches differ from our method in the model of light transport: NeRF describes geometry and appearance with an occupancy (density) function and obtains the pixel color from location and viewing direction by ray marching, while PhySG and IDR use an SDF to represent geometry and material. In addition, unlike the aforementioned two methods, which focus solely on novel view synthesis, PhySG also performs inverse rendering tasks, using Spherical Gaussian functions to describe materials and illumination. Qualitative and quantitative comparisons between the different methods on our synthetic data are depicted in Figure 5 and Table 1. The unsatisfactory performance of PhySG may be due to the presence of strong specular highlights in our synthetic data, which are difficult to describe using the Spherical Gaussian functions employed by PhySG. NeRF performs relatively poorly in view synthesis because its volumetric representation does not concentrate colors around surfaces as well as surface-based approaches. IDR does a better job in view synthesis because of its view-dependence model. However, it still struggles to synthesize specular highlights due to its non-physical model of appearance. In contrast, our method models highlights well, benefiting from our PBR materials. We also compare the running time, where our training time is the sum of the initialization time and the optimization time. Thanks to the combination of efficiency from the approximate method and accuracy from the physically-based method, our hybrid approach runs significantly faster than the baselines both in training and testing. In addition, our optimized geometry and material can be deployed directly on mainstream graphics engines. We also compare our method with the latest related work IRON [22], which adopts neural representations and also leverages a hybrid optimization scheme. Note that IRON assumes a point light source, while our method allows for uncontrolled lighting conditions. As a result, IRON performs poorly in complex lighting situations. Figure 9 compares the two methods under different lighting conditions. Our method can achieve similarly good results to IRON on point-lighting data, while being superior under more complex lighting. Figure 3: Results of our method on the synthetic data. We compare our predicted novel views and relighting results to the ground truth images. The numbers above each result indicate the LPIPS, SSIM, and PSNR metrics calculated on \(512\times 512\) images, respectively. **Please zoom in to see the details, especially the challenging specular highlights on the surface of the objects.** Figure 4: With our pipeline, we can synthesize novel views and edit the materials and lighting of the real-world captures. Figure 5: We qualitatively compare our results with PhySG [22], NeRF [17] and IDR [20]. Figure 6: Our diffuse and specular albedo. We have adjusted the tonal range to make the variation of the specular albedo more visible. 
In particular, IRON cannot handle the highlights on the surface of objects under environment lighting because their light source cameras are in the same position, while our method can handle it well. In addition, for some flat data like the hotdog in Figure 8, IRON struggles to obtain appropriate geometry, while our method can produce an accurate reconstruction. Ablation Study.Figure 7 shows our ablation study on the number of input images, which depicts how the number of input images affects our reconstruction quality. With too few input images, the optimization may become highly under-constrained, making it difficult to produce accurate synthesis results. In our experiments, \(150\) images are sufficient to produce a quite good result. With \(250\) or more input images, our results will closely match the ground truth. More ablation studies can be found in the supplementary materials. \begin{table} \begin{tabular}{l c c c c c c c c c c c c c c c} \hline \hline & \multicolumn{4}{c}{Synthetic Pig (512\(\times\)512)} & \multicolumn{4}{c}{Synthetic Pomegranate (512\(\times\)512)} & \multicolumn{4}{c}{Synthetic Hotdog (512\(\times\)512)} \\ & \multicolumn{4}{c}{\#train=300, \#test=100} & \multicolumn{4}{c}{\#train=300, \#test=100} & \multicolumn{4}{c}{\#train=300, \#test=100} \\ \cline{2-13} & \multicolumn{4}{c}{Training} & \multicolumn{4}{c}{Testing} & \multicolumn{4}{c}{Training} & \multicolumn{4}{c}{Testing} & \multicolumn{4}{c}{Training} \\ & LPIPS\(\downarrow\) & SSIM\(\uparrow\) & PSNR\(\uparrow\) & Time (h)\(\downarrow\) & Time (s)\(\downarrow\) & LPIPS\(\downarrow\) & SSIM\(\uparrow\) & PSNR\(\uparrow\) & Time (h)\(\downarrow\) & Time (s)\(\downarrow\) & LPIPS\(\downarrow\) & SSIM\(\uparrow\) & PSNR\(\uparrow\) & Time (h)\(\downarrow\) & Time (s)\(\downarrow\) \\ \hline PhySG & \(0.2200\) & \(0.784\) & \(17.39\) & \(\sim 15\) & \(\sim 4\) & \(0.0425\) & \(0.928\) & \(23.98\) & \(\sim 15\) & \(\sim 4\) & \(0.1473\) & \(0.8319\) & \(22.30\) & \(\sim 15\) & \(\sim 4\) \\ NeRF & \(0.0878\) & \(0.859\) & \(27.19\) & \(\sim 15\) & \(\sim 10\) & \(0.0600\) & \(0.918\) & \(31.91\) & \(\sim 15\) & \(\sim 10\) & \(0.0453\) & \(0.921\) & \(27.90\) & \(\sim 15\) & \(\sim 10\) \\ IDR & \(0.0183\) & \(0.955\) & \(29.94\) & \(\sim 18\) & \(\sim 5\) & \(0.0211\) & \(\mathbf{0.942}\) & \(26.85\) & \(\sim 18\) & \(\sim 5\) & \(0.0303\) & \(0.945\) & \(30.97\) & \(\sim 18\) & \(\sim 5\) \\ Ours & \(\mathbf{0.0124}\) & \(\mathbf{0.975}\) & \(\mathbf{35.01}\) & \(\sim\mathbf{3}\) & \(\sim\mathbf{1}\) & \(0.0230\) & \(0.903\) & \(\mathbf{34.28}\) & \(\sim\mathbf{2.5}\) & \(\sim\mathbf{1}\) & \(\mathbf{0.0088}\) & \(\mathbf{0.951}\) & \(\mathbf{33.89}\) & \(\sim\mathbf{3}\) & \(\sim\mathbf{1}\) \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison between PhySG [22], NeRF [17], IDR [21] and our method. #train and #test denote the size of training set and test set, respectively. Training time is in hours and rendering time is in seconds. Figure 8: Comparison between our method and IRON [22] on the hotdog data. Figure 7: Ablation study on the number of input images. We show a novel view synthesis result of the reconstruction for each object, given a different number of input images, whose viewpoints are evenly distributed inside the upper hemisphere. **We recommend that readers zoom in to see the details of the picture, especially the folds at the top and the specular highlights on the face.** Figure 9: Comparison between our method and IRON [22]. 
The first and third rows are point lighting, and the second and fourth rows are environment lighting. The numbers above each result show the LPIPS, SSIM and PSNR on 512\(\times\)512 images, respectively. ## 5 Conclusion We introduce a novel, efficient hybrid differentiable rendering method to reconstruct a triangular object mesh and PBR materials from multi-view real-world images with uncontrolled lighting conditions. Unlike prior works that require massive resource consumption or an approximated rendering process, we utilize an approximate method to optimize the geometry and a physically-based method to estimate the reflectance, so that we benefit from both the efficiency of the former and the high quality of the latter. In general, our method can handle a wide range of real-world scenes, providing an attractive and efficient solution and enabling photo-realistic novel view synthesis and relighting applications. Limitations and future work.Our method can have difficulties with very thin geometry, which is a common problem of mesh-based methods. In addition, our method optimizes geometry and material separately. A potential direction for future work is a unified pipeline that optimizes geometry and material simultaneously, which should further improve the result quality. ## Acknowledgments This work was supported by the NSFC under Grant 62072271.
2301.13001
On the minimum size of linear sets
Recently, a lower bound was established on the size of linear sets in projective spaces that intersect a hyperplane in a canonical subgeometry. There are several constructions showing that this bound is tight. In this paper, we generalize this bound to linear sets meeting some subspace $\pi$ in a canonical subgeometry. We obtain a tight lower bound on the size of any $\mathbb F_q$-linear set spanning $\text{PG}(d,q^n)$ in the case that $n \leq q$ and $n$ is prime. We also give constructions of linear sets attaining equality in the former bound, both in the case that $\pi$ is a hyperplane, and in the case that $\pi$ is a lower dimensional subspace.
Sam Adriaensen, Paolo Santonastaso
2023-01-30T15:44:01Z
http://arxiv.org/abs/2301.13001v2
# On the minimum size of linear sets

###### Abstract

Recently, a lower bound was established on the size of linear sets in projective spaces that intersect a hyperplane in a canonical subgeometry. There are several constructions showing that this bound is tight. In this paper, we generalize this bound to linear sets meeting some subspace \(\pi\) in a canonical subgeometry. We also give constructions of linear sets attaining equality in this bound, both in the case that \(\pi\) is a hyperplane, and in the case that \(\pi\) is a lower dimensional subspace.

**Keywords:** Projective geometry, Linear set, Subgeometry.

**MSC2020:** 51E20, 05B25.

## 1 Introduction

Linear sets are certain point sets in projective spaces, generalizing the notion of a subgeometry. They have proven themselves to be very useful in constructing interesting objects in projective spaces, such as blocking sets [26] and KM-arcs [9], and have been used to construct Hamming and rank metric codes [27, 23, 24, 18, 1, 20]. For a survey on linear sets, we refer the reader to [19, 13]. Given the usefulness of linear sets, their recent spurt in popularity within the field of finite geometry is far from surprising.

One of the most natural questions arising in the study of linear sets is establishing lower and upper bounds on their size. There is a quite trivial upper bound on the size of linear sets, and the study of linear sets attaining equality in this bound can be traced back to a paper by Blokhuis and Lavrauw [5]. However, finding good lower bounds on the size of linear sets seems to be a harder problem. Yet it is an interesting endeavor, e.g. due to its connection with the weight distribution of linear rank metric codes [21].

As a consequence of the celebrated result on the number of directions determined by a function over a finite field [2, 4], Bonoli and Polverino established a lower bound on the size of certain linear sets on a projective line. More specifically, they proved the following result (for the definitions, we refer to Section 2).

**Result 1.1** ([6, Lemma 2.2]).: _If \(L_{U}\) is an \(\mathbb{F}_{q}\)-linear set of rank \(n\) on \(\operatorname{PG}(1,q^{n})\), and \(L_{U}\) contains at least one point of weight 1, then \(|L_{U}|\geq q^{n-1}+1\)._

De Beule and Van de Voorde managed to remove the condition on the rank from this bound. We note that linear sets of rank greater than \(n\) on \(\operatorname{PG}(1,q^{n})\) are not interesting to study, since they necessarily contain all the points of the projective line. Hence it is natural to limit the study to linear sets whose rank is at most \(n\).

**Result 1.2** ([8, Theorem 1.2]).: _If \(L_{U}\) is an \(\mathbb{F}_{q}\)-linear set of rank \(k\), with \(1<k\leq n\), on \(\operatorname{PG}(1,q^{n})\), and \(L_{U}\) contains at least one point of weight 1, then \(|L_{U}|\geq q^{k-1}+1\)._

Using an inductive argument, they obtained a bound on the size of a linear set in a higher dimensional projective space. Using Lemma 3.2, which we prove later in this paper, this is equivalent to the following result.

**Result 1.3** ([8, Theorem 4.4]).: _Let \(L_{U}\) be an \(\mathbb{F}_{q}\)-linear set of rank \(k>d\) in \(\mathrm{PG}(d,q^{n})\).
If \(L_{U}\) meets some hyperplane \(\Omega\) in a canonical \(\mathbb{F}_{q}\)-subgeometry of \(\Omega\), then_
\[|L_{U}|\geq q^{k-1}+q^{k-2}+\ldots+q^{k-d}+1.\]

De Beule and Van de Voorde note directly after their statement of the above result that they would like to find lower bounds on linear sets satisfying less restrictive conditions. Furthermore, Jena and Van de Voorde [11, §2.5 (B)] state that they believe the above lower bound to hold for all \(\mathbb{F}_{q}\)-linear sets of rank \(k\) that span \(\mathrm{PG}(d,q^{n})\), if \(n\) is prime and \(k\leq d+n\). In this article we will generalize the above result by dropping the condition that \(\Omega\) is a hyperplane.

**Theorem 1.4**.: _Let \(L_{U}\) be an \(\mathbb{F}_{q}\)-linear set of rank \(k\) in \(\mathrm{PG}(d,q^{n})\). Suppose that there exists some \((r-1)\)-space \(\Omega\), with \(r<k\), such that \(L_{U}\) meets \(\Omega\) in a canonical \(\mathbb{F}_{q}\)-subgeometry of \(\Omega\). Then_
\[|L_{U}|\geq q^{k-1}+\ldots+q^{k-r}+I_{\Omega},\]
_where \(I_{\Omega}\) denotes the number of \(r\)-spaces through \(\Omega\) containing a point of \(L_{U}\setminus\Omega\)._

This yields the following recursive lower bound on the size of a linear set.

**Theorem 1.5**.: _Let \(B_{q,n}(r,k,d)\) denote the minimum size of an \(\mathbb{F}_{q}\)-linear set of rank \(k\) that spans \(\mathrm{PG}(d,q^{n})\) and intersects an \((r-1)\)-space in a canonical subgeometry. Then_
\[B_{q,n}(r,k,d)\geq\begin{cases}q^{k-1}+B_{q,n}(r-1,k-1,d-1)&\text{if }r>0,\\ q^{k-\left\lfloor\frac{k}{d+1}\right\rfloor}+B_{q,n}\left(0,k-\left\lfloor\frac{k}{d+1}\right\rfloor,d-1\right)&\text{if }r=0.\end{cases}\]

Note in particular that for the linear set \(L_{U}\) in Theorem 1.4, this implies that
\[|L_{U}|\geq q^{k-1}+\ldots+q^{k-r}+q^{k-r-\left\lfloor\frac{k-r}{d-r+1}\right\rfloor}+B_{q,n}\left(0,k-r-\left\lfloor\frac{k-r}{d-r+1}\right\rfloor,d-r-1\right).\]

In \(\mathrm{PG}(1,q^{n})\), the bound of De Beule and Van de Voorde is tight. For every rank \(k\leq n\), there exist so-called \((k-1)\)-clubs of rank \(k\). These linear sets contain (an abundance of) points of weight \(1\), and their size matches the bound in Result 1.2. Lunardon and Polverino [15] provided the first less trivial family of linear sets of rank \(n\) reaching equality in Result 1.1. Their example was extended by Jena and Van de Voorde [11] to a very large family of linear sets of general rank, attaining equality in Result 1.2. More recently, there have been other constructions of such linear sets, and partial classification results, see Napolitano et al. [16]. Moreover, Jena and Van de Voorde generalized their constructions to higher dimensions, to obtain linear sets attaining equality in the bound of Result 1.3, some of which also satisfy the conditions of Result 1.3 [11, §2.5 (B)].

In this article, we study the construction by Jena and Van de Voorde in general dimension, and we provide a sufficient condition for these linear sets to satisfy the hypothesis of Result 1.3. We also generalize the construction of Napolitano et al. to higher dimensions. Furthermore, we construct linear sets in \(\mathrm{PG}(d,q^{n})\) satisfying the conditions of Theorem 1.4, and attaining equality in the corresponding bound, where \(n\) is not prime. The size of these linear sets is smaller than the bound from Result 1.3; hence this illustrates the necessity of the conditions imposed in Result 1.3 in case \(n\) is not prime.
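The recursion in Theorem 1.5 is easy to evaluate mechanically. The following Python sketch (our own illustration, not part of the paper) computes the right-hand side of the recursion; it assumes, as a base case, that in the single point \(\mathrm{PG}(0,q^{n})\) every non-empty linear set has size \(1\), so the bound bottoms out at \(1\).

```python
def B_lower(q, r, k, d):
    """Evaluate the right-hand side of the recursion in Theorem 1.5.

    This is a lower bound for B_{q,n}(r, k, d), not the exact minimum.
    Assumed base case: in PG(0, q^n) every non-empty linear set is a
    single point, so the bound is 1 once d reaches 0.
    """
    if d == 0:
        return 1
    if r > 0:
        return q ** (k - 1) + B_lower(q, r - 1, k - 1, d - 1)
    m = k // (d + 1)                      # floor(k / (d + 1))
    return q ** (k - m) + B_lower(q, 0, k - m, d - 1)

# Sanity check: for q = 2, r = d = 2, k = 6 the recursion unfolds as
# 2^5 + 2^4 + 1 = 49, matching q^{k-1} + q^{k-2} + 1 from Result 1.3.
assert B_lower(2, 2, 6, 2) == 2 ** 5 + 2 ** 4 + 1
```

For \(r=d\), the recursion telescopes to exactly \(q^{k-1}+\ldots+q^{k-d}+1\), the bound of Result 1.3.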
**Structure of the paper.** Section 2 contains preliminary results on linear sets. Section 3 contains the proof of Theorems 1.4 and 1.5. We also discuss some sufficient conditions on linear sets for the hypothesis of Result 1.3 to hold. In addition, we deduce from Theorem 1.4 that the rank of a linear set is determined by its size and the minimum weight of its points, and that it is spanned by its points of minimum weight. In Section 4 we discuss linear sets attaining equality in Result 1.3. More specifically, we show a sufficient condition for the minimum size linear sets of [11] to satisfy the hypothesis of Result 1.3, and we generalize the construction from [16] to higher dimension. Section 5 contains constructions of linear sets attaining equality in Theorem 1.4. Finally, Section 6 contains some concluding remarks.

## 2 Preliminaries

Throughout this article, \(q\) will always denote a prime power, and \(\mathbb{F}_{q}\) will denote the finite field of order \(q\). The \(d\)-dimensional projective space over \(\mathbb{F}_{q}\) will be denoted by \(\operatorname{PG}(d,q)\). If the projective space is constructed from a \((d+1)\)-dimensional \(\mathbb{F}_{q}\)-vector space \(V\), and we want to emphasize the underlying vector space, we might also denote the projective space as \(\operatorname{PG}(V,\mathbb{F}_{q})\). We note that the number of points in \(\operatorname{PG}(d,q)\) equals \(\frac{q^{d+1}-1}{q-1}=q^{d}+q^{d-1}+\ldots+q+1\).

**Notation 2.1**.: Throughout the article, when working in \(\operatorname{PG}(d,q)=\operatorname{PG}(\mathbb{F}_{q}^{d+1},\mathbb{F}_{q})\), we denote the vectors of \(\mathbb{F}_{q}^{d+1}\) as \((x_{0},\ldots,x_{d})\), i.e. we label the coordinate positions from \(0\) to \(d\). The \(i^{\text{th}}\) standard basis vector will be denoted as
\[\mathbf{e}_{i}=(0,\ldots,0,\underbrace{1}_{i^{\text{th}}\text{ position}},0,\ldots,0),\]
and the corresponding point in \(\operatorname{PG}(d,q)\) will be denoted as \(E_{i}\).

### Linear sets

Let \(V\) be a \((d+1)\)-dimensional vector space over \(\mathbb{F}_{q^{n}}\). Then \(V\) is also a \((d+1)n\)-dimensional vector space over \(\mathbb{F}_{q}\). Let \(U\) denote an \(\mathbb{F}_{q}\)-subspace of \(V\). Then
\[L_{U}=\{\langle u\rangle_{\mathbb{F}_{q^{n}}}:u\in U\setminus\{\mathbf{0}\}\}\]
is a set of points in \(\operatorname{PG}(d,q^{n})\). Sets of this type are called \(\mathbb{F}_{q}\)_-linear sets_, and the \(\mathbb{F}_{q}\)-dimension of \(U\) is called the _rank_ of \(L_{U}\). We note that if \(U_{1}\) and \(U_{2}\) are \(\mathbb{F}_{q}\)-subspaces, and \(L_{U_{1}}\) and \(L_{U_{2}}\) are equal as point sets in \(\operatorname{PG}(d,q^{n})\), this need not imply that \(\dim_{\mathbb{F}_{q}}U_{1}=\dim_{\mathbb{F}_{q}}U_{2}\). Hence, the rank of a linear set \(L_{U}\) is generally not unambiguously defined by \(L_{U}\) as a point set in \(\operatorname{PG}(d,q^{n})\), without taking into account the underlying subspace \(U\).

Given an \(\mathbb{F}_{q^{n}}\)-subspace \(W\leq V\), we define the _weight_ of \(\Omega=\operatorname{PG}(W,\mathbb{F}_{q^{n}})\) to be
\[\operatorname{w}_{L_{U}}(\Omega)=\dim_{\mathbb{F}_{q}}(U\cap W).\]
Note that \(\operatorname{w}_{L_{U}}(\Omega)\) equals the rank of the linear set \(L_{U\cap W}=L_{U}\cap\Omega\). For each \(i\in\{1,\ldots,n\}\), let \(N_{i}(L_{U})\) denote the number of points in \(\operatorname{PG}(d,q^{n})\) of weight \(i\). We will simply denote this as \(N_{i}\) if \(L_{U}\) is clear from context.
The numbers \(N_{1},\ldots,N_{n}\) are called the _weight distribution_ of \(L_{U}\). In addition, the _weight spectrum_ of \(L_{U}\) is the vector \((i_{1},\ldots,i_{t})\) with \(i_{1}<\ldots<i_{t}\) and
\[\{i_{1},\ldots,i_{t}\}=\{\operatorname{w}_{L_{U}}(P):P\in L_{U}\}=\{i\in\{1,\ldots,n\}:N_{i}>0\}.\]

Let \(k>0\) denote the rank of \(L_{U}\). Then the weight distribution satisfies the following properties.
\[|L_{U}| =N_{1}+\ldots+N_{n}, \tag{1}\]
\[\sum_{i=1}^{n}N_{i}\frac{q^{i}-1}{q-1} =\frac{q^{k}-1}{q-1}, \tag{2}\]
\[|L_{U}| \leq\frac{q^{k}-1}{q-1}, \tag{3}\]
\[|L_{U}| \equiv 1\;(\mathrm{mod}\,q). \tag{4}\]

Let \(T\) be an \(\mathbb{F}_{q}\)-subspace of \(V\) with \(\dim_{\mathbb{F}_{q}}(T)=r\leq d+1\). If \(\dim_{\mathbb{F}_{q^{n}}}(\langle T\rangle_{\mathbb{F}_{q^{n}}})=r\), we will say that \(L_{T}\cong\mathrm{PG}(T,\mathbb{F}_{q})=\mathrm{PG}(r-1,q)\) is an \(\mathbb{F}_{q}\)-_subgeometry_ of \(\mathrm{PG}(V,\mathbb{F}_{q^{n}})\) and \(r\) is the _rank_ of the subgeometry \(L_{T}\). When \(r=d+1\), we say that \(L_{T}\) is a _canonical subgeometry_ of \(\mathrm{PG}(V,\mathbb{F}_{q^{n}})=\mathrm{PG}(d,q^{n})\). Note that each point of a subgeometry \(L_{T}\) of rank \(r\) has weight \(1\), and hence \(|L_{T}|=\frac{q^{r}-1}{q-1}\).

Regarding the linearity of a linear set, we recall the following definitions explored in [12].

**Definition 2.2** ([12, Definitions 1.1, 1.2]).: An \(\mathbb{F}_{q}\)-linear set \(L_{U}\) is an \(\mathbb{F}_{q^{s}}\)-_linear set_ if \(U\) is also an \(\mathbb{F}_{q^{s}}\)-vector space. We say that \(\mathbb{F}_{q^{s}}\) is the _maximum field of linearity_ of \(L_{U}\) if \(s\) is the largest exponent such that \(L_{U}\) is \(\mathbb{F}_{q^{s}}\)-linear.

**Definition 2.3** ([12, Definitions 1.3, 1.4]).: An \(\mathbb{F}_{q}\)-linear set \(L_{U}\) has _geometric field of linearity_ \(\mathbb{F}_{q^{s}}\) if there exists an \(\mathbb{F}_{q^{s}}\)-linear set \(L_{U^{\prime}}\) such that \(L_{U}=L_{U^{\prime}}\). An \(\mathbb{F}_{q}\)-linear set \(L_{U}\) has _maximum geometric field of linearity_ \(\mathbb{F}_{q^{s}}\) if \(s\) is the largest integer such that \(L_{U}\) has geometric field of linearity \(\mathbb{F}_{q^{s}}\).

The maximum field of linearity and the maximum geometric field of linearity do not always coincide. Clearly, if \(L_{U}\) is an \(\mathbb{F}_{q^{s}}\)-linear set, it has geometric field of linearity \(\mathbb{F}_{q^{s}}\), but the converse need not hold, see e.g. [12, Example 1.5].

**Remark 2.4**.: Note that if there exists a line \(\ell\) that is \((q+1)\)-secant to a linear set \(L_{U}\), then by (4) \(L_{U}\) has maximum geometric field of linearity \(\mathbb{F}_{q}\), see also [12].

We refer to [19] and [13] for comprehensive references on linear sets.

### Subspaces of complementary weights

Recently, there has been an interest in linear sets admitting subspaces of complementary weights (see below for the definition), due to their application in coding theory, see e.g. [18, 20, 28]. Linear sets on the projective line admitting two points of complementary weights have been studied in [17] (see also [12, 16]). The higher dimensional analogue has been studied in [28]. For the sake of completeness, we state the definition and prove the structural description of such linear sets here in full generality.
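To make the definitions above concrete, here is a small self-contained Python sketch (our own illustration; the field model \(\mathbb{F}_{27}=\mathbb{F}_{3}[x]/(x^{3}-x-1)\) and the chosen subspace are assumptions made for the example). It enumerates the \(\mathbb{F}_{3}\)-linear set of rank \(3\) on \(\mathrm{PG}(1,3^{3})\) defined by \(U=\langle 1,\lambda\rangle_{\mathbb{F}_{3}}\times\langle 1\rangle_{\mathbb{F}_{3}}\) with \(\lambda=x\), computes its weight distribution, and verifies \(|L_{U}|=q^{k-1}+1=10\) (equality in Result 1.2) together with identity (2).

```python
import itertools
from collections import Counter

p, n = 3, 3                                     # F_27 = F_3[x]/(x^3 - x - 1)

def mul(u, v):
    # schoolbook polynomial product, then reduce using x^3 = x + 1
    raw = [0] * (2 * n - 1)
    for i, a in enumerate(u):
        for j, b in enumerate(v):
            raw[i + j] = (raw[i + j] + a * b) % p
    for k in range(2 * n - 2, n - 1, -1):
        c, raw[k] = raw[k], 0
        raw[k - n] = (raw[k - n] + c) % p       # x^k -> x^{k-n} * (1 + x)
        raw[k - n + 1] = (raw[k - n + 1] + c) % p
    return tuple(raw[:n])

ZERO, ONE = (0,) * n, (1,) + (0,) * (n - 1)
FIELD = list(itertools.product(range(p), repeat=n))

def inv(u):                                     # brute-force inverse (u != 0)
    return next(v for v in FIELD if mul(u, v) == ONE)

# U = <1, lambda>_{F_3} x <1>_{F_3}: vectors ((a, b, 0), (c, 0, 0))
U = [((a, b, 0), (c, 0, 0))
     for a in range(p) for b in range(p) for c in range(p)]

def point(vec):                                 # canonical projective representative
    u0, u1 = vec
    return (ONE, mul(inv(u0), u1)) if u0 != ZERO else (ZERO, ONE)

per_point = Counter(point(v) for v in U if v != (ZERO, ZERO))
print(len(per_point))                           # 10 = 3^2 + 1, equality in Result 1.2

# a weight-w point carries exactly p^w - 1 non-zero vectors of U
weights = Counter()
for count in per_point.values():
    w = next(w for w in range(1, n + 1) if p ** w - 1 == count)
    weights[w] += 1
print(dict(weights))                            # {1: 9, 2: 1}
assert sum(c * (p ** w - 1) // (p - 1) for w, c in weights.items()) \
       == (p ** 3 - 1) // (p - 1)               # identity (2) with k = 3
```

The brute-force approach scales poorly, of course, but for small \(q\) and \(n\) it is a convenient way to double-check weight distributions against identities (1)-(4).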
Call subspaces \(W_{1},\ldots,W_{m}\leq_{q^{n}}\mathbb{F}_{q^{n}}^{d+1}\) _independent_ if each subspace \(W_{i}\) intersects \(\langle W_{j}:j\neq i\rangle_{\mathbb{F}_{q^{n}}}\) trivially, or equivalently if \(\dim_{\mathbb{F}_{q^{n}}}\langle W_{i}:i=1,\ldots,m\rangle=\dim_{\mathbb{F}_{q^{n}}}W_{1}+\ldots+\dim_{\mathbb{F}_{q^{n}}}W_{m}\).

**Lemma 2.5**.: _Let \(W_{1},\ldots,W_{m}\) be independent subspaces in \(\mathbb{F}_{q^{n}}^{d+1}\), and let \(L_{U}\) be an \(\mathbb{F}_{q}\)-linear set in \(\mathrm{PG}(d,q^{n})\) of rank \(k\), that spans the entire space. Then_
\[\mathrm{w}_{L_{U}}(\mathrm{PG}(W_{1},\mathbb{F}_{q^{n}}))+\ldots+\mathrm{w}_{L_{U}}(\mathrm{PG}(W_{m},\mathbb{F}_{q^{n}}))\leq k.\]
_If equality holds, then \(\mathbb{F}_{q^{n}}^{d+1}=W_{1}\oplus\ldots\oplus W_{m}\)._

Proof.: Since no \(W_{i}\) intersects the span of the others, we may consider the direct sum \(W_{1}\oplus\ldots\oplus W_{m}\). Then
\[k=\dim_{\mathbb{F}_{q}}U\geq\dim_{\mathbb{F}_{q}}(U\cap(W_{1}\oplus\ldots\oplus W_{m}))\geq\sum_{i=1}^{m}\dim_{\mathbb{F}_{q}}(U\cap W_{i})=\sum_{i=1}^{m}\mathrm{w}_{L_{U}}(\mathrm{PG}(W_{i},\mathbb{F}_{q^{n}})).\]
If equality holds, then
\[U\cap(W_{1}\oplus\ldots\oplus W_{m})=U.\]
Since \(W_{1}\oplus\ldots\oplus W_{m}\) is an \(\mathbb{F}_{q^{n}}\)-subspace, and \(\langle U\rangle_{\mathbb{F}_{q^{n}}}=\mathbb{F}_{q^{n}}^{d+1}\), we get that
\[W_{1}\oplus\ldots\oplus W_{m}=\mathbb{F}_{q^{n}}^{d+1}.\qed\]

**Definition 2.6**.: If the subspaces \(W_{1},\ldots,W_{m}\) attain equality in Lemma 2.5, we say that \(\operatorname{PG}(W_{1},\mathbb{F}_{q^{n}}),\ldots,\operatorname{PG}(W_{m},\mathbb{F}_{q^{n}})\) are _subspaces of complementary weight_ (w.r.t. \(L_{U}\)).

**Lemma 2.7**.: _Let \(L_{U}\) be an \(\mathbb{F}_{q}\)-linear set spanning \(\operatorname{PG}(d,q^{n})\). Then there exist subspaces \(\Omega_{1},\ldots,\Omega_{m}\) in \(\operatorname{PG}(d,q^{n})\) of complementary weight, with \(\dim\Omega_{i}=d_{i}\) and \(\operatorname{w}_{L_{U}}(\Omega_{i})=k_{i}\), if and only if \(U\) is \(\operatorname{GL}(d+1,q^{n})\)-equivalent to an \(\mathbb{F}_{q}\)-subspace \(U_{1}\times\ldots\times U_{m}\), with each \(U_{i}\) a \(k_{i}\)-dimensional \(\mathbb{F}_{q}\)-subspace of \(\mathbb{F}_{q^{n}}^{d_{i}+1}\) satisfying \(\langle U_{i}\rangle_{\mathbb{F}_{q^{n}}}=\mathbb{F}_{q^{n}}^{d_{i}+1}\)._

Proof.: First suppose that such subspaces \(\Omega_{i}=\operatorname{PG}(W_{i},\mathbb{F}_{q^{n}})\) exist. Then there exists a map \(\varphi\in\operatorname{GL}(d+1,q^{n})\) such that \(\varphi(W_{1})=\langle\mathbf{e}_{0},\ldots,\mathbf{e}_{d_{1}}\rangle_{\mathbb{F}_{q^{n}}}\), \(\varphi(W_{2})=\langle\mathbf{e}_{d_{1}+1},\ldots,\mathbf{e}_{d_{1}+d_{2}+1}\rangle_{\mathbb{F}_{q^{n}}}\), and so on. As can be seen in the proof of the previous lemma,
\[\varphi(U)=\varphi(U\cap W_{1})\oplus\ldots\oplus\varphi(U\cap W_{m}),\]
which equals \(U_{1}\times\ldots\times U_{m}\), with
\[U_{i}=\{u\in\mathbb{F}_{q^{n}}^{d_{i}+1}:(0,\ldots,0,u,0,\ldots,0)\in\varphi(U)\cap\varphi(W_{i})\}.\]
Clearly,
\[\dim_{\mathbb{F}_{q}}U_{i}=\dim_{\mathbb{F}_{q}}\varphi(U\cap W_{i})=\dim_{\mathbb{F}_{q}}(U\cap W_{i})=\operatorname{w}_{L_{U}}(\Omega_{i})=k_{i}.\]
Vice versa, suppose that \(\varphi(U)=U_{1}\times\ldots\times U_{m}\), with each \(U_{i}\) a \(k_{i}\)-dimensional \(\mathbb{F}_{q}\)-subspace of \(\mathbb{F}_{q^{n}}^{d_{i}+1}\), for some \(\varphi\in\operatorname{GL}(d+1,q^{n})\).
Then define \(W_{1}=\langle\mathbf{e}_{0},\ldots,\mathbf{e}_{d_{1}}\rangle_{\mathbb{F}_{q^{n}}}\), \(W_{2}=\langle\mathbf{e}_{d_{1}+1},\ldots,\mathbf{e}_{d_{1}+d_{2}+1}\rangle_{\mathbb{F}_{q^{n}}}\), and so on. Then clearly \(\operatorname{PG}(W_{1},\mathbb{F}_{q^{n}}),\ldots,\operatorname{PG}(W_{m},\mathbb{F}_{q^{n}})\) are subspaces of complementary weights w.r.t. \(L_{U}\). Having subspaces of complementary weights is \(\operatorname{GL}(d+1,q^{n})\)-invariant, which finishes the proof.

## 3 General bounds

This section is devoted to proving Theorem 1.4. Afterwards, we provide some sufficient conditions on linear sets for the hypothesis of Result 1.3 to hold. Lastly, from Theorem 1.4 we derive that if a linear set \(L_{U}\) contains a point of weight \(1\), its rank equals \(\lceil\log_{q}(|L_{U}|)\rceil\), and \(\langle L_{U}\rangle\) is spanned by the points of \(L_{U}\) of weight \(1\).

### Proof of Theorems 1.4 and 1.5

De Beule and Van de Voorde proved the following bound.

**Result 3.1** ([8, Theorem 4.4]).: _Let \(L_{U}\) be an \(\mathbb{F}_{q}\)-linear set spanning \(\operatorname{PG}(d,q^{n})\) of rank \(k\). Suppose that \(L_{U}\) meets some hyperplane \(\Omega\) in exactly \(\frac{q^{d}-1}{q-1}\) points, spanning \(\Omega\). Then_
\[|L_{U}|\geq q^{k-1}+q^{k-2}+\ldots+q^{k-d}+1.\]

Note that if \(d=1\), this result is exactly Result 1.2. We now prove that this result is equivalent to Result 1.3. This follows directly from the following lemma.

**Lemma 3.2**.: _Let \(L_{U}\) be an \(\mathbb{F}_{q}\)-linear set in \(\operatorname{PG}(d-1,q^{n})\), with \(d\geq 2\). Then \(L_{U}\) spans \(\operatorname{PG}(d-1,q^{n})\) and satisfies \(|L_{U}|=\frac{q^{d}-1}{q-1}\) if and only if \(L_{U}\) is a canonical \(\mathbb{F}_{q}\)-subgeometry._

Proof.: If \(L_{U}\) is a canonical subgeometry, then it immediately follows that \(L_{U}\) spans the entire space, and \(|L_{U}|=\frac{q^{d}-1}{q-1}\). So suppose that \(L_{U}\) spans the space, and that \(|L_{U}|=\frac{q^{d}-1}{q-1}\). We need to prove that all points of \(L_{U}\) have weight \(1\). Indeed, in that case, by equations (1) and (2), \(L_{U}\) must then have rank \(d\), which proves that \(L_{U}\) is a canonical subgeometry.

So suppose by way of contradiction that \(L_{U}\) has points of weight greater than \(1\). Note that by Equations (1) and (2), the rank of \(L_{U}\) is some number \(k>d\). Let
\[\sigma=\langle P\in L_{U}\colon\operatorname{w}_{L_{U}}(P)>1\rangle\]
denote the subspace of \(\operatorname{PG}(d-1,q^{n})\) spanned by points of weight greater than \(1\). Suppose that \(\sigma\) is not \(\operatorname{PG}(d-1,q^{n})\). Then \(L_{U}\not\subseteq\sigma\), and every point in \(L_{U}\setminus\sigma\) is a point of weight \(1\). Hence, there are at least \(q^{k-1}\) points in \(L_{U}\setminus\sigma\) corresponding to (necessarily distinct) points of weight \(1\) of \(L_{U}\). Thus, \(|L_{U}|>q^{k-1}>\frac{q^{d}-1}{q-1}\) since \(k>d\), a contradiction. Hence \(\sigma\) equals \(\operatorname{PG}(d-1,q^{n})\).

Let
\[m=\max_{P\in L_{U}}\operatorname{w}_{L_{U}}(P)\]
denote the maximum weight of the points of \(L_{U}\). Then we can choose \(d\) independent points \(P_{1},\ldots,P_{d}\) in \(L_{U}\) such that \(\operatorname{w}_{L_{U}}(P_{1})=m\), and \(\operatorname{w}_{L_{U}}(P_{i})\geq 2\) for each \(i\). By Lemma 2.5,
\[k\geq\sum_{i=1}^{d}\operatorname{w}_{L_{U}}(P_{i})\geq m+2(d-1).\]
Let \(N_{1},\ldots,N_{m}\) denote the weight distribution of \(L_{U}\).
Then by Equations (1) and (2),
\[\frac{q^{m}-1}{q-1}|L_{U}|=\sum_{i=1}^{m}N_{i}\frac{q^{m}-1}{q-1}\geq\sum_{i=1}^{m}N_{i}\frac{q^{i}-1}{q-1}=\frac{q^{k}-1}{q-1}\geq\frac{q^{m+2(d-1)}-1}{q-1}.\]
This implies that
\[\frac{q^{d}-1}{q-1}=|L_{U}|\geq\frac{q^{m+2(d-1)}-1}{q^{m}-1},\]
which yields a contradiction if \(d\geq 2\).

We will now prove Theorem 1.4.

Proof of Theorem 1.4.: Consider the \(r\)-spaces \(\Pi_{1},\Pi_{2},\ldots\) of \(\operatorname{PG}(d,q^{n})\) through \(\Omega=\operatorname{PG}(W,\mathbb{F}_{q^{n}})\), with \(\Pi_{i}=\operatorname{PG}(W_{i},\mathbb{F}_{q^{n}})\), for each \(i\). We can order the \(r\)-spaces in such a way that \(\Pi_{i}\) contains a point of \(L_{U}\setminus\Omega\) if and only if \(i\leq I_{\Omega}\). Let
\[k_{i}=\dim_{\mathbb{F}_{q}}(U\cap W_{i})\]
denote the rank of the \(\mathbb{F}_{q}\)-linear set \(L_{U\cap W_{i}}\). Then the sets \((W_{i}\cap U)\setminus W\) partition the vectors in \(U\setminus W\). Since \(L_{U}\) intersects \(\Omega\) in a canonical subgeometry, \(\dim_{\mathbb{F}_{q}}(U\cap W)=r\). This yields
\[q^{k}-q^{r}=\sum_{i=1}^{I_{\Omega}}(q^{k_{i}}-q^{r})\qquad\qquad\implies\qquad\qquad q^{k-r}=1+\sum_{i=1}^{I_{\Omega}}(q^{k_{i}-r}-1). \tag{5}\]
Analogously, the points of \(\Pi_{i}\setminus\Omega\) partition the points of \(L_{U}\setminus\Omega\). Note that for \(i\leq I_{\Omega}\), we have that \(L_{U}\cap\Pi_{i}=L_{U\cap W_{i}}\) is an \(\mathbb{F}_{q}\)-linear set in \(\Pi_{i}\) of rank \(k_{i}\), satisfying the hypothesis of Result 1.3. Hence,
\[|L_{U}|=|L_{U}\cap\Omega|+\sum_{i=1}^{I_{\Omega}}\left(|L_{U}\cap\Pi_{i}|-|L_{U}\cap\Omega|\right)\geq\frac{q^{r}-1}{q-1}+\sum_{i=1}^{I_{\Omega}}\left((q^{k_{i}-1}+\ldots+q^{k_{i}-r}+1)-\frac{q^{r}-1}{q-1}\right)=\frac{q^{r}-1}{q-1}+\sum_{i=1}^{I_{\Omega}}\left(q^{k_{i}-r}\frac{q^{r}-1}{q-1}-\frac{q^{r}-1}{q-1}+1\right).\]
Using Equation (5), this implies that
\[|L_{U}|\geq\frac{q^{r}-1}{q-1}q^{k-r}+I_{\Omega}=q^{k-1}+q^{k-2}+\ldots+q^{k-r}+I_{\Omega}.\qed\]

**Remark 3.3**.: If one wants to apply Theorem 1.4 to a particular linear set \(L_{U}\), different choices of the \((r-1)\)-space \(\Omega\) can yield different bounds. In other words, \(I_{\Omega}\) need not be the same for all \((r-1)\)-spaces meeting \(L_{U}\) in a canonical \(\mathbb{F}_{q}\)-subgeometry. This is illustrated in the example below.

**Example 3.4**.: Consider the \((n+1)\)-dimensional \(\mathbb{F}_{q}\)-subspace
\[U=\{(x,x^{q}):x\in\mathbb{F}_{q^{n}}\}\times\mathbb{F}_{q}\]
of \(\mathbb{F}_{q^{n}}^{3}\). Consider the corresponding \(\mathbb{F}_{q}\)-linear set \(L_{U}\) of rank \(n+1\) in \(\mathrm{PG}(2,q^{n})\). Every point of \(L_{U}\) has weight \(1\), so we can apply Theorem 1.4 with \(\Omega\) any point of \(L_{U}\). However, for a point \(P\in L_{U}\), \(I_{P}=q^{n-1}+1\) if \(P\) lies on the line \(X_{2}=0\), and \(I_{P}=\frac{q^{n}-1}{q-1}\) if \(P\) does not lie on \(X_{2}=0\). These numbers are distinct if \(n>2\).

We also remark that in Theorem 1.4 the number \(I_{\Omega}\) of \(r\)-spaces through \(\Omega\) containing a point of \(L_{U}\setminus\Omega\) equals the size of a certain linear set.

**Definition 3.5**.: Consider an \(\mathbb{F}_{q}\)-linear set \(L_{U}\) in \(\mathrm{PG}(V,\mathbb{F}_{q^{n}})\) and a subspace \(\Omega=\mathrm{PG}(W,\mathbb{F}_{q^{n}})\). Let \(\overline{U}\) denote the subspace \((U+W)/W\) of the quotient space \(V/W\). Then the _projection_ of \(L_{U}\) from \(\Omega\) is the \(\mathbb{F}_{q}\)-linear set \(L_{\overline{U}}\) of \(\mathrm{PG}(V/W,\mathbb{F}_{q^{n}})\).
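We record a hedged aside (our own observation, a direct consequence of identity (5) rather than a statement from the paper): in the extreme case where every \(r\)-space \(\Pi_{i}\) through \(\Omega\) meeting \(L_{U}\setminus\Omega\) does so with \(k_{i}=r+1\), identity (5) reads \(q^{k-r}=1+I_{\Omega}(q-1)\), so \(I_{\Omega}=\frac{q^{k-r}-1}{q-1}\), and the bound of Theorem 1.4 becomes
\[|L_{U}|\geq q^{k-1}+\ldots+q^{k-r}+\frac{q^{k-r}-1}{q-1}=\frac{q^{k}-1}{q-1}.\]
Combined with the upper bound (3), this forces \(|L_{U}|=\frac{q^{k}-1}{q-1}\), i.e. every point of \(L_{U}\) then has weight \(1\).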
**Lemma 3.6**.: _Suppose that \(L_{U}\) is an \(\mathbb{F}_{q}\)-linear set of rank \(k\) in \(\mathrm{PG}(V,\mathbb{F}_{q^{n}})\) and let \(L_{\overline{U}}\) be the projection of \(L_{U}\) from an \((r-1)\)-space \(\Omega=\mathrm{PG}(W,\mathbb{F}_{q^{n}})\). Then for each \(\mathbb{F}_{q^{n}}\)-subspace \(W^{\prime}\leq V\) through \(W\),_
\[\mathrm{w}_{L_{\overline{U}}}(\mathrm{PG}((W^{\prime}+W)/W,\mathbb{F}_{q^{n}}))=\mathrm{w}_{L_{U}}(\mathrm{PG}(W^{\prime},\mathbb{F}_{q^{n}}))-\mathrm{w}_{L_{U}}(\Omega).\]
_In particular, \(L_{\overline{U}}\) has rank \(k-\mathrm{w}_{L_{U}}(\Omega)\), and \(|L_{\overline{U}}|\) equals the number of \(r\)-spaces in \(\mathrm{PG}(V,\mathbb{F}_{q^{n}})\) through \(\Omega\) that contain a point of \(L_{U}\setminus\Omega\). Furthermore, if \(L_{U}\) spans \(\mathrm{PG}(V,\mathbb{F}_{q^{n}})\), then \(L_{\overline{U}}\) spans \(\mathrm{PG}(V/W,\mathbb{F}_{q^{n}})\)._

Proof.: We can find \(\mathbb{F}_{q}\)-subspaces \(U_{1},U_{2},U_{3}\) of \(U\) such that

* \(U_{1}=W\cap U\),
* \(U_{1}\oplus U_{2}=W^{\prime}\cap U\),
* \(U_{1}\oplus U_{2}\oplus U_{3}=U\).

Then
\[\mathrm{w}_{L_{\overline{U}}}(\mathrm{PG}((W^{\prime}+W)/W,\mathbb{F}_{q^{n}}))=\dim_{\mathbb{F}_{q}}(\overline{U}\cap((W^{\prime}+W)/W))=\dim_{\mathbb{F}_{q}}(\langle U,W\rangle_{\mathbb{F}_{q}}\cap W^{\prime})-\dim_{\mathbb{F}_{q}}W=\dim_{\mathbb{F}_{q}}(W\oplus U_{2})-\dim_{\mathbb{F}_{q}}(W)=\dim_{\mathbb{F}_{q}}(U_{2})=\dim_{\mathbb{F}_{q}}(U_{1}\oplus U_{2})-\dim_{\mathbb{F}_{q}}(U_{1})=\mathrm{w}_{L_{U}}(\mathrm{PG}(W^{\prime},\mathbb{F}_{q^{n}}))-\mathrm{w}_{L_{U}}(\Omega).\]
If we put \(W^{\prime}=V\), we see that \(L_{\overline{U}}\) has rank \(k-\mathrm{w}_{L_{U}}(\Omega)\). It also follows that the points of \(L_{\overline{U}}\) are in 1-1 correspondence with the \((r+1)\)-dimensional \(\mathbb{F}_{q^{n}}\)-subspaces \(W^{\prime}\) of \(V\) through \(W\) with \(\mathrm{w}_{L_{U}}(\mathrm{PG}(W^{\prime},\mathbb{F}_{q^{n}}))>\mathrm{w}_{L_{U}}(\Omega)\), which are exactly the \(r\)-spaces through \(\Omega\) in \(\mathrm{PG}(V,\mathbb{F}_{q^{n}})\) containing a point of \(L_{U}\setminus\Omega\).

This tells us the following about the quantity \(I_{\Omega}\) in Theorem 1.4.

**Proposition 3.7**.: _In the hypothesis of Theorem 1.4, let \(L_{\overline{U}}\) be the projection of \(L_{U}\) from \(\Omega\). Then_
\[|L_{U}|\geq q^{k-1}+q^{k-2}+\ldots+q^{k-r}+|L_{\overline{U}}|.\]
_Moreover, \(L_{\overline{U}}\) has rank \(k-r\)._

Together with the following lemma, we can now prove Theorem 1.5.

**Lemma 3.8**.: _Suppose that \(L_{U}\) is an \(\mathbb{F}_{q}\)-linear set of rank \(k\) in \(\mathrm{PG}(d,q^{n})\). Let \(m=\min\{\operatorname{w}_{L_{U}}(P)\,:\,P\in L_{U}\}\) denote the minimum weight of the points of \(L_{U}\). Then there exists an \(\mathbb{F}_{q}\)-linear set \(L_{U^{\prime}}\) of rank \(k-m+1\) in \(\mathrm{PG}(d,q^{n})\) containing points of weight 1, such that \(L_{U}\) and \(L_{U^{\prime}}\) coincide as point sets._

Proof.: Take a vector \(u\in U\) such that \(P=\langle u\rangle_{\mathbb{F}_{q^{n}}}\) has weight \(m\) in \(L_{U}\). Then there exists a \((k-m+1)\)-dimensional \(\mathbb{F}_{q}\)-subspace \(U^{\prime}\) of \(U\) that intersects \(\langle u\rangle_{\mathbb{F}_{q^{n}}}\) in a 1-dimensional subspace. Then \(\operatorname{w}_{L_{U^{\prime}}}(P)=1\). It remains to show that \(L_{U}\) and \(L_{U^{\prime}}\) coincide as point sets. The inclusion \(L_{U^{\prime}}\subseteq L_{U}\) is evident. On the other hand, take a non-zero \(v\in U\).
Then, by Grassmann's identity,
\[\operatorname{w}_{L_{U^{\prime}}}(\langle v\rangle_{\mathbb{F}_{q^{n}}})=\dim_{\mathbb{F}_{q}}(\langle v\rangle_{\mathbb{F}_{q^{n}}}\cap U^{\prime})=\dim_{\mathbb{F}_{q}}((\langle v\rangle_{\mathbb{F}_{q^{n}}}\cap U)\cap U^{\prime})=\dim_{\mathbb{F}_{q}}(\langle v\rangle_{\mathbb{F}_{q^{n}}}\cap U)+\dim_{\mathbb{F}_{q}}(U^{\prime})-\dim_{\mathbb{F}_{q}}(\langle\langle v\rangle_{\mathbb{F}_{q^{n}}}\cap U,U^{\prime}\rangle_{\mathbb{F}_{q}})\geq m+(k-m+1)-\dim_{\mathbb{F}_{q}}(U)=1.\]
This shows that \(L_{U}\subseteq L_{U^{\prime}}\). Thus, \(L_{U}\) and \(L_{U^{\prime}}\) coincide as point sets.

Proof of Theorem 1.5.: Let \(L_{U}\) be an \(\mathbb{F}_{q}\)-linear set spanning \(\operatorname{PG}(d,q^{n})\) and intersecting an \((r-1)\)-space \(\operatorname{PG}(W,\mathbb{F}_{q^{n}})\) in a canonical \(\mathbb{F}_{q}\)-subgeometry. If \(r>0\), then \(L_{U}\) contains a point \(P=\langle u\rangle_{\mathbb{F}_{q^{n}}}\in\operatorname{PG}(W,\mathbb{F}_{q^{n}})\) of weight 1. By Proposition 3.7,
\[|L_{U}|\geq q^{k-1}+|L_{\overline{U}}|,\]
where \(L_{\overline{U}}\) is the projection of \(L_{U}\) from \(P\). Then \(L_{\overline{U}}\) is an \(\mathbb{F}_{q}\)-linear set of rank \(k-1\) spanning \(\operatorname{PG}(d-1,q^{n})\). Furthermore, \(W/\langle u\rangle_{\mathbb{F}_{q^{n}}}\) defines an \((r-2)\)-space in the quotient space, containing points of complementary weights, all weights equal to 1. Hence, it intersects \(L_{\overline{U}}\) in a canonical \(\mathbb{F}_{q}\)-subgeometry. This proves that \(B_{q,n}(r,k,d)\geq q^{k-1}+B_{q,n}(r-1,k-1,d-1)\) if \(r>0\).

Now suppose that \(r=0\). Then the minimum weight of the points of \(L_{U}\), which we will denote by \(m\), is at least 2. By Lemma 3.8, \(|L_{U}|\geq B_{q,n}(1,k-m+1,d)\). Since this lower bound on \(|L_{U}|\) is decreasing in \(m\), we need to find an upper bound on \(m\). There are \(d+1\) independent points in \(L_{U}\), since \(L_{U}\) spans \(\operatorname{PG}(d,q^{n})\). Lemma 2.5 then tells us that \(m(d+1)\leq k\), which implies
\[m\leq\left\lfloor\frac{k}{d+1}\right\rfloor.\]
Hence,
\[|L_{U}|\geq B_{q,n}\left(1,k-\left\lfloor\frac{k}{d+1}\right\rfloor+1,d\right)\geq q^{k-\left\lfloor\frac{k}{d+1}\right\rfloor}+B_{q,n}\left(0,k-\left\lfloor\frac{k}{d+1}\right\rfloor,d-1\right).\qed\]

In the next sections, we will investigate linear sets attaining equality in the bound of Theorem 1.4. To this end, we introduce some relevant terminology.

**Definition 3.9**.: Let \(L_{U}\) be an \(\mathbb{F}_{q}\)-linear set of rank \(k\) in \(\operatorname{PG}(d,q^{n})\). If
\[|L_{U}|=q^{k-1}+\ldots+q^{k-d}+1,\]
we say that \(L_{U}\) is of \(d\)_-minimum size_. If there is some \((r-1)\)-space \(\Omega\) such that \(L_{U}\) and \(\Omega\) satisfy the hypothesis of Theorem 1.4, and
\[|L_{U}|=q^{k-1}+\ldots+q^{k-r}+I_{\Omega}\leq q^{k-1}+\ldots+q^{k-d}+1,\]
then we say that \(L_{U}\) is of \((r,d,\Omega)\)_-minimum size_, or simply of \((r,d)\)_-minimum size_. A linear set of \((d,d)\)-minimum size will also be called of _proper \(d\)-minimum size_.

**Remark 3.10**.: By Remark 2.4, an \((r,d)\)-minimum size linear set has maximum field of linearity \(\mathbb{F}_{q}\) whenever \(r\geq 2\).

In the next proposition, we also prove that if a linear set is of \((r,d)\)-minimum size, it is of \((r^{\prime},d)\)-minimum size for every \(r^{\prime}\leq r\).

**Proposition 3.11**.: _Let \(L_{U}\) be an \((r,d)\)-minimum size \(\mathbb{F}_{q}\)-linear set of rank \(k\) in \(\mathrm{PG}(d,q^{n})\).
Then \(L_{U}\) is of \((r^{\prime},d)\)-minimum size as well, for every \(0<r^{\prime}\leq r\)._

Proof.: It is enough to prove the statement for \(r^{\prime}=r-1\). By hypothesis, we know that there is some \((r-1)\)-space \(\Omega=\mathrm{PG}(W,\mathbb{F}_{q^{n}})\) of \(\mathrm{PG}(d,q^{n})\) meeting \(L_{U}\) in a canonical subgeometry, such that
\[|L_{U}|=q^{k-1}+q^{k-2}+\ldots+q^{k-r}+|L_{\overline{U}}|, \tag{6}\]
where \(L_{\overline{U}}\) is the \(\mathbb{F}_{q}\)-linear set in \(\mathrm{PG}(V/W,\mathbb{F}_{q^{n}})=\mathrm{PG}(d-r,q^{n})\) defined by \(\overline{U}=U+W\subseteq V/W\). Let \(\Omega^{\prime}=\mathrm{PG}(W^{\prime},\mathbb{F}_{q^{n}})\) be an \((r-2)\)-space of \(\Omega\) that meets \(L_{U}\) in a canonical subgeometry. So, by Proposition 3.7, we have that
\[|L_{U}|\geq q^{k-1}+q^{k-2}+\ldots+q^{k-r+1}+|L_{\overline{U^{\prime}}}|, \tag{7}\]
where \(L_{\overline{U^{\prime}}}\) is the \(\mathbb{F}_{q}\)-linear set of rank \(k-r+1\) in \(\mathrm{PG}(V/W^{\prime},\mathbb{F}_{q^{n}})=\mathrm{PG}(d-r+1,q^{n})\) defined by \(\overline{U^{\prime}}=U+W^{\prime}\subseteq V/W^{\prime}\). Therefore, by (6) and (7), it follows that \(q^{k-r}+|L_{\overline{U}}|\geq|L_{\overline{U^{\prime}}}|\). On the other hand, since \(w_{L_{U}}(\Omega)=r\), we get \(w_{L_{\overline{U^{\prime}}}}(\mathrm{PG}(W/W^{\prime},\mathbb{F}_{q^{n}}))=1\), and so \(L_{\overline{U^{\prime}}}\) has a point of weight \(1\). Now, by Proposition 3.7, we get that
\[|L_{\overline{U^{\prime}}}|\geq q^{k-r}+\left|L_{\widetilde{U}}\right|,\]
with \(\widetilde{U}=\overline{U^{\prime}}+W/W^{\prime}\leq(V/W^{\prime})/(W/W^{\prime})\), which identifies with \(\overline{U}=U+W\leq V/W\). Hence,
\[|L_{\overline{U^{\prime}}}|\geq q^{k-r}+|L_{\overline{U}}|.\]
Then, by (6), equality holds in (7) and so \(L_{U}\) is of \((r-1,d,\Omega^{\prime})\)-minimum size.

### Sufficient conditions to apply Result 1.3

In this part of the section, we show some conditions on a linear set that ensure the existence of at least one hyperplane meeting the linear set in a canonical subgeometry.

**Theorem 3.12**.: _Let \(k\), \(d\) and \(r\) be non-negative integers with \(r<k\) and \(r<d\). Let \(L_{U}\) be an \(\mathbb{F}_{q}\)-linear set in \(\mathrm{PG}(d,q^{n})\) of rank \(k+d-r\) spanning \(\mathrm{PG}(d,q^{n})\). Suppose that there is an \(r\)-space \(\Omega\) of \(\mathrm{PG}(d,q^{n})\) such that \(\mathrm{w}_{L_{U}}(\Omega)=k\), and \(\Omega\) contains an \((r-1)\)-space \(\Omega^{\prime}\) that meets \(L_{U}\) in a canonical \(\mathbb{F}_{q}\)-subgeometry. Then some hyperplane \(\Pi\) of \(\mathrm{PG}(d,q^{n})\) meets \(L_{U}\) in a canonical \(\mathbb{F}_{q}\)-subgeometry, implying \(|L_{U}|\geq q^{k+d-r-1}+q^{k+d-r-2}+\ldots+q^{k-r}+1\)._

Proof.: Suppose that \(\mathrm{PG}(d,q^{n})=\mathrm{PG}(V,\mathbb{F}_{q^{n}})\), \(\Omega=\mathrm{PG}(W,\mathbb{F}_{q^{n}})\), and \(\Omega^{\prime}=\mathrm{PG}(W^{\prime},\mathbb{F}_{q^{n}})\). Consider the projection of \(L_{U}\) from \(\Omega^{\prime}\), which equals the linear set \(L_{\overline{U}}\), with \(\overline{U}=U+W^{\prime}\subseteq V/W^{\prime}\). Write \(P_{0}=W/W^{\prime}\), and choose any point \(P_{1}\in L_{\overline{U}}\setminus\{P_{0}\}\). Since \(L_{U}\) spans \(\mathrm{PG}(d,q^{n})\), we can extend \(P_{0},P_{1}\) to a subset \(P_{0},P_{1},\ldots,P_{d-r}\) of \(L_{\overline{U}}\) that spans \(\mathrm{PG}(V/W^{\prime},\mathbb{F}_{q^{n}})\). Also, \(\mathrm{w}_{L_{\overline{U}}}(P_{0})=k-r\), and the rank of \(L_{\overline{U}}\) equals \(k+d-2r=(k-r)+(d-r)\).
Hence, by Lemma 2.5,
\[(k-r)+(d-r)\geq\sum_{i=0}^{d-r}\mathrm{w}_{L_{\overline{U}}}(P_{i})=(k-r)+\sum_{i=1}^{d-r}\mathrm{w}_{L_{\overline{U}}}(P_{i}),\]
which implies that \(\operatorname{w}_{L_{\overline{U}}}(P_{i})=1\) for all \(i\geq 1\), and \(P_{0},\ldots,P_{d-r}\) are points of complementary weights. Therefore, \(L_{\overline{U}}\) meets \(\langle P_{1},\ldots,P_{d-r}\rangle\) in a canonical subgeometry. There is a unique \(\mathbb{F}_{q^{n}}\)-space \(W^{\prime\prime}\) through \(W^{\prime}\) such that \(\langle P_{1},\ldots,P_{d-r}\rangle=\operatorname{PG}(W^{\prime\prime}/W^{\prime},\mathbb{F}_{q^{n}})\). It follows that \(\operatorname{PG}(W^{\prime\prime},\mathbb{F}_{q^{n}})\) meets \(L_{U}\) in a canonical subgeometry.

For linear sets on a projective line \(\operatorname{PG}(1,q^{n})\) of rank \(n\), the notions of maximum field of linearity and maximum geometric field of linearity coincide.

**Result 3.13** ([7, Proposition 2.3][12, Proposition 1.10]).: _Let \(L_{U}\) be an \(\mathbb{F}_{q}\)-linear set on \(\operatorname{PG}(1,q^{n})\) of rank \(n\). Define \(s=\min_{P\in L_{U}}\operatorname{w}_{L_{U}}(P)\). Then the maximum field of linearity and the maximum geometric field of linearity are both \(\mathbb{F}_{q^{s}}\)._

In particular, when \(n\) is prime, the above result implies the existence of a point of weight 1.

**Corollary 3.14**.: _Assume that \(n\) is prime and \(d\geq 2\). Let \(L_{U}\) be an \(\mathbb{F}_{q}\)-linear set in \(\operatorname{PG}(d,q^{n})\) of rank \(n+d-1\) spanning \(\operatorname{PG}(d,q^{n})\). Suppose that there is a line \(\ell\) of \(\operatorname{PG}(d,q^{n})\) such that \(\operatorname{w}_{L_{U}}(\ell)=n\). Then \(|L_{U}|\geq q^{n+d-2}+q^{n+d-3}+\ldots+q^{n-1}+1\)._

Proof.: On the one hand, \(n\) is prime, so the maximum field of linearity of \(L_{U}\cap\ell\) is \(\mathbb{F}_{q}\). On the other hand, \(\operatorname{w}_{L_{U}}(\ell)=n\), and by Result 3.13, the maximum field of linearity of \(L_{U}\cap\ell\) is \(\mathbb{F}_{q^{s}}\) with \(s=\min_{P\in L_{U}\cap\ell}\operatorname{w}_{L_{U}}(P)\). Hence,
\[\min_{P\in L_{U}\cap\ell}\operatorname{w}_{L_{U}}(P)=1,\]
and \(\ell\) contains a point of weight 1. The statement now follows from Theorem 3.12.

Also when \(n\) is a prime and the rank of the linear set is \(n+d-1\), in order to obtain the bound from Result 1.3, it is enough to impose the existence of a \((d-2)\)-space meeting the linear set in a canonical subgeometry of this space.

**Corollary 3.15**.: _Assume that \(n\) is prime and \(d\geq 2\). Let \(L_{U}\) be an \(\mathbb{F}_{q}\)-linear set in \(\operatorname{PG}(d,q^{n})\) of rank \(k=n+d-1\) spanning \(\operatorname{PG}(d,q^{n})\). Suppose that some \((d-2)\)-space meets \(L_{U}\) in a canonical subgeometry. Then \(|L_{U}|\geq q^{k-1}+q^{k-2}+\ldots+q^{k-d}+1\)._

Proof.: Let \(\Omega=\operatorname{PG}(W,\mathbb{F}_{q^{n}})\) be the \((d-2)\)-space of \(\operatorname{PG}(d,q^{n})\) meeting \(L_{U}\) in a canonical subgeometry. By Proposition 3.7, we know that
\[|L_{U}|\geq q^{k-1}+q^{k-2}+\ldots+q^{k-d+1}+|L_{\overline{U}}|,\]
where \(\overline{U}=U+W\) is an \(\mathbb{F}_{q}\)-subspace of the quotient \(V/W\). Since \(L_{U}\) has rank \(n+d-1\) and \(\dim_{\mathbb{F}_{q}}(U\cap W)=d-1\), \(L_{\overline{U}}\) is an \(\mathbb{F}_{q}\)-linear set of rank \(n\) in \(\operatorname{PG}(1,q^{n})\) spanning the whole line.
Therefore, since \(n\) is a prime, by Result 3.13 it follows that \(L_{\overline{U}}\) has at least one point of weight 1, and so \(|L_{\overline{U}}|\geq q^{n-1}+1\), and the assertion follows.

### Consequences of Theorem 1.4

When \(r=1\), Theorem 1.4 looks as follows.

**Corollary 3.16**.: _Let \(L_{U}\) be an \(\mathbb{F}_{q}\)-linear set of rank \(k\geq 2\) in \(\operatorname{PG}(d,q^{n})\), admitting at least one point of weight 1. Let \(I\) be the number of secant lines through some point of weight 1. Then \(|L_{U}|\geq q^{k-1}+I\)._

In particular, this result implies that the rank of a linear set is determined by its size and the minimum weight of its points.

**Proposition 3.17**.: _Let \(L_{U}\) be an \(\mathbb{F}_{q}\)-linear set spanning \(\mathrm{PG}(d,q^{n})\), containing more than one point. Denote \(m=\min_{P\in L_{U}}\mathrm{w}_{L_{U}}(P)\). Then the rank of \(L_{U}\) is the unique integer \(k\) satisfying_
\[q^{k-m}+B_{q,n}(0,k-m,d-1)\leq|L_{U}|\leq\frac{q^{k}-1}{q^{m}-1},\]
_i.e. \(k=\lceil\log_{q}(|L_{U}|)\rceil+m-1=\lfloor\log_{q}(|L_{U}|)\rfloor+m\)._

Proof.: As in the proof of Theorem 1.5, we have that \(|L_{U}|\geq B_{q,n}(1,k-m+1,d)\geq q^{k-m}+B_{q,n}(0,k-m,d-1)\). The lower bound follows from Lemma 3.8 and Corollary 3.16. By Equation (2),
\[(q^{m}-1)|L_{U}|=(q^{m}-1)\sum_{i=m}^{n}N_{i}\leq\sum_{i=m}^{n}N_{i}(q^{i}-1)=q^{k}-1.\qed\]

Another consequence of Corollary 3.16 is that any \(\mathbb{F}_{q}\)-linear set is spanned by its points of minimum weight (cf. [6, Lemma 2.2] for linear sets on \(\mathrm{PG}(1,q^{n})\)).

**Proposition 3.18**.: _If an \(\mathbb{F}_{q}\)-linear set \(L_{U}\) spans \(\mathrm{PG}(d,q^{n})\), then its points of minimum weight also span \(\mathrm{PG}(d,q^{n})\)._

Proof.: Suppose that \(L_{U}\) is an \(\mathbb{F}_{q}\)-linear set of rank \(k\), spanning \(\mathrm{PG}(d,q^{n})\), and denote \(m=\min_{P\in L_{U}}\mathrm{w}_{L_{U}}(P)\). By Corollary 3.16, \(|L_{U}|>q^{k-m}\). Now assume that the points of weight \(m\) of \(L_{U}\) lie in a hyperplane \(\pi=\mathrm{PG}(W,\mathbb{F}_{q^{n}})\) of \(\mathrm{PG}(d,q^{n})\). Suppose that \(U_{1}=U\cap W\). Then there exists a subspace \(U_{2}\) of \(U\) such that \(U=U_{1}\oplus U_{2}\). Now let \(U^{\prime}_{1}\) be an \(\mathbb{F}_{q}\)-subspace of \(U_{1}\) of codimension \(m-1\), and let \(U^{\prime}_{2}\) be an \(\mathbb{F}_{q}\)-subspace of \(U_{2}\) of codimension \(1\). Let \(U^{\prime}=U^{\prime}_{1}\oplus U^{\prime}_{2}\); then \(L_{U^{\prime}}\) and \(L_{U}\) coincide as point sets. Indeed, if \(P\in L_{U}\cap\pi\), then \(\mathrm{w}_{L_{U}}(P)=\mathrm{w}_{L_{U_{1}}}(P)\geq m\). As in the proof of Lemma 3.8, this implies that \(\mathrm{w}_{L_{U^{\prime}}}(P)=\mathrm{w}_{L_{U^{\prime}_{1}}}(P)\geq 1\). If \(P\in L_{U}\setminus\pi\), then \(\mathrm{w}_{L_{U}}(P)\geq m+1\), and as in the proof of Lemma 3.8, \(\mathrm{w}_{L_{U^{\prime}}}(P)\geq 1\). But
\[\dim_{q}U^{\prime}=(\dim_{q}U_{1}-(m-1))+(\dim_{q}U_{2}-1)=k-m.\]
Hence, by Equation (3), \(|L_{U}|=|L_{U^{\prime}}|\leq\frac{q^{k-m}-1}{q-1}<q^{k-m}\), a contradiction.

## 4 Constructions of \(d\)-minimum size linear sets

### Exploring the Jena-Van de Voorde construction

Recently, Jena and Van de Voorde constructed \(d\)-minimum size linear sets admitting points of complementary weights, and they completely determined their weight spectrum and weight distribution.
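As a quick numeric orientation (our own arithmetic, a direct instance of Definition 3.9): in \(\mathrm{PG}(2,q^{6})\) a linear set of rank \(6\) is of \(2\)-minimum size precisely when
\[|L_{U}|=q^{5}+q^{4}+1,\]
which equals \(49\) for \(q=2\); the construction recalled below produces examples of exactly this size (see Example 4.17).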
Recall that if \(\lambda\in\mathbb{F}_{q^{n}}\), then the _degree_ of \(\lambda\) over \(\mathbb{F}_{q}\) equals the degree of the minimal polynomial of \(\lambda\) over \(\mathbb{F}_{q}\), or equivalently the smallest integer \(t\) such that \(\lambda\in\mathbb{F}_{q^{t}}\).

**Construction 4.1** ([11, Theorem 2.17]).: _Suppose that \(\lambda\in\mathbb{F}_{q^{n}}\) has degree \(t>1\) over \(\mathbb{F}_{q}\). Choose positive integers \(k_{0}\geq\ldots\geq k_{d}\) such that \(k_{0}+k_{1}\leq t+1\). Define_
\[JV_{q,n}(\lambda,t;k_{0},\ldots,k_{d})=\langle 1,\lambda,\ldots,\lambda^{k_{0}-1}\rangle_{\mathbb{F}_{q}}\times\ldots\times\langle 1,\lambda,\ldots,\lambda^{k_{d}-1}\rangle_{\mathbb{F}_{q}}=\{(f_{0}(\lambda),\ldots,f_{d}(\lambda))\colon f_{i}\in\mathbb{F}_{q}[X],\,\deg(f_{i})<k_{i}\}.\]
_Then \(L_{JV_{q,n}(\lambda,t;k_{0},\ldots,k_{d})}\) is a \(d\)-minimum size \(\mathbb{F}_{q}\)-linear set in \(\mathrm{PG}(d,q^{n})\) of rank \(k_{0}+\ldots+k_{d}\)._

Note that since \(JV_{q,n}(\lambda,t;k_{0},\ldots,k_{d})\) is a Cartesian product of \(\mathbb{F}_{q}\)-subspaces of \(\mathbb{F}_{q^{n}}\), it indeed admits points of complementary weights. Before proceeding, we make some conventions regarding polynomials.

**Definition 4.2**.: Given two polynomials \(f,g\in\mathbb{F}_{q}[X]\), let \(\gcd(f,g)\) denote the unique monic polynomial of maximal degree that divides \(f\) and \(g\). We call \(f\) and \(g\) _coprime_ if \(\gcd(f,g)=1\). Furthermore, we will use the convention that the degree of the zero polynomial is \(-\infty\), so that the equality \(\deg(f\cdot g)=\deg f+\deg g\) still holds if \(f\) or \(g\) is the zero polynomial.

**Remark 4.3** ([11, Remark 2.19]).: Jena and Van de Voorde also determined the weight spectrum of the above linear set. It is \((1,\ldots,k_{0})\) if \(k_{1}=k_{0}\), and \((1,\ldots,k_{1},k_{0})\) if \(k_{1}<k_{0}\), in which case \(E_{0}\) is the unique point of weight \(k_{0}\). They also described the weight distribution, but since it is rather involved, we omit it here. It follows from their arguments that if \(\gcd(f_{0},\ldots,f_{d})=1\), then
\[\operatorname{w}_{L_{U}}\left(\langle f_{0}(\lambda),\ldots,f_{d}(\lambda)\rangle_{\mathbb{F}_{q^{n}}}\right)=\min_{0\leq i\leq d}\{k_{i}-\deg(f_{i})\}. \tag{8}\]
This makes it relatively easy to determine \(N_{i}\) for some large values of \(i\). For instance, let
\[U=JV_{q,n}(\lambda,t;k_{0},\ldots,k_{d})\subseteq\mathbb{F}_{q^{n}}^{d+1},\]
and assume that \(k_{1}<k_{0}\). As stated above, \(E_{0}\) is the unique point of weight \(k_{0}\), and the second largest weight of \(L_{U}\) is \(k_{1}\). We can determine \(N_{k_{1}}(L_{U})\). Let \(m\) denote the number of indices \(j\geq 1\) with \(k_{j}=k_{1}\), i.e. \(k_{1}=\ldots=k_{m}>k_{m+1}\). Let \(P=\langle f_{0}(\lambda),\ldots,f_{d}(\lambda)\rangle_{\mathbb{F}_{q^{n}}}\in L_{U}\), with \(\gcd(f_{0},\ldots,f_{d})=1\). Then, by (8), \(P\) has weight \(k_{1}\) if and only if \(\deg(f_{0})\leq k_{0}-k_{1}\), \(\deg(f_{i})\leq 0\) for \(i=1,\ldots,m\), \(f_{i}=0\) for \(i>m\), and there exists some \(j\in\{1,\ldots,m\}\) such that \(f_{j}\neq 0\). Then,
\[N_{k_{1}}(L_{U})=q^{k_{0}-k_{1}+1}\frac{q^{m}-1}{q-1}.\]

The above construction has the following consequence on the existence of \(d\)-minimum size linear sets in \(\operatorname{PG}(d,q^{n})\).
**Corollary 4.4** ([11, Corollary 2.18]).: _There exists a \(d\)-minimum size \(\mathbb{F}_{q}\)-linear set of rank \(k\) in \(\operatorname{PG}(d,q^{n})\) whenever_ \[d<k\leq\begin{cases}(d+1)\frac{n+1}{2}&\text{if $n$ is odd,}\\ (d+1)\frac{n}{2}+1&\text{if $n$ is even.}\end{cases}\] We now present a sufficient condition for the linear set of Construction 4.1 to be of proper \(d\)-minimum size. **Theorem 4.5**.: _Consider \(U=JV_{q,n}(\lambda,t;k_{0},\ldots,k_{d})\) as in Construction 4.1. Suppose that there exist pairwise coprime polynomials \(g_{0},\ldots,g_{d}\in\mathbb{F}_{q}[X]\) such that for each \(i\), \(\deg g_{i}=k_{i}-1\). If \(k_{0}+\ldots+k_{d}\leq t+d\), then \(L_{U}\) is of proper \(d\)-minimum size._ Proof.: By Construction 4.1, we know that \(L_{U}\) is an \(\mathbb{F}_{q}\)-linear set in \(\operatorname{PG}(d,q^{n})\) of rank \(k=k_{0}+\ldots+k_{d}\) of \(d\)-minimum size. So it remains to prove that there exists a hyperplane of \(\operatorname{PG}(d,q^{n})\) meeting \(L_{U}\) in a canonical subgeometry. Consider the points \(P_{i}=\langle\mathbf{e}_{0}+g_{i}(\lambda)\mathbf{e}_{i}\rangle_{\mathbb{F}_ {q^{n}}}\) for \(i=1,\ldots,d\). Clearly, \(P_{1},\ldots,P_{d}\) are independent, hence they span a hyperplane. Define the polynomial \[G(X)=\prod_{i=1}^{d}g_{i}(X).\] Note that for each \(i\geq 1\), the polynomial \[\left(\frac{G}{g_{i}}\right)(X)=\prod_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{d}g_{j}(X)\] is well-defined. Then the equation of the hyperplane \(\Pi=\langle P_{1},\ldots,P_{d}\rangle_{\mathbb{F}_{q^{n}}}\) of \(\mathrm{PG}(d,q^{n})\) is \[G(\lambda)X_{0}=\sum_{i=1}^{d}\left(\frac{G}{g_{i}}\right)(\lambda)X_{i}. \tag{9}\] Let \(k=k_{0}+\ldots+k_{d}\) denote the rank of \(L_{U}\). Then \[\deg G=\sum_{i=1}^{d}(k_{i}-1)=k-k_{0}-d<t,\] hence \(G(\lambda)\neq 0\), and Equation (9) does indeed define a hyperplane. Now take a non-zero vector \(v=(f_{0}(\lambda),\ldots,f_{d}(\lambda))\in U\), and suppose that \(\langle v\rangle_{\mathbb{F}_{q^{n}}}\in\Pi\). Then \[G(\lambda)f_{0}(\lambda)=\sum_{i=1}^{d}\left(\frac{G}{g_{i}}\right)(\lambda) f_{i}(\lambda). \tag{10}\] Every term in Equation (10) is a polynomial in \(\lambda\), and \[\deg(Gf_{0}) =\deg G+\deg f_{0}=(k-k_{0}-d)+\deg f_{0}<t,\] \[\deg((G/g_{i})f_{i}) =\deg G+\deg f_{i}-\deg(g_{i})\leq\deg(G)<t.\] Since \(1,\lambda,\ldots,\lambda^{t-1}\) are \(\mathbb{F}_{q}\)-linearly independent, Equation (10) implies that \[G(X)f_{0}(X)=\sum_{i=1}^{d}\left(\frac{G}{g_{i}}\right)(X)f_{i}(X).\] On the one hand, this implies that \(f_{0}\) is a constant polynomial. Otherwise, the left-hand side has degree greater than \(\deg(G)\), but the degree of the right-hand side is at most \(\deg(G)\), a contradiction. On the other hand, for each \(i\), \[g_{i}(X)\mid\left(G(X)f_{0}(X)-\sum_{1\leq j\neq i}\left(\frac{G}{g_{j}} \right)(X)f_{j}(X)\right)=\left(\frac{G}{g_{i}}\right)(X)f_{i}(X).\] Since \(G/g_{i}\) is coprime with \(g_{i}\), and \(\deg(f_{i})\leq\deg(g_{i})\) this is only possible if \(f_{i}\) is a multiple of \(g_{i}\). Hence, \[v=(\alpha_{0},\alpha_{1}g_{1}(\lambda),\ldots,\alpha_{d}g_{d}(\lambda)),\] for some scalars \(\alpha_{0},\ldots,\alpha_{d}\in\mathbb{F}_{q}\). Moreover, since \(\langle v\rangle_{\mathbb{F}_{q^{n}}}\in\Pi\), \(\alpha_{0}=\alpha_{1}+\ldots+\alpha_{d}\). 
Hence, \(L_{U}\) intersects \(\Pi\) in the linear set \(L_{W}\), with
\[W=\left\{\sum_{i=1}^{d}\alpha_{i}(\mathbf{e}_{0}+g_{i}(\lambda)\mathbf{e}_{i})\colon\alpha_{i}\in\mathbb{F}_{q}\right\}.\]
Therefore, \(L_{U}\) intersects \(\Pi\) in a canonical subgeometry.

A sufficient condition to ensure the existence of pairwise coprime polynomials \(g_{0},\ldots,g_{d}\in\mathbb{F}_{q}[X]\) such that \(\deg(g_{i})=k_{i}-1\) is to choose the size of the ground field large enough.

**Proposition 4.6**.: _Consider \(U=JV_{q,n}(\lambda,t;k_{0},\ldots,k_{d})\) as in Construction 4.1, with \(k_{0}+\ldots+k_{d}\leq t+d\). Assume that_
\[\sum_{i=0}^{d}k_{i}-d-1\leq q.\]
_Then \(L_{U}\) is of proper \(d\)-minimum size._

Proof.: By the hypothesis, we can consider \(d+1\) subsets \(S_{0},\ldots,S_{d}\) of \(\mathbb{F}_{q}\) that are pairwise disjoint and such that \(|S_{i}|=k_{i}-1\), for each \(i\in\{0,\ldots,d\}\). Then, we can define \(g_{i}(x)=\prod_{\alpha\in S_{i}}(x-\alpha)\). So the assertion follows by Theorem 4.5.

Another sufficient condition to ensure the existence of pairwise coprime polynomials \(g_{0},\ldots,g_{d}\in\mathbb{F}_{q}[X]\) such that \(\deg(g_{i})=k_{i}-1\) is that the \(g_{i}\)'s are distinct monic irreducible polynomials over \(\mathbb{F}_{q}\). It is well known, see e.g. [14, Theorem 3.25], that the number of monic irreducible polynomials of degree \(s\) over the finite field \(\mathbb{F}_{q}\) is given by Gauss's formula
\[\frac{1}{s}\sum_{h\mid s}\mu(s/h)q^{h},\]
where \(h\) runs over the set of all positive divisors of \(s\) and \(\mu\) denotes the Möbius function.

**Remark 4.7**.: We note the following lower bound on the number of monic irreducible polynomials of degree \(s\) over \(\mathbb{F}_{q}\), see e.g. [3]:
\[\frac{1}{s}\sum_{h\mid s}\mu\left(s/h\right)q^{h}\geq\frac{q^{s}-2q^{s/2}}{s}.\]

So we get the following corollary.

**Corollary 4.8**.: _Consider \(U=JV_{q,n}(\lambda,t;k_{0},\ldots,k_{d})\) as in Construction 4.1, with \(k_{0}+\ldots+k_{d}\leq t+d\). For each \(s=1,\ldots,t\), suppose that_
\[|\{i\colon k_{i}-1=s\}|\leq\frac{q^{s}-2q^{s/2}}{s}.\]
_Then \(L_{U}\) is of proper \(d\)-minimum size._

Clearly, if the rank of a linear set \(L_{U}\) obtained from Construction 4.1 is greater than \(n+d\), then every hyperplane has weight at least \(d+1\) in \(L_{U}\), so \(L_{U}\) cannot be of proper \(d\)-minimum size. In case the rank exceeds \(n+d\), we can prove that \(L_{U}\) is of \((1,d)\)-minimum size under some constraints on the rank.

**Proposition 4.9**.: _Let \(U=JV_{q,n}(\lambda,t;k_{0},\ldots,k_{d})\) be as in Construction 4.1. If \(k_{0}+k_{d-1}+k_{d}\leq t+2\), then \(L_{U}\) is a \((1,d)\)-minimum size \(\mathbb{F}_{q}\)-linear set._

Proof.: Let
\[U^{\prime}=\{(f_{0}(\lambda),\ldots,f_{d-1}(\lambda)+\lambda^{k_{d-1}-1}f_{d}(\lambda),f_{d}(\lambda))\colon f_{i}\in\mathbb{F}_{q}[X],\,\deg(f_{i})<k_{i}\}.\]
Then \(U^{\prime}\) is \(\operatorname{GL}(d+1,q^{n})\)-equivalent to \(U\) via the \(\mathbb{F}_{q^{n}}\)-linear map
\[\varphi:v=(v_{0},\ldots,v_{d})\mapsto v+v_{d}\lambda^{k_{d-1}-1}\mathbf{e}_{d-1}.\]
The point \(\langle(0,\ldots,0,-\lambda^{k_{d-1}-1},1)\rangle_{\mathbb{F}_{q^{n}}}\) has weight \(1\) in \(L_{U}\) and is mapped to the point \(E_{d}\) by \(\varphi\). So \(E_{d}\) has weight \(1\) in \(L_{U^{\prime}}\). We prove that \(|L_{U^{\prime}}|=q^{k-1}+|L_{\overline{U}}|\), where \(\overline{U}=U^{\prime}+E_{d}\leq_{q}\mathbb{F}_{q^{n}}^{d+1}/E_{d}\).
Note that \(\mathbb{F}_{q^{n}}^{d+1}/E_{d}\) can be identified with \(\mathbb{F}_{q^{n}}^{d}\), and \(\overline{U}=U^{\prime}+E_{d}\leq_{q}\mathbb{F}_{q^{n}}^{d+1}/E_{d}\) with
\[\overline{U}=JV_{q,n}(\lambda,t;k_{0},\ldots,k_{d-2},k_{d-1}+k_{d}-1).\]
By hypothesis \(k_{0}+k_{d-1}+k_{d}-1\leq t+1\), and so \(k_{0},\ldots,k_{d-2},k_{d-1}+k_{d}-1\) indeed satisfy the hypothesis of Construction 4.1 when rearranged in descending order. Therefore, \(|L_{\overline{U}}|=q^{k-2}+\ldots+q^{k-d}+1\). Since \(|L_{U}|=|L_{U^{\prime}}|=q^{k-1}+\ldots+q^{k-d}+1=q^{k-1}+|L_{\overline{U}}|\), we have the assertion.

The above proposition, together with Corollary 4.4, allows us to construct \((1,d)\)-minimum size linear sets whose ranks exceed \(n+d\).

**Corollary 4.10**.: _There exist \((1,d)\)-minimum size \(\mathbb{F}_{q}\)-linear sets in \(\mathrm{PG}(d,q^{n})\), \(d\geq 2\), of rank \(k\), whenever_
\[d<k\leq\begin{cases}d\frac{n+1}{2}+1&\text{if $n$ is odd},\\ d\frac{n}{2}+2&\text{if $n$ is even}.\end{cases}\]

### Generalizing the Caserta construction

In [16, Theorem 4.1], a construction is given of linear sets on the projective line, based on the more general framework exploited in [10] and [22]. In this subsection, we generalize this to higher dimensions. The construction starts from an \(\mathbb{F}_{q}\)-linear set \(L_{U^{\prime}}\) in \(\mathrm{PG}(d,q^{t})\), and yields an \(\mathbb{F}_{q}\)-linear set in \(\mathrm{PG}(d,q^{st})\). Moreover, the weight distribution of \(L_{U}\) is completely determined by the weight distribution of \(L_{U^{\prime}}\).

**Construction 4.11**.: _Suppose that \(n=st\) with \(s,t>1\). Let \(U^{\prime}\) be an \(\mathbb{F}_{q}\)-subspace of \(\mathbb{F}_{q^{t}}^{d+1}\subseteq\mathbb{F}_{q^{n}}^{d+1}\) with \(\dim_{\mathbb{F}_{q}}(U^{\prime})=k^{\prime}>0\). Let \(Z\) be an \(\mathbb{F}_{q^{t}}\)-subspace of \(\mathbb{F}_{q^{n}}\) of dimension \(r>0\), such that \(1\notin Z\). Define_
\[C_{q,s,t}(Z,U^{\prime}):=\{(z+u_{0},u_{1},\ldots,u_{d})\colon z\in Z,(u_{0},\ldots,u_{d})\in U^{\prime}\}\subseteq\mathbb{F}_{q^{n}}^{d+1},\]
_which we will simply denote by \(U\). Then_

1. _the_ \(\mathbb{F}_{q}\)_-linear set_ \(L_{U}\subseteq\mathrm{PG}(d,q^{n})\) _has rank_ \(rt+k^{\prime}\)_,_
2. \(|L_{U}|=q^{rt}\,|L_{U^{\prime}}\setminus\{E_{0}\}|+1\)_,_
3. \(\mathrm{w}_{L_{U}}(E_{0})=rt+\mathrm{w}_{L_{U^{\prime}}}(E_{0})\)_,_
4. \(N_{i}(L_{U})=q^{rt}(N_{i}(L_{U^{\prime}})-\delta_{i,\mathrm{w}_{L_{U^{\prime}}}(E_{0})})+\delta_{i,\mathrm{w}_{L_{U}}(E_{0})}\)_,_

_where \(\delta_{i,j}\) denotes the Kronecker symbol._

Proof.: (1) Since \(Z\) is an \(\mathbb{F}_{q^{t}}\)-subspace of \(\mathbb{F}_{q^{n}}\), and \(1\notin Z\), \(Z\cap\mathbb{F}_{q^{t}}=\{0\}\). Furthermore, since \(U^{\prime}\) is an \(\mathbb{F}_{q}\)-subspace of \(\mathbb{F}_{q^{t}}^{d+1}\), \(Z\cap\{u_{0}:(u_{0},\ldots,u_{d})\in U^{\prime}\}=\{0\}\).
Hence, \[U=(Z\times\{0\}^{d})\oplus_{\mathbb{F}_{q}}U^{\prime}.\] Therefore, \[\dim_{\mathbb{F}_{q}}U=\dim_{\mathbb{F}_{q}}Z+\dim_{\mathbb{F}_{q}}U^{\prime }=rt+k^{\prime}.\] (3) Similarly, \[\mathrm{w}_{L_{U}}(E_{0}) =\dim_{\mathbb{F}_{q}}\left(Z\oplus_{\mathbb{F}_{q}}\left\{u_{0 }:u_{0}\mathbf{e}_{0}\in U^{\prime}\right\}\right)=\dim_{\mathbb{F}_{q}}Z+ \dim_{\mathbb{F}_{q}}(\{u_{0}:u_{0}\mathbf{e}_{0}\in U^{\prime}\})\] \[=rt+\mathrm{w}_{L_{U^{\prime}}}(E_{0}).\] (2,4) Suppose that \[\langle z\mathbf{e}_{0}+u\rangle_{\mathbb{F}_{q^{n}}}=\langle z^{\prime} \mathbf{e}_{0}+v\rangle_{\mathbb{F}_{q^{n}}},\] with \(z,z^{\prime}\in Z\), and \(u,v\in U^{\prime}\setminus\langle\mathbf{e}_{0}\rangle_{\mathbb{F}_{q^{n}}}\). Then \(z\mathbf{e}_{0}+u=\alpha(z^{\prime}\mathbf{e}_{0}+v)\) for some \(\alpha\in\mathbb{F}_{q^{n}}\). Since \(u,v\) are not multiples of \(\mathbf{e}_{0}\), there must exist some position \(j>0\) such that \(u_{j},v_{j}\neq 0\). This implies that \(\alpha=v_{j}/u_{j}\in\mathbb{F}_{q^{t}}\). We also have that \(z+u_{0}=\alpha(z^{\prime}+v_{0})\), hence \[z-\alpha z^{\prime}=\alpha v_{0}-u_{0}\] Recall that \(Z\) is an \(\mathbb{F}_{q^{t}}\)-subspace, and that \(u_{0},v_{0},\alpha\in\mathbb{F}_{q^{t}}\). Therefore, the left-hand side of the above equality is in \(Z\), and the right-hand side is in \(\mathbb{F}_{q^{t}}\). Since \(Z\cap\mathbb{F}_{q^{t}}=\{0\}\), this implies that \(z=\alpha z^{\prime}\) and therefore \(u=\alpha v\). Vice versa, if \(z\in Z\), \(u\in U^{\prime}\setminus\langle\mathbf{e}_{0}\rangle_{\mathbb{F}_{q^{n}}}\) and \(\alpha u\in U^{\prime}\) for some \(\alpha\in\mathbb{F}_{q^{t}}\), then \(\langle z\mathbf{e}_{0}+u\rangle_{\mathbb{F}_{q^{n}}}=\langle\alpha z\mathbf{e }_{0}+\alpha u\rangle_{\mathbb{F}_{q^{n}}}\). This proves that \[\operatorname{w}_{L_{U}}(\langle z+u\rangle_{\mathbb{F}_{q^{n}}})=\dim_{ \mathbb{F}_{q}}\{\alpha\in\mathbb{F}_{q^{t}}:\alpha u\in U^{\prime}\}= \operatorname{w}_{L_{U^{\prime}}}(\langle u\rangle_{\mathbb{F}_{q^{n}}}).\] Hence, varying \(z\), we see that every point of \(L_{U^{\prime}}\setminus\{E_{0}\}\) gives rise to \(|Z|\) points of \(L_{U}\setminus\{E_{0}\}\) of the same weight, and this accounts for all points of \(L_{U}\setminus\{E_{0}\}\). Points (2) and (4) follow directly from this observation and the fact that \(E_{0}\in L_{U}\). **Remark 4.12**.: We remark that \(L_{U^{\prime}}\) is contained in \(L_{U}\) and the weight distribution and rank of \(L_{U}\) in the above construction only depends on the weight distribution of \(L_{U^{\prime}}\) and \(\operatorname{w}_{L_{U^{\prime}}}(E_{0})\), but not on the specific structure of \(U^{\prime}\). In particular, if \(\varphi\in\Gamma\mathrm{L}(d+1,q^{t})\), and \(\varphi\) fixes \(\langle\mathbf{e}_{0}\rangle\), then \(C_{q,s,t}(Z,U^{\prime})\) and \(C_{q,s,t}(Z,\varphi(U^{\prime}))\) have the same rank and weight distribution. Given some minor conditions, the above construction preserves the property of being \((r,d)\)-minimum size. **Proposition 4.13**.: _Let \(U=C_{q,s,t}(Z,U^{\prime})\) be as in Construction 4.11. If \(L_{U^{\prime}}\) is an \(\mathbb{F}_{q}\)-linear set of \((r,d,\Omega)\)-minimum size, and \(E_{0}\in L_{U^{\prime}}\setminus\Omega\), then \(L_{U}\) is also of \((r,d,\Omega)\)-minimum size._ Proof.: Suppose that \(\Omega=\mathrm{PG}(W,\mathbb{F}_{q^{n}})\), and that the rank of \(L_{U^{\prime}}\) is \(k^{\prime}\). 
Since \(L_{U^{\prime}}\) is of \((r,d,\Omega)\)-minimum size, \(L_{U^{\prime}}\) meets \(\Omega\) in a canonical subgeometry of \(\Omega\), and \[|L_{U^{\prime}}|=q^{k^{\prime}-1}+\ldots+q^{k^{\prime}-r}+|L_{\overline{U^{ \prime}}}|,\] where \(\overline{U^{\prime}}:=U^{\prime}+W\leq_{q}\mathbb{F}_{q^{n}}^{d+1}/W\). Since \(E_{0}\notin\Omega\), up to \(\mathrm{GL}(d+1,q^{n})\)-equivalence, we can suppose that \(\Omega\) is defined by the equations \(X_{0}=\ldots=X_{d-r}=0\). Hence \(\mathbb{F}_{q^{n}}^{d+1}/W\) can be identified with \(\mathbb{F}_{q^{n}}^{d-r+1}\) in an obvious way. Now, an element \(z+u\in U\) belongs to \(W\) if and only if \(z+u_{0}=u_{1}=\ldots=u_{d-r}=0\) if and only if \(z=u_{0}=\ldots=u_{d-r}=0\). Therefore, \(U\cap W=U^{\prime}\cap W\) and so \(\Omega\) also meets \(L_{U}\) in a canonical subgeometry. Moreover, by Construction 4.11, \[\overline{U}=U+W=\{z+\overline{u}\colon z\in Z,\ \overline{u}\in\overline{U^{\prime}} \}\subseteq\mathbb{F}_{q^{n}}^{d+1}/W,\] and the corresponding linear set \(L_{\overline{U}}\) has size \(q^{rt}(|L_{\overline{U^{\prime}}}|-1)+1\). Therefore we have \[|L_{U}| =q^{rt}(|L_{U^{\prime}}|-1)+1=q^{rt}(q^{k^{\prime}-1}+\ldots+q^{k^ {\prime}-r}+|L_{\overline{U^{\prime}}}|-1)+1=q^{k-1}+\ldots+q^{k-r}+|L_{\overline{U}}|,\] with \(k=rt+k^{\prime}\) the rank of \(L_{U}\). We can apply Construction 4.11 with \(U^{\prime}\) as in Construction 4.1, obtaining the following families of \(d\)-minimum size linear sets. **Theorem 4.14**.: _Consider \(U^{\prime}=JV_{q,t}(\lambda,t^{\prime};k_{0},\ldots,k_{d})\) where \(t^{\prime}\mid t\) as in Construction 4.1, and choose \(\varphi\in\mathrm{GL}(d+1,q^{t})\) such that \(E_{0}\in L_{\varphi(U^{\prime})}\). Now define \(U=C_{q,s,t}(Z,\varphi(U^{\prime}))\) as in Construction 4.11, with \(Z\) an \(\mathbb{F}_{q^{t}}\)-subspace of rank \(r>0\), not containing 1. Then \(L_{U}\) is a \(d\)-minimum size \(\mathbb{F}_{q}\)-linear set of rank \(k=rt+k_{0}+\ldots+k_{d}\). Moreover, the weight spectrum of \(L_{U}\) is_ \[\begin{cases}\left(1,\ldots,k_{1},k_{0},rt+\operatorname{w}_{L_{\varphi(U^{ \prime})}}(E_{0})\right)&\text{if }\operatorname{w}_{L_{\varphi(U^{\prime})}}(E_{0})<k_{0}\text{ and }k_{1}<k_{0},\\ \left(1,\ldots,k_{1},rt+\operatorname{w}_{L_{\varphi(U^{\prime})}}(E_{0}) \right)&\text{otherwise.}\end{cases}\] Proof.: The \(\mathbb{F}_{q}\)-linear set \(L_{\varphi(U^{\prime})}\) has the same weight spectrum, weight distribution, and size as \(L_{U^{\prime}}\). So the assertions follow by applying Construction 4.11 and Remark 4.3. **Remark 4.15**.: Using [11, Remark 2.19] and Construction 4.11 (3,4), one could in fact also determine the weight distribution of the linear set in the above theorem. The above construction gives new examples of proper \(d\)-minimum size linear sets. **Corollary 4.16**.: _In the hypothesis of Theorem 4.14, suppose that \(L_{U^{\prime}}\) is a \((d,d,\Pi)\)-minimum size \(\mathbb{F}_{q}\)-linear set, with \(\Pi=\operatorname{PG}(W,\mathbb{F}_{q^{n}})\). Suppose that \(E_{0}\in L_{\varphi(U^{\prime})}\backslash\tilde{\Pi}\), with \(\tilde{\Pi}=\operatorname{PG}(\varphi(W),\mathbb{F}_{q^{n}})\).
Then \(L_{U}\) is a proper \(d\)-minimum size \(\mathbb{F}_{q}\)-linear set in \(\operatorname{PG}(d,q^{n})\)._ Proof.: The linear set \(L_{U^{\prime}}\) is of \((d,d,\Pi)\)-minimum size, so the hyperplane \(\Pi=\operatorname{PG}(W,\mathbb{F}_{q^{n}})\) of \(\operatorname{PG}(d,q^{n})\) meets \(L_{U^{\prime}}\) in a canonical subgeometry of \(\Pi\) and \[|L_{U^{\prime}}|=q^{m-1}+\ldots+q^{m-d}+1,\] where \(m\) is the rank of \(L_{U^{\prime}}\). It follows that \(\tilde{\Pi}\) also meets \(L_{\varphi(U^{\prime})}\) in a canonical subgeometry of \(\tilde{\Pi}\), that is, \(L_{\varphi(U^{\prime})}\) is of proper \(d\)-minimum size as well. The assertion follows by Proposition 4.13. Construction 4.1 provides constructions of \(d\)-minimum size linear sets admitting points of complementary weights. Using Theorem 4.14, it is possible to construct proper \(d\)-minimum size linear sets that do not have this property, as we will see in the next example. This proves that in general a \(d\)-minimum size linear set need not contain independent points whose weights sum to the rank of the linear set. So in general, as already observed in [16] for the projective line, being minimum size does not determine the weight spectrum and distribution of a linear set. **Example 4.17**.: Consider \[U^{\prime}=JV_{q,6}(\lambda,6;2,2,2)\] as in Construction 4.1. Then \(L_{U^{\prime}}\) is an \(\mathbb{F}_{q}\)-linear set of rank \(6\) in \(\operatorname{PG}(2,q^{6})\) having size \(q^{5}+q^{4}+1\) and points of weight at most \(2\). Moreover, \(\operatorname{w}_{L_{U^{\prime}}}(E_{0})+\operatorname{w}_{L_{U^{\prime}}}(E _{1})+\operatorname{w}_{L_{U^{\prime}}}(E_{2})=2+2+2=6\) is equal to the rank of \(L_{U^{\prime}}\). Define \[\varphi\in\operatorname{GL}(3,q^{6}):(x,y,z)\mapsto(x,y-\lambda x,z).\] Then the \(\mathbb{F}_{q}\)-linear set \(L_{U^{\prime\prime}}\) in \(\operatorname{PG}(2,q^{6})\), with \[U^{\prime\prime}=\varphi(U^{\prime})=\{(\alpha_{0}+\alpha_{1}\lambda,\beta_{0 }+\beta_{1}\lambda-\alpha_{1}\lambda^{2},\gamma_{0}+\gamma_{1}\lambda)\colon \alpha_{i},\beta_{i},\gamma_{i}\in\mathbb{F}_{q}\}\subseteq\mathbb{F}_{q^{6}} ^{3}\] has the same rank, weight spectrum and weight distribution as \(L_{U^{\prime}}\). Note that \(\operatorname{w}_{L_{U^{\prime\prime}}}(E_{0})=1\). Choose a \(1\)-dimensional \(\mathbb{F}_{q^{6}}\)-subspace \(Z\neq\mathbb{F}_{q^{6}}\) of \(\mathbb{F}_{q^{12}}\). By Theorem 4.14, the \(\mathbb{F}_{q}\)-linear set \(L_{U}\) of \(\operatorname{PG}(2,q^{12})\), with \[U=C_{q,2,6}(Z,U^{\prime\prime})\] has rank \(12\) and size \(q^{11}+q^{10}+1\). So, it is a \(2\)-minimum size linear set. Note that the weight spectrum of \(L_{U}\) is \((1,2,7)\), and so there do not exist three points of complementary weights. In particular, \(L_{U}\) cannot be obtained from Construction 4.1. In some cases, Theorem 4.14 gives us linear sets admitting points of complementary weights, but with a different weight distribution than those of Construction 4.1, as stated in the following theorem. **Theorem 4.18**.: _Consider_ \[U^{\prime}=JV_{q,t}(\lambda,t;k_{0},\ldots,k_{d})\] _and let \(\varphi_{i}\) be the linear map swapping coordinates 0 and \(i\in\{0,\ldots,d\}\) (with \(\varphi_{0}=\operatorname{id}\)). Consider_ \[U=C_{q,s,t}(Z,\varphi_{i}(U^{\prime})).\] 1. _If_ \(k_{i}=k_{0}\)_, then there exists an_ \(\mathbb{F}_{q}\)_-linear set obtained from Construction_ 4.1 _with the same weight distribution as_ \(L_{U}\)_._ 2.
_If_ \(k_{i}<k_{0}-1\)_, then there does not exist an_ \(\mathbb{F}_{q}\)_-linear set obtained from Construction_ 4.1 _with the same weight distribution as_ \(L_{U}\)_._ Proof.: Write \(n=st\). (1) If \(k_{i}=k_{0}\), choose a primitive element \(\mu\) of \(\mathbb{F}_{q^{n}}\). Consider \(U_{2}=JV_{q,n}(\mu,n;k_{0}+rt,k_{1},\ldots,k_{d})\). Then \(L_{U}\) and \(L_{U_{2}}\) have the same weight distribution by Remark 4.3 (see also [11, Remark 2.19]) and Construction 4.11 (3,4). (2) Now assume that \(k_{i}<k_{0}-1\), and suppose that there exists some \(\mathbb{F}_{q}\)-subspace \[U_{3}=JV_{q,n}(\mu,t^{\prime};k^{\prime}_{0},\ldots,k^{\prime}_{d})\subseteq \mathbb{F}_{q^{n}}^{d+1}\] such that \(L_{U}\) and \(L_{U_{3}}\) have the same weight distribution. Since \(L_{U}\) has a unique point of weight \(rt+k_{i}\) by Construction 4.11 (3,4), we see that by Remark 4.3, \(k^{\prime}_{0}=rt+k_{i}\). Furthermore, the second largest weight of \(L_{U}\) and \(L_{U_{3}}\) is respectively \(k_{0}\) and \(k^{\prime}_{1}\), hence \(k_{0}=k^{\prime}_{1}\). Let \(m^{\prime}\) denote the number of indices \(j\) with \(k^{\prime}_{j}=k_{0}\), i.e. \(k^{\prime}_{1}=\ldots=k^{\prime}_{m^{\prime}}>k^{\prime}_{m^{\prime}+1}\). Then, using Remark 4.3 (see also [11, Remark 2.19]), \[N_{k_{0}}(L_{U_{3}})=q^{k^{\prime}_{0}-k^{\prime}_{1}+1}\frac{q^{m^{\prime}}- 1}{q-1}=q^{rt+k_{i}-k_{0}+1}\frac{q^{m^{\prime}}-1}{q-1}.\] On the other hand, by Construction 4.11 (4) and Remark 4.3 (see also [11, Remark 2.19]), \[N_{k_{0}}(L_{U})=q^{rt}N_{k_{0}}(L_{U^{\prime}})=q^{rt}\frac{q^{m}-1}{q-1},\] with \(m\) the number of indices \(j\) with \(k_{j}=k_{0}\). Therefore, \[q^{rt+k_{i}-k_{0}+1}\frac{q^{m^{\prime}}-1}{q-1}=q^{rt}\frac{q^{m}-1}{q-1}.\] Since \(\frac{q^{m}-1}{q-1}\) and \(\frac{q^{m^{\prime}}-1}{q-1}\) are coprime with \(q\), this implies that \(k_{i}=k_{0}-1\), contradicting the assumption that \(k_{i}<k_{0}-1\). ### Regarding equivalence We show that the two different types of \(\mathbb{F}_{q}\)-subspaces defining the \(d\)-minimum size linear sets of Construction 4.1 and Theorem 4.14 are \(\Gamma\mathrm{L}(d+1,q^{n})\)-inequivalent, even if the associated linear sets have the same weight spectrum and distribution (see Theorem 4.18 (1)), when the dimension of \(Z\) is the maximum possible. The trace function \(\mathrm{Tr}_{q^{n}/q}\) of \(\mathbb{F}_{q^{n}}\) over \(\mathbb{F}_{q}\) defines a non-degenerate symmetric bilinear form as follows: \[(a,b)\in\mathbb{F}_{q^{n}}\times\mathbb{F}_{q^{n}}\mapsto\mathrm{Tr}_{q^{n}/q} (ab)\in\mathbb{F}_{q}.\] Hence, for any subset \(S\) of \(\mathbb{F}_{q^{n}}\) we can define the orthogonal complement as \[S^{\perp}=\{a\in\mathbb{F}_{q^{n}}\colon\mathrm{Tr}_{q^{n}/q}(ab)=0,\ \ \forall b\in S\}.\] Note that if \(S\) is an \(\mathbb{F}_{q^{t}}\)-subspace of \(\mathbb{F}_{q^{n}}\), then \(S^{\perp}\) is an \(\mathbb{F}_{q^{t}}\)-subspace as well. Given an ordered \(\mathbb{F}_{q}\)-basis \(\mathcal{B}=(\xi_{0},\ldots,\xi_{n-1})\) of \(\mathbb{F}_{q^{n}}\), there exists a unique ordered \(\mathbb{F}_{q}\)-basis \(\mathcal{B}^{*}=(\xi_{0}^{*},\ldots,\xi_{n-1}^{*})\) of \(\mathbb{F}_{q^{n}}\) such that \(\mathrm{Tr}_{q^{n}/q}(\xi_{i}\xi_{j}^{*})=\delta_{ij}\), for \(i,j\in\{0,\ldots,n-1\}\), called the _dual basis_ of \(\mathcal{B}\), see e.g. [14, Definition 2.30]. **Lemma 4.19** ([17, Corollary 2.7]).: _Let \(\lambda\in\mathbb{F}_{q^{n}}\) and suppose that \(\mathcal{B}=(1,\lambda,\ldots,\lambda^{n-1})\) is an ordered \(\mathbb{F}_{q}\)-basis of \(\mathbb{F}_{q^{n}}\).
Let \(f(x)=a_{0}+a_{1}x+\ldots+a_{n-1}x^{n-1}+x^{n}\) be the minimal polynomial of \(\lambda\) over \(\mathbb{F}_{q}\). Then the dual basis \(\mathcal{B}^{*}\) of \(\mathcal{B}\) is_ \[\mathcal{B}^{*}=(\delta^{-1}\gamma_{0},\ldots,\delta^{-1}\gamma_{n-1}),\] _where \(\delta=f^{\prime}(\lambda)\) and \(\gamma_{i}=\sum_{j=1}^{n-i}\lambda^{j-1}a_{i+j}\), for every \(i\in\{0,\ldots,n-1\}\)._ **Theorem 4.20**.: _Suppose that \(n=(s+1)t\), with \(s,t>1\). Consider \(U^{\prime}=JV_{q,t}(\mu,t;k_{0},\ldots,k_{d})\) as in Construction 4.1, with \(k_{0}<t-1\). Let \(\varphi_{i}\) be the linear map swapping coordinates 0 and \(i\in\{0,\ldots,d\}\) (with \(\varphi_{0}=\mathrm{id}\)) and define_ \[U_{1}=C_{q,s,t}(Z,\varphi_{i}(U^{\prime})),\] _as in Construction 4.11, with \(Z\) an \(\mathbb{F}_{q^{t}}\)-subspace of dimension \(s\), not containing 1. Consider \(U_{2}=JV_{q,n}(\lambda,n;h_{0},k_{1},\ldots,k_{d})\) as in Construction 4.1, with \(h_{0}=st+k_{0}\). Then the \(\mathbb{F}_{q}\)-subspaces \(U_{1}\) and \(U_{2}\) are \(\Gamma\mathrm{L}(d+1,q^{n})\)-inequivalent._ Proof.: Suppose that \(k_{i}<k_{0}-1\). Then, by Theorem 4.18, \(L_{U_{1}}\) and \(L_{U_{2}}\) have a distinct weight distribution, hence \(U_{1}\) and \(U_{2}\) cannot be \(\Gamma\mathrm{L}(d+1,q^{n})\)-equivalent. So suppose that \(k_{i}\in\{k_{0}-1,k_{0}\}\) and suppose by contradiction that \(U_{1}\) and \(U_{2}\) are \(\Gamma\mathrm{L}(d+1,q^{n})\)-equivalent via an element \(\varphi\). Since \(h_{0}>k_{i}\), for every \(i\in\{1,\ldots,d\}\), the point \(E_{0}\) is the only point in \(L_{U_{1}}\) and in \(L_{U_{2}}\) of weight \(h_{0}\). So, we have that \(\varphi(U_{1}\cap E_{0})=U_{2}\cap E_{0}\), that is \[aS_{1}^{\rho}=S_{2},\] for some \(a\in\mathbb{F}_{q^{n}}^{*}\) and \(\rho\in\mathrm{Aut}(\mathbb{F}_{q^{n}})\), with \(S_{1}=Z\oplus\langle 1,\mu,\ldots,\mu^{k_{0}-1}\rangle_{\mathbb{F}_{q}}\) and \(S_{2}=\langle 1,\lambda,\ldots,\lambda^{h_{0}-1}\rangle_{\mathbb{F}_{q}}\). In particular, we have that \(aZ^{\rho}\subseteq S_{2}\) and so \((aZ^{\rho})^{\perp}\supseteq S_{2}^{\perp}\). Note that \(\dim_{\mathbb{F}_{q^{t}}}(aZ^{\rho})=\dim_{\mathbb{F}_{q^{t}}}(Z)=s\). This implies that \(\dim_{\mathbb{F}_{q}}((aZ^{\rho})^{\perp})=n-st=t\) and hence \((aZ^{\rho})^{\perp}\) is an \(\mathbb{F}_{q^{t}}\)-subspace of \(\mathbb{F}_{q^{n}}\) of dimension one. Consider the ordered \(\mathbb{F}_{q}\)-basis \(\mathcal{B}=(1,\lambda,\ldots,\lambda^{n-1})\) of \(\mathbb{F}_{q^{n}}\) and its dual basis \(\mathcal{B}^{*}=(\lambda_{0}^{*},\ldots,\lambda_{n-1}^{*})\). So we have that \(S_{2}^{\perp}=\langle\lambda_{h_{0}}^{*},\ldots,\lambda_{n-1}^{*}\rangle_{ \mathbb{F}_{q}}\) and since \(k_{0}<t-1\), we have that \(h_{0}<n-1\). By Lemma 4.19 it follows that \[\lambda_{n-2}^{*}=\delta^{-1}(a_{n-1}+\lambda),\] and \[\lambda_{n-1}^{*}=\delta^{-1},\] where \(f(x)=a_{0}+a_{1}x+\ldots+a_{n-1}x^{n-1}+x^{n}\) is the minimal polynomial of \(\lambda\) over \(\mathbb{F}_{q}\) and \(\delta=f^{\prime}(\lambda)\). Now, since \(\lambda_{n-2}^{*},\lambda_{n-1}^{*}\in(aZ^{\rho})^{\perp}\) and since \((aZ^{\rho})^{\perp}\) has dimension one over \(\mathbb{F}_{q^{t}}\), it follows that \[\frac{\lambda_{n-2}^{*}}{\lambda_{n-1}^{*}}=a_{n-1}+\lambda\in\mathbb{F}_{q^{t}},\] that is \(\lambda\in\mathbb{F}_{q^{t}}\), a contradiction. ## 5 Below the De Beule-Van de Voorde bound In this section, we will provide constructions of linear sets \(L_{U}\) in \(\mathrm{PG}(d,q^{n})\), with \(d>2\), that are of \((r,d)\)-minimum size but not of \(d\)-minimum size.
They have maximum geometric field of linearity \(\mathbb{F}_{q}\), and admit two subspaces of complementary weights. For our aims, we will suppose that one of these subspaces intersects \(L_{U}\) in a linear set with a greater field of linearity. This gives us the following constructions. **Theorem 5.1**.: _Let \(n=st\), with \(s,t>1\), and suppose that_ \(\blacktriangleright\)\(U_{1}\) _is a \(k_{1}\)-dimensional \(\mathbb{F}_{q^{t}}\)-subspace of \(\mathbb{F}_{q^{n}}^{d_{1}+1}\),_ \(\blacktriangleright\)\(U_{2}\) _is a \(k_{2}\)-dimensional \(\mathbb{F}_{q}\)-subspace of \(\mathbb{F}_{q^{t}}^{d_{2}+1}\subseteq\mathbb{F}_{q^{n}}^{d_{2}+1}\)._ _Define \(U=U_{1}\times U_{2}\), and \(d=d_{1}+d_{2}+1\). Then \(L_{U}\) is an \(\mathbb{F}_{q}\)-linear set of \(\mathrm{PG}(d,q^{n})\) of rank \(k_{1}t+k_{2}\), with_ \[|L_{U}|=|L_{U_{1}}|+q^{k_{1}t}|L_{U_{2}}|.\] _Moreover, its weight distribution satisfies_ \[N_{i}(L_{U})=N_{i}(L_{U_{1}})+q^{k_{1}t}N_{i}(L_{U_{2}}).\] Proof.: Take a vector \(u\in U_{1}\) and \(v\in U_{2}\) with \((u,v)\neq\mathbf{0}\). Then \[\mathrm{w}_{L_{U}}(\langle(u,v)\rangle_{\mathbb{F}_{q^{n}}})=\dim_{\mathbb{F} _{q}}\{\alpha\in\mathbb{F}_{q^{n}}:\alpha(u,v)\in U\}.\] Evidently, \(\alpha(u,v)\in U\) if and only if \(\alpha u\in U_{1}\) and \(\alpha v\in U_{2}\). If \(v\neq\mathbf{0}\), then \(\alpha v\in U_{2}\) implies that \(\alpha\in\mathbb{F}_{q^{t}}\), and since \(U_{1}\) is an \(\mathbb{F}_{q^{t}}\)-subspace, \(\alpha u\) is automatically in \(U_{1}\). Therefore, every point \(\langle v\rangle_{\mathbb{F}_{q^{n}}}\) of \(L_{U_{2}}\) gives rise to the \(q^{k_{1}t}\) points \(\{\langle(u,v)\rangle_{\mathbb{F}_{q^{n}}}:u\in U_{1}\}\) of \(L_{U}\) with the same weight. If \(v=\mathbf{0}\), then we just need that \(\alpha u\in U_{1}\), hence in this way, every point of \(L_{U_{1}}\) gives rise to one point of \(L_{U}\) of the same weight. Since this accounts for all points of \(L_{U}\), the statement of the theorem follows. Using the above theorem, we are able to obtain constructions of linear sets in \(\mathrm{PG}(d,q^{n})\), with \(d\geq 3\), having maximum geometric field of linearity \(\mathbb{F}_{q}\) that are \((r,d)\)-minimum size with \(2\leq r<d\) and that are not \(d\)-minimum size. **Theorem 5.2**.: _Let \(n=st\), with \(s,t>1\), and suppose that_ \(\blacktriangleright\)\(U_{1}\) _is a \(k_{1}\)-dimensional \(\mathbb{F}_{q^{t}}\)-subspace of \(\mathbb{F}_{q^{n}}^{d_{1}+1}\), with \(k_{1}\leq d_{1}s\),_ \(\blacktriangleright\)\(U_{2}\) _is a \(k_{2}\)-dimensional \(\mathbb{F}_{q}\)-subspace of \(\mathbb{F}_{q^{t}}^{d_{2}+1}\), such that \(L_{U_{2}}\) is a proper \(d_{2}\)-minimum size \(\mathbb{F}_{q}\)-linear set._ _Define \(U=U_{1}\times U_{2}\), \(d=d_{1}+d_{2}+1\), and \(k=k_{1}t+k_{2}\). Then \(L_{U}\) is a \((d_{2},d)\)-minimum size \(\mathbb{F}_{q}\)-linear set of size_ \[|L_{U}|=q^{k-1}+q^{k-2}+\ldots+q^{k-d_{2}}+q^{k_{1}t}+|L_{U_{1}}|.\] _Hence, \(L_{U}\) is not \(d\)-minimum size if \(k_{2}\geq d_{2}+2\).
Furthermore, if \(d_{2}\geq 2\), then \(\mathbb{F}_{q}\) is the maximum geometric field of linearity of \(L_{U}\)._ Proof.: The \(\mathbb{F}_{q}\)-linear set \(L_{U_{2}}\subseteq\mathrm{PG}(d_{2},q^{n})\) is of proper \(d_{2}\)-minimum size, and so its size is \[|L_{U_{2}}|=q^{k_{2}-1}+\ldots+q^{k_{2}-d_{2}}+1.\] By Theorem 5.1, \(L_{U}\) has rank \(k=k_{1}t+k_{2}\) and size \[|L_{U_{1}}|+q^{k_{1}t}(q^{k_{2}-1}+\ldots+q^{k_{2}-d_{2}}+1)=|L_{U_{1}}|+q^{k- 1}+q^{k-2}+\ldots+q^{k-d_{2}}+q^{k_{1}t}.\] Moreover there exists a \((d_{2}-1)\)-space \(\Gamma=\mathrm{PG}(W,\mathbb{F}_{q^{n}})\) of \(\mathrm{PG}(d_{2},q^{n})\), with \(W\subseteq\mathbb{F}_{q^{n}}^{d_{2}+1}\), meeting \(L_{U_{2}}\) in a canonical subgeometry. Now, let \(W^{\prime}=\{0\}^{d_{1}+1}\times W\). Then \(W^{\prime}\) defines a \((d_{2}-1)\)-space of \(\mathrm{PG}(d,q^{n})\) meeting \(L_{U}\) in a canonical subgeometry. Identifying \(\mathbb{F}_{q^{n}}^{d+1}/W^{\prime}\) with \(\mathbb{F}_{q^{n}}^{d_{1}+2}\), we have that \(\overline{U}=U+W^{\prime}\leq_{q}\mathbb{F}_{q^{n}}^{d+1}/W^{\prime}\) with \[\overline{U}=U_{1}\times U^{\prime},\] where \(U^{\prime}\) is an \(\mathbb{F}_{q}\)-subspace of \(\mathbb{F}_{q^{n}}\) of dimension \(k_{2}-d_{2}\). So again, by Theorem 5.1, we have \(|L_{\overline{U}}|=q^{k_{1}t}+|L_{U_{1}}|\). Moreover, by (3), \(|L_{U_{1}}|\leq q^{(k_{1}-1)t}+\ldots+q^{t}+1\), and since \(k_{2}>d_{2}+1\) it follows that \[|L_{U_{1}}|<q^{k-d_{2}-1}+\ldots+q^{k-d}+1-q^{k_{1}t}.\] This implies that \(L_{U}\) is not of \(d\)-minimum size. Finally, the assertion on the geometric field of linearity follows from Remark 3.10. By the above theorem and Proposition 4.5, we get the following construction. **Corollary 5.3**.: _Let \(n=st\), with \(s,t>1\), and suppose that_ * \(U_{1}=JV_{q^{t},n}(\lambda,n;l_{0},\ldots,l_{d_{1}})\)_, and denote_ \(k_{1}=l_{0}+\ldots+l_{d_{1}}\)_,_ * \(U_{2}=JV_{q,t}(\mu,t;m_{0},\ldots,m_{d_{2}})\)_, and denote_ \(k_{2}=m_{0}+\ldots+m_{d_{2}}\)_,_ _with \(L_{U_{2}}\) satisfying the condition of Proposition 4.5. Define \(U=U_{1}\times U_{2}\). Then \(L_{U}\) is a \((d_{2},d)\)-minimum size \(\mathbb{F}_{q}\)-linear set, but not of \(d\)-minimum size. Moreover, if \(d_{2}\geq 2\), then \(\mathbb{F}_{q}\) is the maximum geometric field of linearity of \(L_{U}\)._ Proof.: Note that \[|L_{U_{1}}|=q^{(k_{1}-1)t}+\ldots+q^{(k_{1}-d_{1})t}+1<q^{k-d_{2}-1}+\ldots+q^ {k-d}+1-q^{k_{1}t},\] and then the assertion follows by Theorem 5.2. **Remark 5.4**.: Other examples of \((d_{2},d)\)-minimum size linear sets can be obtained by using as \(L_{U_{1}}\) or \(L_{U_{2}}\) in Theorem 5.2 the minimum size linear sets constructed in Theorem 4.14. **Remark 5.5**.: It is natural to consider \(\mathrm{PG}(d,q^{n})\), \(n\) not prime, and wonder what the maximal value of \(d_{2}\) is such that the above corollary implies the existence of an \(\mathbb{F}_{q}\)-linear set in \(\mathrm{PG}(d,q^{n})\) that is of \((d_{2},d)\)-minimum size, but not of \(d\)-minimum size, and has maximum geometric field of linearity \(\mathbb{F}_{q}\). So let \(t\) be the largest proper divisor of \(n\). Note that \(t\geq\sqrt{n}\). We want to construct a set \(U_{2}=JV_{q,t}(\mu,t;m_{0},\ldots,m_{d_{2}})\) with \(d_{2}\) maximal, such that it satisfies the conditions of Proposition 4.5. Hence, there must exist pairwise coprime polynomials \(g_{i}\) of degree \(m_{i}-1\) such that \((m_{0}-1)+\ldots+(m_{d_{2}}-1)\leq t-1\).
Let \(\delta(x)\) denote the maximum number of distinct monic irreducible polynomials over \(\mathbb{F}_{q}\) such that the sum of their degrees is smaller than \(x\). Then for any \(m\geq 1\), \(\delta(q^{m})\geq\frac{q^{m}-1}{m}\). Indeed, consider the minimal polynomials of the elements of \(\mathbb{F}_{q^{m}}^{*}\). Since every element of \(\mathbb{F}_{q^{m}}^{*}\) is the root of a unique such polynomial, their degrees sum to \(q^{m}-1\). Furthermore, the maximum degree equals \(m\), so there are at least \(\frac{q^{m}-1}{m}\) such polynomials. Hence, to answer the original question, asymptotically, \(d_{2}=\Omega(t/\log_{q}(t))=\Omega(\sqrt{n}/\log_{q}(n))\). We conclude this subsection with examples of linear sets of \((1,2)\)-minimum size that are not of \((2,2)\)-minimum size and have maximum geometric field of linearity \(\mathbb{F}_{q}\). **Proposition 5.6**.: _Let \(n=st\), with \(s>1\), and \(t>2\) prime. Suppose that the smallest prime that divides \(s\) is at least \(t\). Let_ * \(U_{1}\) _be a_ \(k_{1}\)_-dimensional_ \(\mathbb{F}_{q^{t}}\)_-subspace of_ \(\mathbb{F}_{q^{n}}\)_,_ * \(U_{2}=JV_{q,t}(\mu,t;m_{0},m_{1})\)_, with_ \(t=m_{0}+m_{1}\)_._ _Define \(U=U_{1}\times U_{2}\). Then \(L_{U}\) is a \((1,2)\)-minimum size \(\mathbb{F}_{q}\)-linear set, but not of \(2\)-minimum size. Moreover, \(\mathbb{F}_{q}\) is the maximum geometric field of linearity of \(L_{U}\)._ Proof.: By Theorem 5.1, \(L_{U}\) is an \(\mathbb{F}_{q}\)-linear set of \(\mathrm{PG}(2,q^{n})\) of rank \((k_{1}+1)t\) having size \[|L_{U}|=q^{(k_{1}+1)t-1}+q^{k_{1}t}+1,\] that is not of \(2\)-minimum size. Since \(L_{U_{2}}\) has a point of weight \(1\), there exists \(\varphi\in\mathrm{GL}(2,q^{t})\) such that the \(\mathbb{F}_{q}\)-linear set \(L_{U^{\prime}}\), with \(U^{\prime}=U_{1}\times\varphi(U_{2})\), has \(E_{2}\) as a point of weight \(1\). Hence \(\mathbb{F}_{q^{n}}^{3}/E_{2}\) can be identified with \(\mathbb{F}_{q^{n}}^{2}\) in an obvious way. Clearly, \(L_{U}\) and \(L_{U^{\prime}}\) are \(\mathrm{GL}(3,q^{n})\)-equivalent. In this way, \(U^{\prime}/E_{2}\) can be identified as an \(\mathbb{F}_{q}\)-subspace \(\overline{U}=U_{1}\times U_{2}^{\prime}\), where \(U_{2}^{\prime}\) is a \((t-1)\)-dimensional \(\mathbb{F}_{q}\)-subspace of \(\mathbb{F}_{q^{t}}\). Again, by Theorem 5.1, we have that \(|L_{\overline{U}}|=q^{k_{1}t}+1\) and hence \(L_{U}\) is a \((1,2)\)-minimum size \(\mathbb{F}_{q}\)-linear set. Suppose now that \(L_{U}=L_{W}\) for some \(\mathbb{F}_{q^{r}}\)-linear set \(L_{W}\). If \(1<r<t\), then by our hypothesis, \(r\) is coprime with \(s\) and \(t\), hence \(r\) is coprime with \(n=st\), and \(\mathbb{F}_{q^{r}}\) is not a subfield of \(\mathbb{F}_{q^{n}}\). Therefore, \(r\geq t\). Let \(\ell\) be the line of \(\mathrm{PG}(2,q^{n})\) having equation \(X_{0}=0\). Then \[q^{t-1}+1=|L_{U_{2}}|=|\ell\cap L_{U}|=|\ell\cap L_{W}|.\] Since \(\ell\cap L_{W}\) is an \(\mathbb{F}_{q^{r}}\)-linear set we have that \(|\ell\cap L_{W}|\geq q^{r}+1\). So \(t-1\geq r\), a contradiction. ## 6 Conclusion De Beule and Van de Voorde [8] provided a lower bound on the size of an \(\mathbb{F}_{q}\)-linear set \(L_{U}\) in \(\operatorname{PG}(d,q^{n})\) that intersects some hyperplane \(\Omega\) in a canonical subgeometry. In this paper, we generalized their result by allowing \(\Omega\) to be an \((r-1)\)-space, with \(1\leq r\leq d\).
Our bound reads \[|L_{U}|\geq q^{k-1}+\ldots+q^{k-r}+|L_{\overline{U}}|,\] where \(k\) is the rank of \(L_{U}\), and \(L_{\overline{U}}\) is the projection of \(L_{U}\) from \(\Omega\). Unfortunately, the bound still depends on the size of the linear set \(L_{\overline{U}}\). This raises the question of what the minimum size of \(L_{\overline{U}}\) is. Taking \(r\) as large as possible, we may assume that \(L_{\overline{U}}\) does not have any points of weight \(1\). We presented a recursive lower bound on the size of linear sets without points of weight \(1\). Since we recursively use a rather naive upper bound on the minimum weight of the points of \(L_{\overline{U}}\), one should not expect this bound to be tight, except for some particular cases. **Open problem 1**.: Find a good lower bound on the size of an \(\mathbb{F}_{q}\)-linear set of rank \(k-r\) spanning \(\operatorname{PG}(d-r,q^{n})\), containing no points of weight \(1\). The rest of the paper was concerned with finding examples of linear sets that attain equality in the bound. We note that all constructions but the one of Jena and Van de Voorde [11] use a subfield in between \(\mathbb{F}_{q}\) and \(\mathbb{F}_{q^{n}}\). So it looks like the most restrictive setting to study linear sets of minimum size is the case where such a subfield does not exist, i.e. the case where \(n\) is prime. We reiterate that for \(n\) prime, Jena and Van de Voorde [11, §2.5 (B)] stated their belief that the bound of Result 1.3 is still correct under less restrictive conditions. **Open problem 2**.: Is it true that if \(n\) is prime, all \(\mathbb{F}_{q}\)-linear sets \(L_{U}\) in \(\operatorname{PG}(d,q^{n})\) of rank \(k\leq d+n\) satisfy \(|L_{U}|\geq q^{k-1}+\ldots+q^{k-d}+1\)? If so, can we classify the linear sets attaining equality in this bound? ## Acknowledgment We would like to thank Jan De Beule, Olga Polverino and Ferdinando Zullo for fruitful discussions. Paolo Santonastaso is very grateful for the hospitality of the Department of Mathematics and Data Science, Vrije Universiteit Brussel, Brussels, Belgium, where he was a visiting PhD student for 2 months during the preparation of this paper. Paolo Santonastaso was supported by the project "VALERE: VAnviteLli pEr la RicErca" of the University of Campania "Luigi Vanvitelli" and by the Italian National Group for Algebraic and Geometric Structures and their Applications (GNSAGA - INdAM).
2306.11869
The Conditioning of Hybrid Variational Data Assimilation
In variational assimilation, the most probable state of a dynamical system under Gaussian assumptions for the prior and likelihood can be found by solving a least-squares minimization problem. In recent years, we have seen the popularity of hybrid variational data assimilation methods for Numerical Weather Prediction. In these methods, the prior error covariance matrix is a weighted sum of a climatological part and a flow-dependent ensemble part, the latter being rank deficient. The nonlinear least squares problem of variational data assimilation is solved using iterative numerical methods, and the condition number of the Hessian is a good proxy for the convergence behavior of such methods. In this paper, we study the conditioning of the least squares problem in a hybrid four-dimensional variational data assimilation (Hybrid 4D-Var) scheme by establishing bounds on the condition number of the Hessian. In particular, we consider the effect of the ensemble component of the prior covariance on the conditioning of the system. Numerical experiments show that the bounds obtained can be useful in predicting the behavior of the true condition number and the convergence speed of an iterative algorithm.
Shaerdan Shataer, Amos S. Lawless, Nancy K. Nichols
2023-06-20T20:02:53Z
http://arxiv.org/abs/2306.11869v1
# The Conditioning of Hybrid Variational Data Assimilation ###### Abstract In variational assimilation, the most probable state of a dynamical system under Gaussian assumptions for the prior and likelihood can be found by solving a least-squares minimization problem. In recent years, we have seen the popularity of hybrid variational data assimilation methods for Numerical Weather Prediction. In these methods, the prior error covariance matrix is a weighted sum of a climatological part and a flow-dependent ensemble part, the latter being rank deficient. The nonlinear least squares problem of variational data assimilation is solved using iterative numerical methods, and the condition number of the Hessian is a good proxy for the convergence behavior of such methods. In this paper, we study the conditioning of the least squares problem in a hybrid four-dimensional variational data assimilation (Hybrid 4D-Var) scheme by establishing bounds on the condition number of the Hessian. In particular, we consider the effect of the ensemble component of the prior covariance on the conditioning of the system. Numerical experiments show that the bounds obtained can be useful in predicting the behavior of the true condition number and the convergence speed of an iterative algorithm ## 1 Introduction In weather forecasting, we use mathematical and numerical models to describe the system dynamics of the ocean and atmosphere. These models are highly nonlinear and often sensitive to noise in initial conditions. Because of the nonlinearity and instability, random perturbations in the initial data and errors in the model amplify rapidly through time, producing unreliable predictions [1, 2]. In variational data assimilation, the goal is to find the maximum Bayesian a posteriori estimate of the system state from which to initialize the model. Major operational centres worldwide have adopted the four-dimensional variational assimilation scheme (4D-Var) for environmental forecasting in recent years. Similar applications arise in other fields such as physics, biology, and economics [1, 3, 4]. In 4D-Var, we aim to obtain an optimal initial state variable (conventionally called the 'analysis') by solving a nonlinear least squares problem, in which we try to find the best fit between a set of observations over a time window and an _a priori_ estimate of the state at the start of the window, known as the background. In the case where observations are only given at one time, the method becomes three-dimensional variational assimilation, or 3D-Var. We assume that the background state and observations have Gaussian, unbiased errors, with covariance matrices \(B\) and \(R\) respectively. Traditional variational data assimilation methods have used a climatological estimate[4] for the background error covariance matrix \(B\), where flow-dependent ensemble information is not incorporated into the system[5, 6]. Recent developments utilise a hybrid approach, in which an ensemble background error covariance matrix is estimated with ensemble members and then combined with the climatological part. This has a clear advantage of bringing in the variability of the system and updates the statistics in each prediction window, giving flow-dependent information that can improve the accuracy of predictions [4]. However, due to computational restrictions, the affordable number of ensemble members is normally small, which leads to a rank deficient ensemble background error covariance matrix. 
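This rank deficiency is easy to reproduce numerically. The following minimal sketch (with purely illustrative synthetic numbers, not an operational configuration) builds a sample covariance matrix from \(m<n\) ensemble members and confirms that its rank is at most \(m-1\):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 100, 20                 # state dimension and ensemble size, with m < n

# m ensemble members stored as the columns of an n x m array
ensemble = rng.standard_normal((n, m))
mean = ensemble.mean(axis=1, keepdims=True)
X_f = (ensemble - mean) / np.sqrt(m - 1)   # perturbation matrix

P_f = X_f @ X_f.T                          # ensemble covariance matrix, n x n
# Rank is at most m - 1 because the perturbations sum to zero
print(np.linalg.matrix_rank(P_f))          # prints 19, far below n = 100
```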
The method of combining the ensemble part with conventional 4D-Var is called Hybrid 4D-Var. In this method, \(B\) is given by \(B=(1-\beta)B_{0}+\beta P_{f}\). Here, \(B_{0},\ P_{f}\) are the climatological background error covariance and the ensemble error covariance matrix, and \((1-\beta),\ \beta\) are their weights, where \(\beta\) is a scalar. For the details of the hybrid method, we refer readers to the review paper of Bannister [4]. In this paper, we are especially interested in establishing the relationship between the conditioning of Hybrid 4D-Var and the weight \(\beta\) on \(P_{f}\). The 4D-Var problem is usually solved using iterative gradient methods, such as conjugate gradient or Quasi-Newton methods[1, 4]. The condition number of the Hessian can be used to estimate the number of iterations required for convergence[7, 8]. In addition, the condition number also reveals the sensitivity of the minimisation problem with respect to random noise[7]. Here we establish that, in Hybrid 4D-Var, a transition point exists where the condition number of \(B\) sharply increases with the weight on \(P_{f}\). Since \(P_{f}\) is rank deficient [1, 4, 5, 6], adding the ensemble background error covariance matrix may cause difficulties for solving the nonlinear least-squares minimisation problem [1, 4, 5, 6] and an adequate preconditioning scheme becomes desirable. This is typically achieved through a Control Variable Transformation (CVT) [1, 4]. CVT uses a decomposition of \(B_{0}\) and \(P_{f}\) to transform the state variable such that the conditioning of the Hessian matrix is improved [1, 4]. Implementation details of CVT are frequently described in previous works on 4D-Var[3, 5, 6, 9] and Hybrid 4D-Var[4]. In terms of practical applications of Hybrid 4D-Var, we note that the nonlinear least-squares problem is often linearised and solved as a sequence of linear least-squares minimisations. This is known as the incremental 4D-Var method [1, 3, 4] and is equivalent to an approximate Gauss-Newton method[10]. In practice, it is useful to understand the contribution of each component of DA, in such a way that the impact on the conditioning can be predicted when these components change. This then motivates a comprehensive theory that can predict the conditioning while separating the contribution of each error component \((B_{0},P_{f},R)\), and relating it to the parameters that characterize these components. We note that both preconditioned and unpreconditioned 4D-Var are implemented by major operational centres (such as the UK Met Office). Such theories have previously been constructed for conventional 4D-Var. Haben et al (2011)[3] established a theory to estimate the conditioning of a preconditioned 4D-Var system. This work was later developed by Tabeart et al (2018, 2021) [8, 9] for both unpreconditioned 3D-Var and preconditioned 4D-Var with Control Variable Transformation (CVT). In Tabeart et al's studies, bounds on the condition number of the system are proved, in which the contributions of the background error covariance matrix and observation error covariance matrix are separated. The impact of each component is then associated with its characterizing parameters. The results of Haben et al [3] and Tabeart et al [8, 9] were tested and analysed using small-scale examples; however, they have been extrapolated to explain observations in large-scale real-life applications.
To give a few examples, Mercier et al[11] used the analysis of Haben et al[3] to explain the convergence behaviour of a block Krylov method for a 3D-Var application[11]; Desroziers et al[12] cited the same result of Haben et al [3] to guide their design of a preconditioned Lanczos/Conjugate Gradient algorithm. Hatfield et al[13] cited Haben et al's analysis [3] to explain the effect of an increasing model error on the convergence of an incremental 4D-Var; Aabaribaoune et al [14] used the result of Tabeart et al[8] to analyse the convergence speed of a BFGS algorithm for solving a 3D-Var system, in the application of ozone profiling. In this paper, we aim to extend the previous studies of Haben et al [3] and Tabeart et al [8, 9] to a hybrid system. In particular, we study the impact of \(P_{f}\) on the condition number of the Hessian matrix, for both unpreconditioned cases and preconditioned cases with CVT. We outline this paper as follows. In section 2, we briefly formulate the problem of Hybrid 4D-Var and introduce the Hessian matrix; in section 3, we establish the theory of conditioning for unpreconditioned Hybrid 4D-Var and preconditioned Hybrid 4D-Var with CVT; in sections 4 to 6, we provide multiple numerical experiments to illustrate the theories and analyse the conditioning; in section 7, we show how the behaviour of the condition number of the Hessian predicted by our theory is reflected in the convergence speed of a conjugate gradient algorithm. Section 8 gives a general summary of the results. ## 2 Problem Formulation In this section we introduce the Hybrid 4D-Var method, the incremental method, the preconditioning technique of Control Variable Transformation, and the Hessian matrix associated with the unpreconditioned and preconditioned least-squares problems. ### A General Formulation of the Hybrid 4D-Var A general formulation of 4D-Var is given by Problem 1. **Problem 1**.: _Find the optimal initial state \(\mathbf{x_{a}}\) by solving the minimisation problem_ \[\begin{cases}&\mathbf{x_{a}}=\arg\min_{\mathbf{x_{0}}\in\mathbb{R}^{n}}\mathcal{J}(\bm {x_{0}}),\\ &\mathcal{J}(\mathbf{x_{0}}):=\frac{1}{2}||\mathbf{x_{0}}-\mathbf{x^{b}}||^{2}_{B^{-1}}+ \frac{1}{2}\sum_{i=0}^{N}||\mathbf{y_{i}}-\mathcal{H}_{i}(\mathbf{x_{i}})||^{2}_{R_{i} ^{-1}},\\ &\mathbf{x_{i}}\in\mathbb{R}^{n},\ \mathbf{y_{i}}\in\mathbb{R}^{p},\ B\in\mathbb{R}^{n,n },\ R_{i}\in\mathbb{R}^{p,p},\ \mathcal{H}_{i}:\mathbb{R}^{n}\to\mathbb{R}^{p},\end{cases} \tag{1}\] _where \(\mathcal{H}_{i}\) is the nonlinear observational operator; \(\mathbf{y_{i}}\) is the vector of observational measurements, taken at time \(t_{i}\); \(\mathbf{x_{i}}\) is the state variable at time \(t_{i}\), given by,_ \[\mathbf{x_{i}}=\mathcal{M}_{i,i-1}(\mathbf{x_{i-1}})=\mathcal{M}_{i,0}(\mathbf{x_{0}}), \tag{2}\] \(\mathcal{M}_{i,i-1}:\mathbb{R}^{n}\to\mathbb{R}^{n}\) _is the nonlinear model, and \(\mathcal{M}_{i,0}=\mathcal{M}_{i,i-1}\cdot\mathcal{M}_{i-1,i-2}\cdots\mathcal{ M}_{1,0}\) is a direct product (composition) of them. The vector \(\mathbf{x^{b}}\in\mathbb{R}^{n}\) is the prior information given by the model at \(t_{0}\), known as the background state. A simplified problem, given by 3D-Var, is a special case of the general 4D-Var problem where \(N=0\), meaning there are only observations at time \(t_{0}\)._ In Hybrid 4D-Var and 3D-Var, we follow the formulation given by (1), but replace the background error covariance matrix \(B\) with \(B=(1-\beta)B_{0}+\beta P_{f}\).
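To fix ideas, the following sketch evaluates the hybrid cost function of Problem 1 in the 3D-Var special case (\(N=0\)) with a linear observation operator; the function name and all inputs are illustrative assumptions, not part of any operational code:

```python
import numpy as np

def hybrid_3dvar_cost(x0, xb, y, H, B0, Pf, R, beta):
    """Evaluate J(x0) of Problem 1 with N = 0 and B = (1 - beta)*B0 + beta*Pf.

    Here H is a p x n matrix, i.e. a linear observation operator H(x) = H @ x.
    B^{-1} and R^{-1} are applied by solving linear systems rather than by
    forming explicit inverses."""
    B = (1.0 - beta) * B0 + beta * Pf
    db = x0 - xb                 # background departure
    do = y - H @ x0              # observation departure
    Jb = 0.5 * db @ np.linalg.solve(B, db)
    Jo = 0.5 * do @ np.linalg.solve(R, do)
    # Note: as beta -> 1, B inherits the rank deficiency of Pf and the
    # solve becomes ill-conditioned; this is the effect studied below.
    return Jb + Jo
```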
We note that the problem involves solving a nonlinear least squares problem, and it is difficult to implement directly when the problem is large scale. Instead, this is often replaced with a linearised incremental formulation, which we describe next. ### Incremental Hybrid 4D-Var, Control Variable Transformation and The Hessian Matrix In practice, especially in NWP, Problem 1 is often solved using the incremental method. In incremental 4D-Var, the nonlinear least squares problem is replaced with a sequence of linear least squares problems with a cost function of \[\mathcal{J}(\delta\mathbf{x}_{0}^{k})=\frac{1}{2}||\delta\mathbf{x}_{0}^{k}-\delta\bm {x}_{b}^{k}||_{B^{-1}}^{2}+\frac{1}{2}\sum_{i=0}^{N}||\mathbf{d}_{i}^{k}-H_{i} \delta\mathbf{x}_{i}^{k}||^{2}_{R_{i}^{-1}}, \tag{3}\] where the vector \(\mathbf{d}_{i}^{k}\) is known as the innovation, defined by \(\mathbf{d}_{i}^{k}=y_{i}-\mathcal{H}_{i}(\mathbf{x}_{i}^{k})\); the linear operator \(H_{i}\) is the Jacobian of \(\mathcal{H}_{i}\), and the vector \(\delta\mathbf{x}_{b}^{k}\) is the background increment, given by \(\delta\mathbf{x}_{b}^{k}=\mathbf{x}^{b}-\mathbf{x}_{0}^{k}\); the vector \(\delta\mathbf{x}_{i}^{k}\) is computed from \(\delta\mathbf{x}_{i}^{k}=M_{i,0}\delta\mathbf{x}_{0}^{k}\). The model operator \(M_{i,0}\) is given by \(M_{i,0}=\prod_{j=1}^{i}M_{j}\), where \(M_{j}\) is the Jacobian of \(\mathcal{M}_{j,j-1}\). At each outer iteration \(k\) (the outer loop), we minimise (3) to solve for the increment, and then the linearisation state \(\mathbf{x}_{0}^{k}\) is updated for the next iteration. We now introduce the Hessian matrix of the cost function \(\mathcal{J}\) in Problem 1, which is given by[3, 9], \[S_{4D}=((1-\beta)B_{0}+\beta P_{f})^{-1}+\sum_{i=0}^{N}(H_{i}M_{i,0})^{T}R_{i} ^{-1}H_{i}M_{i,0}.\] The sum of matrix products above can be written in a simple compact form. **Notation 1**.: _The matrix \(\hat{H}\) is the general observation operator, given by_ \[\hat{H}:=\left[H_{0}^{T},(H_{1}M_{1,0})^{T},\cdots,(H_{N}M_{N,0})^{T}\right]^ {T}\in\mathbb{R}^{p(N+1),n},\] _and \(\hat{R}\) is the block diagonal matrix with its \(i\)th diagonal block given by \(R_{i}\)._ Following Notation 1, we can rewrite the Hessian matrix as follows[3, 9], \[S_{4D}=((1-\beta)B_{0}+\beta P_{f})^{-1}+\hat{H}^{T}\hat{R}^{-1}\hat{H}. \tag{4}\] In the case of Hybrid 3D-Var, we recall that it is a special case of 4D-Var with \(N=0\) and its Hessian is given by \[S_{3D}=((1-\beta)B_{0}+\beta P_{f})^{-1}+H_{0}^{T}{R_{0}}^{-1}H_{0}. \tag{5}\] In applications, preconditioning techniques are often applied to improve the conditioning of the system. Control Variable Transformation (CVT) is one of the popular preconditioning techniques. The details of CVT are described by Nichols (2010) [1], Bannister (2017) [4] and Buehner (2005) [15]. In this approach, we utilise factorizations of \(B_{0}\) and \(P_{f}\), given by \[B_{0}=UU^{T},\ P_{f}=X_{f}X_{f}^{T},\ \text{where}\ U=B_{0}^{1/2} \ \text{and} \tag{6}\] \[X_{f}=\frac{1}{\sqrt{m-1}}\left[\mathbf{x}_{1}-\bar{\mathbf{x}},\mathbf{x}_ {2}-\bar{\mathbf{x}},\cdots,\mathbf{x}_{m}-\bar{\mathbf{x}}\right],\] and \(\mathbf{x}_{1},\mathbf{x}_{2},\cdots,\mathbf{x}_{m}\) are ensemble members, \(m\) is the number of samples and \(\bar{\mathbf{x}}\) is the ensemble mean. These matrices are then used to transform the state variable as follows [4], \[U_{h}\delta\mathbf{v}=\delta\mathbf{x},\ \text{where}\ U_{h}=\left[\sqrt{(1-\beta)}U \ \sqrt{\beta}X_{f}\right]\in\mathbb{R}^{n,n+m}.
\tag{7}\] Here \(\delta\mathbf{x}\in\mathbb{R}^{n}\) is the increment of the state variable and \(\delta\mathbf{v}\) is the increment of the control variable. Applying this transform to (3) leads to a new cost function that reads \[J(\delta\mathbf{v}_{0}^{k})=\frac{1}{2}||\delta\mathbf{v}_{0}^{k}-\delta \mathbf{v}_{b}^{k}||^{2}_{I_{n+m}}+\frac{1}{2}\sum_{i=0}^{N}||\mathbf{y}_{i}-\mathcal{ H}_{i}(\mathbf{x}_{i}^{k})-H_{i}M_{i,0}U_{h}\delta\mathbf{v}_{0}^{k}||^{2}_{R_{i}^{-1}}. \tag{8}\] A direct calculation then yields the Hessian of \(J(\delta\mathbf{v}_{0}^{k})\) with respect to \(\delta\mathbf{v}_{0}^{k}\) as \[S_{P4D}=I_{n+m}+U_{h}^{T}\hat{H}^{T}\hat{R}^{-1}\hat{H}U_{h}. \tag{9}\] In the following sections we will demonstrate that CVT prevents the condition number from going to infinity when \(\beta\) (the weight of the ensemble part) approaches \(1\). In addition, we also note that the adjoint of \(U_{h}\) does not need to be computed explicitly. This is discussed in detail by Smith et al[5] and Bannister[4]. ## 3 Theory of the conditioning of Hybrid 4D-Var To better analyse the convergence of the nonlinear least squares problem of Hybrid 4D-Var, we aim to establish a set of theories that can predict changes in the conditioning of the system prompted by varying parameters (such as correlation length scale, error variance, etc). Typically, we are also keen to understand the impact of the rank deficient ensemble part on the conditioning. These motivate us to develop an estimation of the condition number that is informative of the actual conditioning of the system. In order to achieve such a goal, we use spectral theories to establish bounds for the condition number of the Hessian matrix. We outline the structure of this section as follows. We start by introducing some pre-established results, then extend them to the hybrid method. We will discuss the unpreconditioned cases and the preconditioned cases separately. ### Eigenvalues and Conditioning We begin with a brief review of previous work by Tabeart et al [8] and some fundamental eigenvalue inequalities[16]. We will extend these results to Hybrid 4D-Var. In the scope of this paper we always assume that \(B\) and \(R\) are symmetric. **Notation 2**.: _Let \(\lambda_{k}(A)\) be the \(k\)th largest eigenvalue of a matrix \(A\in\mathbb{R}^{n,n}\), \(\lambda_{1}(A),\lambda_{n}(A)\) be the largest and smallest eigenvalues of \(A\), and \(\kappa(A)\) be the condition number of \(A\)._ **Definition 1**.: _For a symmetric positive definite matrix \(A\in\mathbb{R}^{n,n}\), its condition number is defined by \(\kappa(A)=\lambda_{1}(A)/\lambda_{n}(A)\)._ For the eigenvalues of the sum of two Hermitian matrices, H. Weyl (1912) [17] proved the following theorem: **Theorem 1**.: _Let \(A_{1},A_{2}\) be two symmetric matrices. Then the eigenvalues of \(A=A_{1}+A_{2}\) satisfy the following:_ \[\lambda_{k}(A_{1})+\lambda_{n}(A_{2})\leq\lambda_{k}(A)\leq \lambda_{k}(A_{1})+\lambda_{1}(A_{2}). \tag{10}\] The inequality for products is given by Wang and Zhang as follows[18], **Theorem 2**.: _Let \(A_{1},A_{2}\in R^{n,n}\) be positive semidefinite Hermitian matrices. Then_ \[\max\left[\lambda_{1}(A_{1})\lambda_{n}(A_{2}),\lambda_{n}(A_{1})\lambda_{1}(A_{ 2})\right]\leq\lambda_{1}(A_{1}A_{2})\leq\lambda_{1}(A_{1})\lambda_{1}(A_{2}).
\tag{11}\] Applying these results to the Hessian matrix of 3D-Var, Tabeart et al[8] established that, **Theorem 3**.: _Let \(S_{3D}=B^{-1}+H_{0}^{T}R_{0}^{-1}H_{0}\) and given that \(B,R\) are symmetric positive definite, the condition number of \(S_{3D}\) is bounded as follows,_ \[\begin{cases}&\kappa(S_{3D})\geq\max\left[\frac{\kappa(B)}{1+\lambda_{1}(B) \lambda_{1}(H_{0}^{T}R_{0}^{-1}H_{0})},\frac{1+\lambda_{1}(B)\lambda_{1}(H_{0 }^{T}R_{0}^{-1}H_{0})}{\kappa(B)}\right]\\ &\kappa(S_{3D})\leq(1+\lambda_{n}(B)\lambda_{1}(H_{0}^{T}R_{0}^{-1}H_{0})) \kappa(B).\end{cases} \tag{12}\] ### Conditioning of Unpreconditioned Hybrid 4D-Var In Hybrid 4D-Var, the background error covariance matrix \(B\) is replaced by a weighted sum of the static and ensemble background error covariance matrix, such that \(B=(1-\beta)B_{0}+\beta P_{f}\). Although we can still use (12) to estimate the condition number of the Hessian, we note that it does not have the capacity to separate the contribution of \(P_{f}\) and predict the condition number as the weight of \(P_{f}\) grows. This motivates us to establish theories that are specific to the hybrid case of 4D-Var. **Lemma 1**.: _Assume that \(P_{f}\) is rank deficient and let \(B=(1-\beta)B_{0}+\beta P_{f}\). Then the extreme eigenvalues of \(B\) satisfy the following:_ \[\begin{cases}\max\left[(1-\beta)\lambda_{1}(B_{0}),\ \beta\lambda_{1}(P_{f})+(1- \beta)\lambda_{n}(B_{0})\right]\leq\lambda_{1}(B)\leq(1-\beta)\lambda_{1}(B_{ 0})+\beta\lambda_{1}(P_{f}),\\ (1-\beta)\lambda_{n}(B_{0})\leq\lambda_{n}(B)\leq\min\left[(1-\beta)\lambda_{1 }(B_{0}),\ \beta\lambda_{1}(P_{f})+(1-\beta)\lambda_{n}(B_{0})\right].\end{cases} \tag{13}\] Proof.: The conclusion follows from Theorem 1 and that \(\lambda_{n}(P_{f})=0\) (since \(P_{f}\) is rank deficient). **Lemma 2**.: _Given \(B_{0},P_{f}\) are two symmetric matrices, let \(B=(1-\beta)B_{0}+\beta P_{f}\), and assuming that \(P_{f}\) is rank deficient, the condition number of \(B\) is then bounded as follows,_ \[\max\left[\frac{1}{\kappa(B_{0})}+\frac{\beta\lambda_{1}(P_{f})}{(1-\beta) \lambda_{1}(B_{0})},\left(\frac{1}{\kappa(B_{0})}+\frac{\beta\lambda_{1}(P_{f })}{(1-\beta)\lambda_{1}(B_{0})}\right)^{-1}\right]\leq\kappa(B)\leq\kappa(B_ {0})\left(1+\frac{\beta\lambda_{1}(P_{f})}{(1-\beta)\lambda_{1}(B_{0})} \right). \tag{14}\] Proof.: By the definition of \(\kappa(B)\) and Lemma 1, we find that, \[\kappa(B)=\frac{\lambda_{1}(B)}{\lambda_{n}(B)}\leq\frac{(1-\beta)\lambda_{1} (B_{0})+\beta\lambda_{1}(P_{f})}{(1-\beta)\lambda_{n}(B_{0})}=\kappa(B_{0})+ \frac{\beta\lambda_{1}(P_{f})}{(1-\beta)\lambda_{n}(B_{0})},\] and similarly, \[\kappa(B)\geq\max\left[\frac{(1-\beta)\lambda_{n}(B_{0})+\beta \lambda_{1}(P_{f})}{(1-\beta)\lambda_{1}(B_{0})},\frac{(1-\beta)\lambda_{1}(B _{0})}{(1-\beta)\lambda_{n}(B_{0})+\beta\lambda_{1}(P_{f})}\right]=\] \[\max\left[\frac{1}{\kappa(B_{0})}+\frac{\beta\lambda_{1}(P_{f})} {(1-\beta)\lambda_{1}(B_{0})},\left(\frac{1}{\kappa(B_{0})}+\frac{\beta \lambda_{1}(P_{f})}{(1-\beta)\lambda_{1}(B_{0})}\right)^{-1}\right].\] We highlight that the condition number of \(B\) diverges to infinity as \(\beta\to 1\). This is reflected by the lemma above as the lower bound of \(\kappa(B)\) goes to infinity. We now use these results to obtain a bound on the condition number in Theorem 4. We emphasize that the conclusion of these results cannot be directly derived from Theorem 3 by simply replacing \(B\) with \((1-\beta)B_{0}+\beta P_{f}\) in the upper and lower bounds. 
In fact, such an effort leads to a less sharp bound that does not provide analytical or theoretical insight on how the bound changes with \(\beta\). Moreover, this approach does not reveal how the bound responds to changes in physical parameters related to \(B_{0}\) and \(P_{f}\). Using different strategies leads to a bound that clearly separates the contribution of each component \((B_{0},P_{f})\) and their weights, making it possible to analyze their impact on the condition number and their interaction as well. **Notation 3**.: _For simplicity of presentation, we introduce \(\gamma_{z},\ \Gamma_{z}\) to represent the lower and upper bound of any \(z\in\mathbb{R}\)._ **Theorem 4**.: _Recall that \(S_{4D}=B^{-1}+\hat{H}^{T}\hat{R}^{-1}\hat{H}\), \(B=(1-\beta)B_{0}+\beta P_{f}\), and let,_ \[\Gamma_{\lambda_{n}(B)}=\min((1-\beta)\lambda_{n}(B_{0})+\beta \lambda_{1}(P_{f}),(1-\beta)\lambda_{1}(B_{0})),\] \[\gamma_{\kappa(B)}=\max\left[\frac{1}{\kappa(B_{0})}+\frac{\beta \lambda_{1}(P_{f})}{(1-\beta)\lambda_{1}(B_{0})},\left(\frac{1}{\kappa(B_{0}) }+\frac{\beta\lambda_{1}(P_{f})}{(1-\beta)\lambda_{1}(B_{0})}\right)^{-1} \right],\] \[\Gamma_{\kappa(B)}=\kappa(B_{0})\left(1+\frac{\beta\lambda_{1}(P _{f})}{(1-\beta)\lambda_{1}(B_{0})}\right).\] _Given that \(B,R\) are symmetric, the condition number of \(S_{4D}\) is then bounded as follows,_ \[\begin{cases}\kappa(S_{4D})\geq\max\left[\frac{1}{\Gamma_{\kappa(B)}}+(1- \beta)\lambda_{n}(B_{0})\lambda_{1}(\hat{H}^{T}\hat{R}^{-1}\hat{H}),\ \left(\frac{1}{\gamma_{\kappa(B)}}+\Gamma_{\lambda_{n}(B)}\lambda_{1}(\hat{H} ^{T}\hat{R}^{-1}\hat{H})\right)^{-1},\ 1\right]\\ \kappa(S_{4D})\leq\Gamma_{\kappa(B)}+\left((1-\beta)\lambda_{1}(B_{0})+\beta \lambda_{1}(P_{f})\right)\lambda_{1}(\hat{H}^{T}\hat{R}^{-1}\hat{H}).\end{cases} \tag{15}\] Proof.: In Theorem 3, replacing \(H_{0}^{T}R_{0}^{-1}H_{0}\) with \(\hat{H}^{T}\hat{R}^{-1}\hat{H}\), the left hand side of Inequality (12) can be written \[\max\left[\frac{1}{\kappa(B)}+\lambda_{n}(B)\lambda_{1}(\hat{H}^{T}\hat{R}^{- 1}\hat{H}),\ \left(\frac{1}{\kappa(B)}+\lambda_{n}(B)\lambda_{1}(\hat{H}^{T}\hat{R}^{-1} \hat{H})\right)^{-1}\right]\leq\kappa(S_{4D}).\] For the upper bound, we note that it can be written as follows, \[\kappa(S_{4D})\leq\kappa(B)+\lambda_{1}(B)\lambda_{1}(\hat{H}^{T}\hat{R}^{-1 }\hat{H}).\] Substituting the upper bounds for \(\kappa(B)\) and \(\lambda_{1}(B)\) then produces the upper bound on \(\kappa(S_{4D})\). We note that the same bound can be derived by using eigenvalue inequalities. Crucially, the upper bound in Theorem 4 does not require any explicit computation of \(S_{4D}\). We only need to compute the largest eigenvalues \(\lambda_{1}(B_{0}),\lambda_{1}(P_{f})\) and the condition number of \(B_{0}\). In the case of a special observation operator we can simplify the Hessian in the 3D-Var case as follows. **Corollary 1**.: _Assuming that each row of \(H_{0}\) has one unit entry, with all other entries being zero, and that \(R_{0}\) is diagonal such that \(R_{0}=\sigma_{R_{0}}^{2}I_{p}\), where \(I_{p}\in\mathbb{R}^{p,p}\) is the identity matrix, then the bounds in Theorem 4 in the case of 3D-Var simplify to:_ \[\max\left[\frac{1}{\Gamma_{\kappa(B)}}+\frac{(1-\beta)\lambda_{n}(B_{0})}{ \sigma_{R_{0}}^{2}},\ \left(\frac{1}{\gamma_{\kappa(B)}}+\frac{\Gamma_{\lambda_{n}(B)}}{\sigma_{R_{0 }}^{2}}\right)^{-1}\right]\leq\kappa(S_{3D})\leq\Gamma_{\kappa(B)}+\frac{(1- \beta)\lambda_{1}(B_{0})+\beta\lambda_{1}(P_{f})}{\sigma_{R_{0}}^{2}}.
\tag{16}\] Proof.: Given that \(R_{0}=\sigma_{R_{0}}^{2}I_{p}\), and with the assumption on \(H\), the largest eigenvalue of \(H_{0}^{T}R_{0}^{-1}H_{0}\) is always \(\lambda_{1}(H_{0}^{T}R_{0}^{-1}H_{0})=\sigma_{R_{0}}^{-2}\). Replacing the relevant terms in the bounds (15), we reach the conclusion. We emphasise that the upper bound on \(\kappa(S_{4D})\) diverges to infinity as \(\beta\to 1\). In such a case, the background error covariance matrix is dominated by \(P_{f}\). However, the theory does not predict the same behaviour for the lower bound. In terms of the observation part, we observe that the number \(p\) of observation points does not alter the bounds; thus the theory cannot predict the behaviour of \(\kappa(S_{4D})\) when \(p\) varies. Although unpreconditioned 4D-Var is still in use for real-life applications[8], it is now a common practice to implement CVT for Hybrid 4D-Var. In the next section, we focus on developing similar theories for such cases. ### Conditioning of Preconditioned Hybrid 4D-Var Method with CVT The preconditioning of Hybrid 4D-Var is broadly adopted in major operational centres, and the impact of different error components on the conditioning is important to determine the convergence of numerical schemes for incremental Hybrid 4D-Var, or more generally, for solving the nonlinear least squares problem in Hybrid 4D-Var. In this section we prove two versions of the bounds for \(\kappa(S_{P4D})\) with different strategies. First we remind readers of some useful notation: let \(K:=\hat{H}^{T}\hat{R}^{-1}\hat{H}\), \(U:=B_{0}^{1/2}\) and \(X_{f}:=\frac{1}{\sqrt{m-1}}\left[\mathbf{x}_{1}-\bar{\mathbf{x}},\mathbf{x}_{2}-\bar{\mathbf{ x}},\cdots,\mathbf{x}_{m}-\bar{\mathbf{x}}\right]\), where \(\mathbf{x}_{1},\mathbf{x}_{2},\cdots,\mathbf{x}_{m}\) is the set of ensemble members and \(\bar{\mathbf{x}}\) is the ensemble mean. We recall that the motivation behind our approach to deriving the bound is to separate the contribution of each component in the Hessian, particularly the components of \(B_{0}\) and \(P_{f}\) and their weights. We are especially keen to find out how the ratio of \((1-\beta)||B_{0}||_{2}/(\beta||P_{f}||_{2})\) influences the bound, as this ratio is linked to the balancing of the climatological part and the ensemble part. Driven by this motivation, we discovered a decomposition of the matrix product \(U_{h}^{T}\hat{H}^{T}\hat{R}^{-1}\hat{H}U_{h}\) that yields a useful max function which is directly related to this ratio. The detail is given by the proof of Theorem 5. **Theorem 5**.: _The condition number of the Hessian matrix of the preconditioned Hybrid 4D-Var method satisfies_ \[\begin{cases}\kappa(S_{P4D})\leq 1+\sqrt{(\beta-\beta^{2})\lambda_{1}(B_{0}) \lambda_{1}(P_{f})\lambda_{1}(K^{2})}+\max\left[(1-\beta)\lambda_{1}(B_{0}) \lambda_{1}(K),\ \beta\lambda_{1}(P_{f})\lambda_{1}(K)\right],\\ \kappa(S_{P4D})\geq 1+\max\left[(1-\beta)\lambda_{1}(K)\lambda_{n}(B_{0}),\ \sqrt{(\beta-\beta^{2})\lambda_{n}(B_{0})\lambda_{1}(P_{f})\lambda_{1}(K^{2}) }\right]\end{cases} \tag{17}\] Proof.: We know that the Hessian matrix of Hybrid 4D-Var with CVT can be expressed as follows, \[\begin{cases}&S_{P4D}=I_{n+m}+U_{h}^{T}\hat{H}^{T}\hat{R}^{-1}\hat{H}U_{h},\\ &U_{h}=\left[\sqrt{1-\beta}U\ \ \sqrt{\beta}X_{f}\right].\end{cases} \tag{18}\] Given that \(U_{h}^{T}\hat{H}^{T}\hat{R}^{-1}\hat{H}U_{h}\) is rank deficient (its rank is at most \(n<n+m\)), it is immediate that \[\kappa(S_{P4D})=1+\lambda_{1}(U_{h}^{T}\hat{H}^{T}\hat{R}^{-1}\hat{H}U_{h}).
\tag{19}\] Substituting \(U_{h}\) in \(S_{P4D}\), we derive \[S_{P4D} =I_{n+m}+\left(\begin{array}{c}\sqrt{1-\beta}U^{T}\\ \sqrt{\beta}X_{f}^{T}\end{array}\right)K\left(\begin{array}{cc}\sqrt{1-\beta }U&\sqrt{\beta}X_{f}\end{array}\right) \tag{20}\] \[=I_{n+m}+\left(\begin{array}{cc}(1-\beta)U^{T}KU&0\\ 0&\beta X_{f}^{T}KX_{f}\end{array}\right)+\left(\begin{array}{cc}0&\sqrt{ \beta-\beta^{2}}U^{T}KX_{f}\\ \sqrt{\beta-\beta^{2}}X_{f}^{T}KU&0\end{array}\right)\] (21) \[:=I_{n+m}+A_{1}+A_{2}. \tag{22}\] Since \(U_{h}^{T}\hat{H}^{T}\hat{R}^{-1}\hat{H}U_{h}=A_{1}+A_{2}\), using Theorem 1, we get \[\max\left[\lambda_{1}(A_{1})+\lambda_{n+m}(A_{2}),\ \lambda_{n+m}(A_{1})+ \lambda_{1}(A_{2})\right]\leq\lambda_{1}(U_{h}^{T}\hat{H}^{T}\hat{R}^{-1}\hat {H}U_{h})\leq\lambda_{1}(A_{1})+\lambda_{1}(A_{2}). \tag{23}\] We note that \[\lambda_{1}(U^{T}KU)=\lambda_{1}(B_{0}K),\lambda_{1}(X_{f}^{T}KX_{f})=\lambda_ {1}(P_{f}K),\lambda_{1}(U^{T}KX_{f}X_{f}^{T}KU)=\lambda_{1}(B_{0}P_{f}K^{2}). \tag{24}\] Then, applying Theorem 2 to the matrix products, we derive that \[\lambda_{1}(A_{1})=\max\left[(1-\beta)\lambda_{1}(U^{T}KU),\beta\lambda_{1}(X _{f}^{T}KX_{f})\right]\leq\max\left[(1-\beta)\lambda_{1}(B_{0})\lambda_{1}(K),\ \beta\lambda_{1}(P_{f})\lambda_{1}(K)\right], \tag{25}\] \[\lambda_{1}(A_{2})=\sqrt{\lambda_{1}\left[(\beta-\beta^{2})U^{T}KX_{f}X_{f}^{ T}KU\right]}\leq\sqrt{(\beta-\beta^{2})\lambda_{1}(B_{0})\lambda_{1}(P_{f}) \lambda_{1}(K^{2})}. \tag{26}\] So \[\kappa(S_{P4D})\leq 1+\max\left[(1-\beta)\lambda_{1}(B_{0})\lambda_{1}(K), \ \beta\lambda_{1}(P_{f})\lambda_{1}(K)\right]+\sqrt{(\beta-\beta^{2})\lambda_{1}(B _{0})\lambda_{1}(P_{f})\lambda_{1}(K^{2})}. \tag{27}\] In the lower bound, we use that \[\max\left[\lambda_{1}(A_{1}),\lambda_{1}(A_{2})\right]\leq\max\left[\lambda_{1 }(A_{1})+\lambda_{n+m}(A_{2}),\ \lambda_{n+m}(A_{1})+\lambda_{1}(A_{2})\right]\leq\lambda_{1}(U_{h}^{T}\hat{H}^ {T}\hat{R}^{-1}\hat{H}U_{h}). \tag{28}\] Applying Theorem 2 then yields (we note that \(K\) is rank deficient, such that \(\lambda_{n}(K)=0\)) \[\lambda_{1}(U^{T}KU)\geq\max\left[\lambda_{n}(B_{0})\lambda_{1}(K ),\lambda_{1}(B_{0})\lambda_{n}(K)\right]=\lambda_{n}(B_{0})\lambda_{1}(K), \tag{29}\] \[\lambda_{1}(X_{f}^{T}KX_{f})\geq\max\left[\lambda_{1}(P_{f}) \lambda_{n}(K),\ \lambda_{n}(P_{f})\lambda_{1}(K)\right]=0,\] (30) \[\lambda_{1}(U^{T}KX_{f}X_{f}^{T}KU)\geq\max\left[\lambda_{1}(B_{0 })\lambda_{1}(P_{f})\lambda_{n}(K^{2}),\lambda_{1}(B_{0})\lambda_{n}(P_{f}) \lambda_{1}(K^{2}),\lambda_{n}(B_{0})\lambda_{1}(P_{f})\lambda_{1}(K^{2}) \right]=\] \[\lambda_{n}(B_{0})\lambda_{1}(P_{f})\lambda_{1}(K^{2}) \tag{31}\] and giving us \[\lambda_{1}(A_{1})\geq(1-\beta)\lambda_{1}(K)\lambda_{n}(B_{0}),\ \lambda_{1}(A_{2})\geq\sqrt{(\beta-\beta^{2})\lambda_{n}(B_{0})\lambda_{1}(P_{f} )\lambda_{1}(K^{2})}. \tag{32}\] By substituting \(\max\left[\lambda_{1}(A_{1}),\lambda_{1}(A_{2})\right]\) into (28) we conclude that \[\kappa(S_{P4D})\geq 1+\max\left[(1-\beta)\lambda_{1}(K)\lambda_{n}(B_{0}), \sqrt{(\beta-\beta^{2})\lambda_{n}(B_{0})\lambda_{1}(P_{f})\lambda_{1}(K^{2}) }\right]. \tag{33}\] Another version of the bounds can be derived by directly applying Theorem 2 to the product \(U_{h}^{T}KU_{h}\) instead of using the decomposition of Theorem 5.
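Before stating this alternative bound, we remark that Theorem 5 is straightforward to sanity-check numerically. The sketch below uses synthetic matrices only (all sizes and names are illustrative assumptions); it verifies the identity (19) and the upper bound of (17):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, p, beta = 40, 10, 15, 0.5

def spd(k):
    # random symmetric positive definite matrix
    A = rng.standard_normal((k, k))
    return A @ A.T + k * np.eye(k)

B0, R = spd(n), spd(p)
Xf = rng.standard_normal((n, m)) / np.sqrt(m - 1)   # ensemble perturbation matrix
H = rng.standard_normal((p, n))
K = H.T @ np.linalg.solve(R, H)                     # K = H^T R^{-1} H (PSD)

w, V = np.linalg.eigh(B0)
U = V @ np.diag(np.sqrt(w)) @ V.T                   # U = B0^{1/2}
Uh = np.hstack([np.sqrt(1 - beta) * U, np.sqrt(beta) * Xf])

M = Uh.T @ K @ Uh                                   # rank <= n < n + m
ev = np.linalg.eigvalsh(np.eye(n + m) + M)          # spectrum of S_P4D, eq. (9)
kappa = ev[-1] / ev[0]                              # true condition number

lmax = lambda X: np.linalg.eigvalsh(X)[-1]
print(np.isclose(kappa, 1 + lmax(M)))               # identity (19): True
upper = (1 + np.sqrt((beta - beta**2) * lmax(B0) * lmax(Xf @ Xf.T) * lmax(K @ K))
           + max((1 - beta) * lmax(B0) * lmax(K), beta * lmax(Xf @ Xf.T) * lmax(K)))
print(kappa <= upper)                               # upper bound of (17): True
```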
**Theorem 6**.: _Let \(S_{P4D}\) be the Hessian matrix of Hybrid 4D-Var with CVT, then the condition number of \(S_{P4D}\) satisfies_ \[1\leq\kappa(S_{P4D})\leq 1+\left[(1-\beta)\lambda_{1}(B_{0})+\beta\lambda_{1}(P_{ f})\right]\lambda_{1}(K). \tag{34}\] Proof.: We note that the non-zero eigenvalues satisfy \[\lambda(U_{h}^{T}KU_{h})=\lambda(U_{h}U_{h}^{T}K)=\lambda(BK), \tag{35}\] since \(U_{h}U_{h}^{T}=(1-\beta)B_{0}+\beta P_{f}=B\). Applying the eigenvalue inequality of the product, and using the symmetry of \(B\), we find that \[\lambda_{1}(BK)\leq\lambda_{1}(B)\lambda_{1}(K). \tag{36}\] Furthermore, the eigenvalue inequality of the sum yields \[\lambda_{1}(B)\leq(1-\beta)\lambda_{1}(B_{0})+\beta\lambda_{1}(P_ {f}). \tag{37}\] Applying (36) and (37) directly to (19), we obtain that \[\kappa(S_{P4D})\leq 1+\left[(1-\beta)\lambda_{1}(B_{0})+\beta\lambda_{1}(P _{f})\right]\lambda_{1}(K). \tag{38}\] For the lower bound, we note that applying Theorem 2 to \(\lambda_{1}(BK)\) only yields \[\lambda_{1}(BK)\geq\max\left[\lambda_{1}(B)\lambda_{n}(K),\lambda_{n}(B)\lambda_{1}(K)\right]\geq 0.\] In terms of the effectiveness of the preconditioning, we note that the upper bounds given by Theorem 5 and Theorem 6 are not controlled by the condition number of \(B\), but only by the largest eigenvalues of \(B_{0},P_{f}\) and \(K^{2}\). This is different from the upper bound of the unpreconditioned case. In addition, in the unpreconditioned case, \(B=P_{f}\) at \(\beta=1\), indicating that \(B\) becomes closer to a singular matrix as \(\beta\) approaches \(1\). In such a case, the theory predicts that the condition number of \(S_{4D}\) diverges to infinity as \(\beta\to 1\). By implementing CVT, this divergence is eliminated. On the other hand, we note that the upper bound given by Theorem 5 is distinctively different from all the other versions, including the ones derived for the unpreconditioned cases. Namely, the upper bound of Theorem 5 contains a max function that selects the larger of \((1-\beta)\lambda_{1}(B_{0})\) and \(\beta\lambda_{1}(P_{f})\). We can thus anticipate a switching point of the upper bound as a function of \(\beta\), which does not exist in the other versions of the upper bound. In fact, this switching point provides useful information about the behaviour of \(\kappa(S_{P4D})\) with respect to varying \(\beta\). We will illustrate this with numerical examples in Section 4. Similar to the unpreconditioned scenario, we find that these bounds do not reflect any contribution of the number \(p\) of observation points, which means that they cannot predict the trend of \(\kappa(S_{P4D})\) with respect to a changing \(p\). ## 4 Experimental design Recalling the motivation of the theoretical studies in Section 3, we note that the purpose of the bounds is to provide useful information on the actual condition number of the Hessian without having to compute the Hessian explicitly. However, for the bounds to be informative, we require two properties of them: the estimate given by these bounds should be close to the condition number, and the bounds should change with \(\beta\) in the same way as the condition number does. The first property ensures that we can use the bound directly to estimate the condition number in practice; the second property guarantees that the bound is useful in analyzing the impact of the ensemble part on the condition number. To demonstrate the theories we use the special case of 3D-Var.
### Computing Error Covariance Matrices and the Observational Matrix In this section, we give details of computing the error covariance matrices \(B_{0},P_{f},R\) and the observational matrix \(H\) on a one-dimensional grid. **Notation 4**.: _Let \(L_{0},L_{ens}\) denote the correlation length scales associated with \(B_{0},P_{f}\), and \(\sigma_{R_{0}}^{2},\ \sigma_{B_{0}}^{2},\ \sigma_{P_{f}}^{2}\) be the variances of \(R,B_{0},P_{f}\). Let \(r\) denote the radius of a two-dimensional disk \(\mathcal{B}^{2}(0,r)\), \(\theta\) be the angular location of a point on the boundary of \(\mathcal{B}^{2}(0,r)\) and \(\theta_{i,j}\) be the angular difference between two grid points on the boundary of \(\mathcal{B}^{2}(0,r)\)._ To simulate real-life applications, we use the Second-order Auto-regressive Correlation (SOAR) function to generate \(B_{0}\), with a correlation length scale of \(L_{0}\). The SOAR function is used by the UK Met Office for Numerical Weather Prediction (NWP)[8, 19]. It is also frequently used in other DA applications[20, 21]. Following the formulation discussed in Tabeart et al. (2018)[8], we can compute the SOAR matrix from the following formula[8, 22], \[D_{L}(i,j)=\left[1+2rL^{-1}\sin\left(\frac{\theta_{i,j}}{2}\right)\right]\exp \left[-2rL^{-1}\sin\left(\frac{\theta_{i,j}}{2}\right)\right], \tag{39}\] where \(L\) is a correlation length scale. Applying (39), we can obtain \(B_{0}\) directly by replacing \(L\) with \(L_{0}\), and \(\theta_{i,j}\) with \(2\pi(i-j)/n\). Here we assume that the grid discretizing the boundary of \(\mathcal{B}^{2}(0,r)\) is uniform and we set \(r=1\). The resulting background error covariance matrix is then given by, \[B_{0}=\sigma_{B_{0}}^{2}D_{L_{0}}. \tag{40}\] To produce the ensemble part \(P_{f}\), we sample from a different SOAR matrix \(B_{1}\) associated with a length scale of \(L_{ens}\), such that, \[B_{1}=\sigma_{P_{f}}^{2}D_{L_{ens}}. \tag{41}\] To obtain the covariance matrix \(P_{f}\) sampled from \(B_{1}\), we generate a set of random vectors \(\mathbf{w}_{k}\sim\mathcal{N}(0,1)\), then compute the sample covariance matrix of the set \(\{B_{1}^{1/2}\mathbf{w}_{1},B_{1}^{1/2}\mathbf{w}_{2},\cdots,B_{1}^{1/2}\mathbf{w}_{m}\}\). As the size of the set, denoted by \(m\), is restricted to \(m<n\), \(P_{f}\) is then guaranteed to be rank deficient. For the observational error covariance matrix \(R\), we simply choose \[R=\sigma_{R_{0}}^{2}I_{p}, \tag{42}\] where \(I_{p}\in\mathbb{R}^{p,p}\) is the identity matrix. In terms of the observational operator, we consider four different types in our experiments. They are given as follows, \[H^{(1)}(i,j)=\left\{\begin{array}{rl}1,&i=j,\ \text{for}\ i=1\to p,\\ 0,&\text{otherwise}.\end{array}\right.;\ \ H^{(3)}(i,j)=\left\{\begin{array}{rl}1/5,&j=\frac{n}{p}i-2\to\frac{n}{p}i+2\ (\mathrm{mod}\ n),\ \text{for}\ i=1\to p,\\ 0,&\text{otherwise}.\end{array}\right.; \tag{43}\] \[H^{(2)}(i,j)=\left\{\begin{array}{rl}1,&j=\frac{n}{p}i,\ \text{for}\ i=1\to p,\\ 0,&\text{otherwise}.\end{array}\right.;\ H^{(4)}(i,j)=\left\{\begin{array}{rl}1,&i,j\ \text{chosen randomly and non-repeating},\\ 0,&\text{otherwise}.\end{array}\right.\] The choice of \(H^{(1)}\) corresponds to observing the first \(p\) points of the domain, \(H^{(2)}\) observes every \(n/p\)-th point, \(H^{(3)}\) is an observation which is a weighted sum over 5 grid points and \(H^{(4)}\) chooses observations at random locations on the grid. We note that \(H^{(1)},H^{(2)}\) and \(H^{(4)}\) satisfy the condition of Corollary 1, while \(H^{(3)}\) does not.
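The construction above translates directly into a few lines of NumPy. The following is a minimal sketch (the function names are ours, not from the references; we use the SOAR form \((1+d)e^{-d}\) with \(d=2rL^{-1}\sin(\theta_{i,j}/2)\) as in (39)):

```python
import numpy as np

def soar_matrix(n, L, sigma2, r=1.0):
    """SOAR covariance on n uniformly spaced points on the boundary of a
    disk of radius r; theta_{i,j} = 2*pi*(i-j)/n as in Eq. (39)-(40)."""
    i = np.arange(n)
    theta = 2.0 * np.pi * np.abs(i[:, None] - i[None, :]) / n
    d = 2.0 * r * np.sin(theta / 2.0) / L
    return sigma2 * (1.0 + d) * np.exp(-d)

def ensemble_covariance(B1, m, rng):
    """Sample m members x_k = B1^{1/2} w_k with w_k ~ N(0, I) and return the
    sample covariance P_f (rank deficient since m < n)."""
    n = B1.shape[0]
    vals, vecs = np.linalg.eigh(B1)                       # B1 is symmetric PSD
    B1_half = vecs @ np.diag(np.sqrt(np.clip(vals, 0.0, None))) @ vecs.T
    X = B1_half @ rng.standard_normal((n, m))
    Xc = (X - X.mean(axis=1, keepdims=True)) / np.sqrt(m - 1)
    return Xc @ Xc.T

def observation_operator(n, p, kind=1):
    """H^(1): observe the first p points; otherwise every (n/p)-th point."""
    H = np.zeros((p, n))
    if kind == 1:
        H[np.arange(p), np.arange(p)] = 1.0
    else:
        H[np.arange(p), (np.arange(1, p + 1) * (n // p)) % n] = 1.0
    return H
```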
In all of the experiments, we fix the value of \(n\) to be \(500\) so that the computation is small-scale and similar to previous studies[3, 8], and therefore the results can be compared. ### A comparison of different bounds We note that four different versions of the bound have been presented in this paper, i.e., Theorems 3 and 4 for the unpreconditioned cases, and Theorems 5 and 6 for the preconditioned cases with CVT. Before further investigating the details of these results, we want to determine which versions are the most effective (in terms of revealing the trend of the condition number itself and estimating it with a small error). Considering the unpreconditioned cases, Theorem 4 has the apparent advantage of separating the impact of \(P_{f}\). However, through repeated experiments we observe that in most cases the lower bound in Theorem 4 remains close to \(1\); hence it is not effective. On the other hand, the lower bound provided by Theorem 3 produces a better result. Still, it is not close enough to the actual condition number to be a good estimate. In Figure 1 we show typical examples of the variation of these bounds with \(\beta\). Figure 1(a) shows that, for the unpreconditioned case, as \(\beta\) increases towards \(1\), the upper bound captures the shape of \(\kappa(S_{3D})\). The divergence of \(\kappa(S_{3D})\) to infinity is captured by both versions of the upper bounds. The upper bounds computed from Theorems 3 and 4 do not differ significantly. This is a general result for all choices of parameters we have used to conduct numerical case studies. Based on these observations, we decide to focus only on the upper bound from here onward. In the preconditioned case, Figure 1(b), we note that the condition number is drastically reduced by the CVT compared to the unpreconditioned case. We observe an inflection point in the upper bound of Theorem 5. This coincides with the transition point of the condition number (from decreasing to increasing with \(\beta\)) and so predicts the minimum of \(\kappa(S_{3D})\). The bound produced by Theorem 6 does not capture such behaviour, while not being significantly closer to the actual condition number either. These results are valid for all different choices of matrices and parameters. Therefore we focus our study on the bounds produced by Theorem 4 and Theorem 5 only, because of their superiority in separating the impact of different matrices and capturing the shape of the actual condition number (see Figure 1(b)). We note that the condition numbers are plotted in \(\log_{10}\) scale in this paper. ## 5 Case Studies for Unpreconditioned 3D-Var In this section, we illustrate the upper bound given by Theorem 4 with multiple case studies. In these case studies we change parameters associated with \(B_{0},P_{f},\hat{H},\hat{R}\) and observe the responses of the condition number of \(S\) and its upper bound. In particular, we are especially interested in the relationship between the conditioning of the system and the weight of the ensemble part (i.e., \(\beta\)). We note again that the case studies in the following sections use 3D-Var as a special case of 4D-Var to demonstrate the theories. Therefore, \(\hat{H}\) and \(\hat{R}\) are replaced with \(H_{0}\) and \(R_{0}\), respectively. Additionally, we will denote the Hessian as \(S_{3D}\) and consider it as a special case of \(S_{4D}\).
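To make the comparison concrete, the sketch below (reusing the helpers from the previous listing, with \(\sigma_{R_{0}}=1\) so that \(K=H_{0}^{T}R_{0}^{-1}H_{0}=H_{0}^{T}H_{0}\)) evaluates \(\kappa(S_{P3D})\) and the Theorem 5 upper bound over a grid of \(\beta\) values, mirroring what Figure 1(b) displays; it is an illustration of the quantities involved, not the full experimental pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, m = 500, 100, 50
B0 = soar_matrix(n, L=0.1, sigma2=1.0)                          # climatological part
Pf = ensemble_covariance(soar_matrix(n, L=0.05, sigma2=1.0), m, rng)
K = (H := observation_operator(n, p, kind=2)).T @ H             # H_0^T R_0^{-1} H_0

l1B0 = np.linalg.eigvalsh(B0)[-1]
l1Pf = np.linalg.eigvalsh(Pf)[-1]
l1K = np.linalg.eigvalsh(K)[-1]

for beta in np.linspace(0.05, 0.95, 10):
    B = (1 - beta) * B0 + beta * Pf
    # kappa(S_P3D) = 1 + lambda_1(U_h^T K U_h); the non-zero spectrum of
    # U_h^T K U_h equals that of B^{1/2} K B^{1/2}, since U_h U_h^T = B
    vals, vecs = np.linalg.eigh(B)
    Bh = (vecs * np.sqrt(np.clip(vals, 0.0, None))) @ vecs.T
    kappa_p = 1 + np.linalg.eigvalsh(Bh @ K @ Bh)[-1]
    # Theorem 5 upper bound; note sqrt(lambda_1(K^2)) = lambda_1(K) for PSD K
    ub = (1 + np.sqrt((beta - beta**2) * l1B0 * l1Pf) * l1K
            + max((1 - beta) * l1B0 * l1K, beta * l1Pf * l1K))
    print(f"beta={beta:.2f}  kappa(S_P3D)={kappa_p:.3e}  Thm 5 bound={ub:.3e}")
```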
As the ensemble error covariance matrix is rank deficient, we can expect the balance of \(B_{0}\) and \(P_{f}\) to be particularly important for the conditioning of the system. For example, when \(P_{f}\) is the dominant component of the Hessian matrix \(S_{3D}\), the condition number of \(S_{3D}\) is likely to be large and the system ill-conditioned. In addition, as \(\beta\to 1\), the Hessian matrix formally tends to \(S_{3D}\to P_{f}^{-1}+H_{0}^{T}R_{0}^{-1}H_{0}\); this limit is clearly ill-defined as \(P_{f}^{-1}\) does not exist. Meanwhile, the balance of \(B_{0}\) and \(P_{f}\) is also controlled by the relative sizes of the physical length scales \(L_{0},L_{ens}\) and the variances \(\sigma_{B_{0}},\sigma_{P_{f}}\). Recalling the upper bound in Theorem 4, it is predominantly controlled by the largest eigenvalues of \(B_{0}\) and \(P_{f}\). Meanwhile, the largest eigenvalues of \(B_{0}\) and \(P_{f}\) vary with their correlation length scales[3]. As Figure 2 shows, the largest eigenvalue of \(B_{0}\) increases with the correlation length scale; the general trend is similar for \(P_{f}\) except for random fluctuations caused by the sampling noise. Figure 2: The largest eigenvalues of \(B_{0}\) and \(P_{f}\) as functions of the correlation length scale. Consequently, we expect \(\kappa(S_{3D})\) and its bound to change when the physical length scales \((L_{0},L_{ens})\) alter. As an important justification of the effectiveness of the upper bound, we observe that the shape of the upper bound is similar to the condition number in all four case studies presented in Figure 3. We also note that in this set of experiments the impact of \(L_{ens}\) on the conditioning and the bounds is less significant than that of \(L_{0}\) (compare Figure 3a,3b). In Figure 3c, we observe that the trend of the condition number with respect to \(\sigma_{P_{f}}\) is well captured by the upper bounds. Also, in Figure 3d, we observe that the upper bound does reveal the general trend of the condition number with respect to \(\sigma_{B_{0}}\), but the upper bound does not reflect the significant reduction of the condition number at \(\beta\sim 0\). The reason for this discrepancy is unknown. It is an important finding that the upper bound starts to sharply increase and diverge to infinity at about the same \(\beta\) as the condition number of the Hessian. This indicates that the bound is informative from a numerical point of view, e.g., it informs a constraint on \(\beta\) if one seeks to avoid a sudden deterioration of the conditioning of the system in Hybrid 4D-Var. On the other hand, we also note that in the three cases of changing \(L_{0},\sigma_{B_{0}}\) and \(\sigma_{P_{f}}\), the upper bounds reflect the direction of change in the condition number of the Hessian. Thus it shows promising results that the upper bound can be used to qualitatively analyse the conditioning of the system when these parameters alter with time. Furthermore, the upper bound is two orders of magnitude above the condition number. We find similar conclusions for the cases of changing \(\sigma_{R_{0}}\), the observation operator \(H_{0}\) and the number of observations \(p\) (see Figure 4). We note that the upper bound for a fixed \(B_{0}\) and \(P_{f}\) does not change insofar as the largest eigenvalue of \(H_{0}^{T}R_{0}^{-1}H_{0}\) stays the same (see Theorem 4).
Comparing \(\lambda_{1}(H_{0}^{T}R_{0}^{-1}H_{0})\) for the different versions of \(H_{0}\), we find that \(H^{(1)},H^{(2)}\) and \(H^{(4)}\) have the same largest eigenvalue. It is slightly smaller for \(H^{(3)}\), but not significantly so. We also point out that previous investigations of 4D-Var [3, 8] indicate that a small change in \(\sigma_{R_{0}}^{2}\) does not change the bound noticeably. Thus we anticipate that the upper bound remains similar for the different versions of \(H_{0}\). This is confirmed by the cases in Figure 4(b). Meanwhile, the condition number of the Hessian is the largest for the choice of \(H_{0}=H^{(1)}\), while choosing \(H_{0}=H^{(2)}\) or \(H_{0}=H^{(3)}\) produces a similar result and they yield the best conditioning of the Hessian; the condition number associated with the randomized version \(H_{0}=H^{(4)}\) lies between those of \(H^{(1)}\) and \(H^{(2)},H^{(3)}\). Thus the observation indicates that evenly distributed observation points produce better conditioning of the system, while partial observations concentrated in a small region lead to the opposite. Bound (16) also implies that changing the number of observations leaves the upper bound unchanged (as the case study in Figure 4(c) confirms). Lastly, we verified that the upper bound remains effective when the number of ensemble members changes. We tested cases where the number of ensemble members increases from 50 to 400. In all these cases, we find that the upper bounds have similar shapes to the condition number, and they provide a good estimation of the value of the condition number. ## 6 Numerical experiments for the preconditioned Hybrid 3D-Var with CVT Following the case studies for the unpreconditioned cases, we now conduct similar experiments to illustrate Theorem 5 and examine its predictions for the preconditioned cases with CVT. In the experiments we focus on three major issues: firstly, we validate the correctness of the bound; secondly, we illustrate that the bound reflects the behaviour of the conditioning in general (with a few exceptions); lastly, we make a comparison to the unpreconditioned cases. We note that each error covariance matrix and observation operator is constructed as in Section 4. An immediate observation from Theorem 5 is that the switching point in the upper bound shifts with the relative sizes of \(\lambda_{1}(B_{0})\) and \(\lambda_{1}(P_{f})\). More specifically, the max function in the upper bound in (17) switches at a larger value of \(\beta\) when \(\lambda_{1}(B_{0})\) increases, and the opposite occurs when \(\lambda_{1}(P_{f})\) increases. Furthermore, we know that \(\lambda_{1}(B_{0})\) and \(\lambda_{1}(P_{f})\) become larger as \(L_{0}\) and \(L_{ens}\) increase (see Figure 2), and the same is true for increasing \(\sigma_{B_{0}},\sigma_{P_{f}}\). As Figure 5 shows, the results of these four case studies justify these observations, and the switching point of the upper bound also predicts the minimum of the condition number. On the other hand, as \(\lambda_{1}(B_{0}),\lambda_{1}(P_{f})\) increase with \(L_{0},L_{ens},\sigma_{B_{0}},\sigma_{P_{f}}\), we anticipate that the upper bound increases with these parameters. This is confirmed by the case studies (see Figure 5). Crucially, we find that the upper bound predicts the trend of the condition number of the Hessian with respect to \(\beta\) in all four cases.
For example, in Figure 5a, we observe that the inflection points of the upper bounds predict the minima of the condition numbers, and they both move rightward when \(L_{0}\) increases. In Figure 5b, we find that the inflection points of the upper bounds and the minima of the condition numbers move leftwards when \(L_{ens}\) increases. In Figure 5c, the inflection points of the upper bounds and the minima of the condition numbers move rightwards when \(\sigma_{B_{0}}\) increases, and in Figure 5d we find that the trend of the condition number with respect to \(\sigma_{P_{f}}\) is well captured by the upper bounds; the inflection points of the upper bounds and the minima of the condition numbers move leftwards when \(\sigma_{P_{f}}\) increases. Figure 5: Same settings as Figure 3, but for the CVT preconditioned cases. However, we want to point out that the changes in the conditioning are very limited in these preconditioned cases and are normally less than one order of magnitude. Furthermore, compared to the unpreconditioned case, these results show a clear improvement of the conditioning in Hybrid 4D-Var with CVT (we note that this is also found in standard 4D-Var in previous work[9]). We observe that CVT leads to a reduction of up to six orders of magnitude in the condition number of the Hessian. We also note that in all cases presented for the preconditioned Hybrid 4D-Var, the condition number of the Hessian does not diverge to infinity at \(\beta=1\); this is a crucial difference from the unpreconditioned Hybrid 4D-Var. Theorem 5 also shows that the upper bound takes a larger value when the largest eigenvalue of \(K=H_{0}^{T}R_{0}^{-1}H_{0}\) is bigger. Furthermore, since \(\sigma_{R_{0}}^{-2}\) is a scaling factor of \(K\), we can anticipate that the upper bound grows with a decreasing \(\sigma_{R_{0}}^{2}\). The case presented in Figure 6a not only justifies this observation but also shows that the condition number itself follows the same trend. On the impact of choosing different versions of \(H_{0}\), the trend is similar to the unpreconditioned case; we find that evenly spreading the observations across the whole domain leads to better conditioning, while concentrating observations in a local region results in worse conditioning. We note that the upper bound remains the same for \(H_{0}=H^{(1)},H^{(2)}\) and \(H^{(4)}\) (see Figure 6b, 6c). This is because \(K=H_{0}^{T}R_{0}^{-1}H_{0}\) shares the same largest eigenvalue for these three versions, whereas \(H^{(3)}\) produces a smaller largest eigenvalue for \(K\), and therefore a smaller upper bound. On the other hand, the impact of the number \(p\) of observations on the condition number is opposite to that of the unpreconditioned case (this is similar to previous reports for 4D-Var [8, 9]), but the bound estimation does not reflect this trend, as the largest eigenvalue of \(K\) does not change with \(p\). We note that in both the unpreconditioned and preconditioned cases, the upper bounds cannot predict the impact of \(p\) on the conditioning of the system. The effect of sampling noise in \(P_{f}\) is similar to the unpreconditioned case: its impact on the conditioning and on the upper bound is insignificant. We find that this is true even with a small sample size.
As an important justification of the effectiveness of the upper bound given by Theorem 5, we observe that the changes of the upper bound with respect to \(\beta\) provide valuable information about the transition of the condition number (from decreasing to increasing); the inflection point of the upper bound predicts the minimum of the condition number. This is particularly important because it informs an optimal choice of \(\beta\) from the numerical perspective of obtaining the best conditioning of the system. We can therefore conclude that the upper bound is useful for providing qualitative information about the actual conditioning of the system. Figure 6: Same settings as Figure 4, but for the CVT preconditioned cases. ## 7 Convergence Test of a Conjugate Gradient Routine We note that there are well-known situations where the condition number provides a pessimistic indication of convergence speed (e.g. in the case of repeated or clustered eigenvalues)[9]. Here we use hybrid 3D-Var as a special case of hybrid 4D-Var to illustrate that for hybrid variational assimilation the convergence speed follows a similar trend to the condition number as the weight of the ensemble part increases. Following a similar method to Section 5.3.2 of Tabeart et al.[8], we study how the speed of convergence of a conjugate gradient method applied to the linear system \(S_{3D}\mathbf{x}=\mathbf{b}\) changes with the weight \(\beta\) of the ensemble part. In the first test, the matrix \(S_{3D}\) is given by the Hessian of the unpreconditioned 3D-Var (Section 5), and the vector \(\mathbf{b}\) is given by Haben[3] in Section 3.2 (\(\mathbf{b}=B^{-1}(\mathbf{x}_{b}-\mathbf{x}_{0})-H_{0}^{T}\mathbf{d}\)), where the vectors \(\mathbf{x}_{b}-\mathbf{x}_{0}\) and \(\mathbf{d}\) are chosen at random at the beginning of the trial. For the computation of \(S_{3D}\), we choose the parameters as follows: \(L_{0}=0.1,L_{ens}=0.05,\sigma_{B_{0}}=\sigma_{P_{f}}=\sigma_{R_{0}}=1,p=100\) and \(H_{0}=H^{(4)}\), which are in line with our previous case studies. As Figure 7 shows, the condition number of the Hessian shows the same trend as the number of iterations performed to reach the tolerance threshold. We can observe that the conditioning is a good proxy for the convergence speed in this case. Figure 7: Convergence test for an unpreconditioned case. Figure (a) displays the condition number of \(S_{3D}\) and its upper bound. The parameters used to compute \(S_{3D}\) are \(L_{0}=0.1,L_{ens}=0.05,\sigma_{B_{0}}=\sigma_{P_{f}}=\sigma_{R_{0}}=1,p=100\) and we choose \(H_{0}=H^{(4)}\). Figure (b) displays the number of iterations.
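A minimal sketch of this convergence test is given below (we hand-roll the conjugate gradient loop to avoid library-version dependences; \(B_{0}\), \(P_{f}\) and \(K\) are assembled as in the earlier listings, and the random right-hand side is a stand-in for the \(\mathbf{b}\) defined above):

```python
import numpy as np

def cg_iterations(S, b, tol=1e-6, maxiter=5000):
    """Plain conjugate gradient on S x = b; returns the number of iterations
    needed to reduce the residual norm below tol * ||b||."""
    x = np.zeros_like(b)
    r = b - S @ x
    p = r.copy()
    rs = r @ r
    for k in range(1, maxiter + 1):
        Sp = S @ p
        alpha = rs / (p @ Sp)
        x += alpha * p
        r -= alpha * Sp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol * np.linalg.norm(b):
            return k
        p = r + (rs_new / rs) * p
        rs = rs_new
    return maxiter

rng = np.random.default_rng(1)
for beta in (0.1, 0.5, 0.9, 0.99):
    B = (1 - beta) * B0 + beta * Pf            # B0, Pf, K as assembled earlier
    S3D = np.linalg.inv(B) + K                 # unpreconditioned Hessian
    b = rng.standard_normal(S3D.shape[0])
    print(f"beta={beta}: {cg_iterations(S3D, b)} CG iterations")
```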
For the preconditioned case with CVT, shown in Figure 8, we find that the difference in the condition number as \(\beta\) varies is marginal relative to the unpreconditioned case. One of the reasons for this is that the Hessian has more eigenvalues clustered around 1, which is a result of the CVT. However, the number of iterations taken to converge still follows the general pattern of the condition number, with higher values when \(\beta\) is close to 0 or 1. Finally, we conducted experiments with stopping criteria of different orders of magnitude to investigate the impact on the convergence speed trend. For \(\epsilon>10^{-6}\) there is very little variation in the number of iterations as \(\beta\) changes. For values of \(\epsilon\) smaller than \(10^{-6}\), the number of iterations increases slightly, but the trend in Figure 8(b) remains consistent. Figure 8: Convergence test for a case preconditioned with CVT. Figure (a) displays the condition number of \(S_{P3D}\) (as a special case of \(S_{P4D}\)) and its upper bound. The parameters used to compute \(S_{P3D}\) are \(L_{0}=0.1,L_{ens}=0.05,\sigma_{B_{0}}=\sigma_{P_{f}}=\sigma_{R_{0}}=1\), \(p=100\) and we choose \(H_{0}=H^{(4)}\). Figure (b) displays the number of iterations. ## 8 Summary In this paper, we established a set of theories for the conditioning of Hybrid 4D-Var. These theories provide effective upper bounds for the condition number of the Hessian. The theoretical results are illustrated by numerical case studies using the special case of Hybrid 3D-Var. In the numerical experiments we verified that the upper bounds have similar shapes to the condition number with respect to the weight of the ensemble part (i.e. \(\beta\)). Thus they can provide a useful estimation of the behaviour of the condition number of the Hessian. In addition, the upper bound enabled us to study the condition number through parameters associated with different components and observe their interactions. We conclude that the upper bound is effective in explaining the conditioning of the Hessian matrix in general. However, the lower bound does not provide useful information about the condition number in any of the cases studied. We summarize our findings as follows, * For the unpreconditioned cases, the condition number of the Hessian increases gradually with \(\beta\) at first and then quickly diverges to infinity as \(\beta\to 1\). This transition is predicted by the upper bounds. * For the preconditioned cases with CVT, a general trend is that the condition number decreases at first and then increases as \(\beta\) increases. There is an optimal \(\beta\) at which the condition number is at its minimum. The upper bound predicts the optimal \(\beta\) effectively. * The preconditioning with CVT improves the conditioning drastically and eliminates the divergence of the condition number of the Hessian at \(\beta=1\). * For preconditioned cases, the upper bound changes in the same direction as the condition number of the Hessian with respect to changes in \(L_{0},L_{ens},\sigma_{B_{0}},\sigma_{P_{f}}\) and \(\sigma_{R_{0}}\). In unpreconditioned cases, we have similar conclusions for \(L_{0},\sigma_{B_{0}}\) and \(\sigma_{P_{f}}\). In preconditioned cases, we find that the upper bound reveals the trend of the condition number with respect to changing parameters such as the correlation length scale and the variance. However, the theories in Section 3 cannot explain the impact of the four different choices of \(H_{0}\) that we presented in this paper. Furthermore, the bounds do not change with the number of observations \(p\) or the sample size \(m\), although \(p\) does directly influence the actual condition number. Meanwhile, the tests show that these theories do provide useful predictions on the impact of the balancing of variances and correlation length scales for the preconditioned cases. In unpreconditioned cases, the upper bound can predict well the influence of \(L_{0}\) and \(\sigma_{P_{f}}\). The bounds can inform the impact of these components on the convergence of iterative numerical algorithms. It is well-known that the condition number of the Hessian matrix is a useful proxy to study the convergence speed of the least-squares minimisation of Hybrid 4D-Var.
The results presented in this paper could then inform applications in terms of the restriction of the weight of the ensemble part (for the unpreconditioned cases), such that extreme ill-conditioning can be avoided. For the preconditioned Hybrid 4D-Var with CVT, we established a theory that effectively predicts an optimal weight of the ensemble part such that the conditioning is optimal. ## Acknowledgements This work is funded by the UK Engineering and Physical Sciences Research Council, grant number EP/V061828/1, and in part by the NERC National Centre for Earth Observation.
2305.08868
Exact Spatio-Temporal Dynamics of Lattice Random Walks in Hexagonal and Honeycomb Domains
A variety of transport processes in natural and man-made systems are intrinsically random. To model their stochasticity, lattice random walks have been employed for a long time, mainly by considering Cartesian lattices. However, in many applications in bounded space the geometry of the domain may have profound effects on the dynamics and ought to be accounted for. We consider here the cases of the six-neighbour (hexagonal) and three-neighbour (honeycomb) lattice, which are utilised in models ranging from adatoms diffusing in metals and excitations diffusing on single-walled carbon nanotubes to animal foraging strategy and the formation of territories in scent-marking organisms. In these and other examples, the main theoretical tool to study the dynamics of lattice random walks in hexagonal geometries has been via simulations. Analytic representations have in most cases been inaccessible, in particular in bounded hexagons, given the complicated zig-zag boundary conditions that a walker is subject to. Here we generalise the method of images to hexagonal geometries and obtain closed-form expressions for the occupation probability, the so-called propagator, for lattice random walks both on hexagonal and honeycomb lattices with periodic, reflective and absorbing boundary conditions. In the periodic case, we identify two possible choices of image placement and their corresponding propagators. Using them, we construct the exact propagators for the other boundary conditions, and we derive transport related statistical quantities such as first passage probabilities to one or multiple targets and their means, elucidating the effect of the boundary condition on transport properties.
Daniel Marris, Seeralan Sarvaharman, Luca Giuggioli
2023-05-11T12:36:54Z
http://arxiv.org/abs/2305.08868v1
# Exact Spatio-Temporal Dynamics of Lattice Random Walks in Hexagonal and Honeycomb Domains ###### Abstract A variety of transport processes in natural and man-made systems are intrinsically random. To model their stochasticity, lattice random walks have been employed for a long time, mainly by considering Cartesian lattices. However, in many applications in bounded space the geometry of the domain may have profound effects on the dynamics and ought to be accounted for. We consider here the cases of the six-neighbour (hexagonal) and three-neighbour (honeycomb) lattice, which are utilised in models ranging from adatoms diffusing in metals and excitations diffusing on single-walled carbon nanotubes to animal foraging strategy and the formation of territories in scent-marking organisms. In these and other examples, the main theoretical tool to study the dynamics of lattice random walks in hexagonal geometries has been via simulations. Analytic representations have in most cases been inaccessible, in particular in bounded hexagons, given the complicated 'zig-zag' boundary conditions that a walker is subject to. Here we generalise the method of images to hexagonal geometries and obtain closed-form expressions for the occupation probability, the so-called propagator, for lattice random walks both on hexagonal and honeycomb lattices with periodic, reflective and absorbing boundary conditions. In the periodic case, we identify two possible choices of image placement and their corresponding propagators. Using them, we construct the exact propagators for the other boundary conditions, and we derive transport related statistical quantities such as first passage probabilities to one or multiple targets and their means, elucidating the effect of the boundary condition on transport properties. ## I Introduction Popularised in the 1920s by Polya [1], lattice random walks (LRW) are widely used in the mathematics [2] and physics [3; 4; 5] literature. Owing to their versatility as a special class of Markov chains, one finds applications of LRW across a multitude of disciplines including animal ecology [6; 7], cell biology [8], actuarial science [9] and social sciences [10]. While analytic representations of the dynamics of LRW have been studied for a long time, recent advances in the exact description of their dynamics in finite \(d\)-dimensional hypercubic lattices [11; 12; 13; 14] have brought renewed interest. Following these advances, many computationally challenging problems such as transmission and encounter dynamics between LRW pairs [15] and the dynamics of interactions with inert spatial heterogeneities can now be tackled analytically [16]. These developments are, however, limited to Cartesian lattices with little attention given to other geometries. In \(d=2\) dimensions, two such important cases are the hexagonal and the honeycomb lattices, often found to be used interchangeably in the literature [17; 18; 19]. To avoid any confusion, in the present work we refer to a hexagonal lattice when each site has six nearest-neighbours and to a honeycomb lattice when each site has three nearest-neighbours. Random walks on both lattices have been employed to study many stochastic processes.
The hexagonal lattice has been used to represent adatom diffusion in metals [20], space usage and foraging of animals [21], territory formation in scent-marking organisms [22; 23], and substrate diffusion across artificial tissue [24], while the honeycomb LRW has been utilised for particle movement in ice and graphite [25] and the diffusion of excitons on a single-walled carbon nanotube (SWCN) [26; 27]. Both lattices are also of interest in the context of self-avoiding walks [28; 29]. For both lattices, when unbounded, the walk statistics have been studied by mapping the dynamics onto a square lattice. To model a hexagonal walk, two of the eight permissible movement directions in a next nearest-neighbour walk are removed [19; 30], while a brick-like structure of positive and negative sites is created to represent the honeycomb LRW [4; 31]. Another approach, applicable to the honeycomb lattice, models the domain as a bipartite network allowing the construction of the propagator generating function for an unbounded walker that always moves [18]. For many of the applications stated earlier, the movement statistics are heavily affected by the size of the underlying spatial domain. In the literature, analytical attempts to take into consideration the finiteness of the space have been rare, largely due to the non-orthogonal configuration of lattice sites, which leads to complex 'zig-zag' boundaries. One example, which aims to account for the dynamics at the boundary, considers a four-walled domain with two opposing walls made up of absorbing sites and the other two walls representing a periodic domain. Via the use of a technique to solve inhomogeneous linear partial difference equations [32], analytic expressions for the probability that the walk reaches a given lattice site before getting absorbed have been obtained [33]. To represent faithfully the finiteness of the hexagonal space, we introduce here a framework for the analytic representation of the spatio-temporal dynamics of LRW in hexagonal geometries with true 'zig-zag' boundaries. We utilise a non-orthogonal hexagonal coordinate system [34; 35] for both lattices and model the honeycomb lattice via the inclusion of internal states [4; 36]. By deriving an extension of the method of images [37] to hexagonal space we find closed-form expressions for the propagator of periodically bounded random walks. We generalise the defect technique [15; 16; 38] to hexagonal space and random walks with internal states and obtain analytically the propagator generating function for LRW with both absorbing and reflecting boundary conditions. Various transport properties in both lattices are also analysed. The outline of the paper is as follows. In Sec. II we introduce the coordinate system used to parameterise the lattices. Section III is devoted to the analysis of the unbounded lattice Green's function or propagator for both the hexagonal and honeycomb random walk. In Sec. IV, closed-form expressions for the propagator in two representations of periodically bounded domains are obtained. In Secs. V and VI, we derive the propagators for absorbing and reflecting domains, respectively. Transport properties are studied in Sec. VII where we present the analytic representation of the first passage probability, or first-hitting time, to a target, while in Sec. VIII we derive closed-form expressions for the mean first passage time to one target and employ it to study the mean first passage to multiple targets.
## II Hexagonal coordinate system A convenient coordinate system for hexagonal lattices, designed for application in computer graphics [34; 35], is given by three linearly dependent integer coordinates \((n_{1},n_{2},n_{3})\) such that \(n_{1}+n_{2}+n_{3}=0\). One can represent these coordinates on two different axis sets: an oblique plane in \(\mathbb{R}^{3}\) or three axes lying 60 degrees apart in \(\mathbb{R}^{2}\). We take the latter, whose visual representation can be found in Fig. 1(a), and we refer to the coordinate system as hexagonal cubic coordinates (HCC). The HCC system is related to \(\mathbb{R}^{2}\) Cartesian coordinates \((x_{1},x_{2})\) via the transformation [39] \[n_{1}=-\frac{x_{2}}{2}+\frac{\sqrt{3}x_{1}}{2};\ \ n_{2}=x_{2};\ \ n_{3}=-\frac{x_{2}}{2}-\frac{ \sqrt{3}x_{1}}{2}, \tag{1}\] where \(n_{1},n_{2},n_{3}\in\mathbb{Z}\) and \(x_{1},x_{2}\in\mathbb{R}\), which enables convenient plotting. ### Finite Hexagonal Lattice We model permissible jumps to neighbouring sites as taking place between the centroid of the so-called Wigner-Seitz (WS) cell and those of its six neighbours, shown in Fig. 1(a), giving a coordination number \(Z=6\). We also allow for the option of remaining on the lattice site at each timestep. The size of the finite domain is controlled by the single parameter \(R\), the circumradius of the hexagon. The corners lie at \((n_{1},n_{2},n_{3})=(\pm R,0,\mp R)\), \((n_{1},n_{2},n_{3})=(\pm R,\mp R,0)\), and \((n_{1},n_{2},n_{3})=(0,\pm R,\mp R)\). ### Finite Honeycomb Lattice The honeycomb lattice has a coordination number \(Z=3\) and each lattice site can be thought of as a vertex of the hexagonal WS cell, making the honeycomb lattice the dual of the hexagonal lattice [40]. We model this lattice as a tessellation of even (\(\rhd\)) and odd (\(\sphericalangle\)) triangles by utilising the HCC and including internal states. To avoid confusion, we refer to the hexagonal WS cell, with coordinates \((n_{1},n_{2},n_{3})\), as a lattice site, which contains six internal states labelled \(m_{i}\), \(i=1,\ldots,6\), starting from the top left triangle and going clockwise round the unit cell, shown in Fig. 1(b). At each timestep the walker has four permissible actions: to remain on the same lattice site and state, to move to either of two adjacent states in the same site or to move to an adjacent state in an adjacent site. The number of locations the walker can reach in the honeycomb lattice is six times bigger than in the hexagonal lattice with the same circumradius. ## III Dynamics in unbounded space With later sections exploiting the analytic representation of the occupation probability for the infinite lattice to construct bounded propagators, we show here the procedure to obtain the unbounded case. ### Hexagonal Lattice The Master equation governing the evolution of the site occupation probability, \(Q(n_{1},n_{2},n_{3},t)\), for the unbounded hexagonal lattice is represented by \[\begin{split} Q(& n_{1},n_{2},n_{3},t+1)=\frac{q}{6} \bigg{[}Q(n_{1}-1,n_{2},n_{3}+1,t)+Q(n_{1},n_{2}-1,n_{3}+1,t)+Q(n_{1}+1,n_{2}-1, n_{3},t)\\ &+Q(n_{1}+1,n_{2},n_{3}-1,t)+Q(n_{1},n_{2}+1,n_{3}-1,t)+Q(n_{1}-1,n_{2}+1,n_{3},t) \bigg{]}+(1-q)Q(n_{1},n_{2},n_{3},t),\end{split} \tag{2}\] where \(q\) (\(0<q\leq 1\)) determines the probability of movement, that is, \(q=1\) represents a walker changing lattice site at every timestep.
Eq. (2) is subject to the initial condition \(Q(n_{1},n_{2},n_{3},0)=\delta_{n_{1}n_{0_{1}}}\delta_{n_{2}n_{0_{2}}}\delta_{n_{3}n_{0_{3}}}\), where \(\delta_{ij}\) is the Kronecker delta and \(n_{0_{1}}+n_{0_{2}}+n_{0_{3}}=0\). While Eq. (2) is well suited for the infinite lattice, in the bounded cases the linear relationship between the coordinates, \(n_{3}=-n_{1}-n_{2}\), makes it necessary (see Appendix B) to drop the \(n_{3}\) dependence and re-write the Master equation with a two-coordinate representation given by \[\begin{split} Q(n_{1},n_{2},t+1)=\frac{q}{6}\bigg{[}Q(n_{1}-1,n_ {2},t)+Q(n_{1},n_{2}-1,t)+Q(n_{1}+1,n_{2}-1,t)+\\ Q(n_{1}+1,n_{2},t)+Q(n_{1},n_{2}+1,t)+Q(n_{1}-1,n_{2}+1,t)\bigg{]} +(1-q)Q(n_{1},n_{2},t).\end{split} \tag{3}\] After applying the discrete Fourier transform \(\widehat{f}(k)=\sum_{n=-\infty}^{\infty}e^{-ikn}f(n)\) and the unilateral \(z\)-transform \(\widetilde{f}(z)=\sum_{t=0}^{\infty}z^{t}f(t)\), we solve Eq. (3) to obtain the hexagonal lattice Green's function as a double integral \[\widetilde{Q}_{\mathbf{n}_{0}}(n_{1},n_{2},z)=\frac{1}{(2\pi)^{2}}\int_{-\pi}^{ \pi}\int_{-\pi}^{\pi}\frac{e^{i[(\mathbf{n}-\mathbf{n}_{0})\cdot\mathbf{k}]}}{1-z\mu(k_{1 },k_{2})}dk_{1}dk_{2}, \tag{4}\] where \(\mathbf{k}=(k_{1},k_{2})^{\intercal}\), \(\mathbf{n}-\mathbf{n}_{0}=(n_{1}-n_{0_{1}},n_{2}-n_{0_{2}})\), and \(\mu(k_{1},k_{2})=1-q+\frac{q}{3}\left[\cos(k_{1}-k_{2})+\cos(k_{1})+\cos(k_{2 })\right]\) is the so-called structure function, or discrete Fourier transform of the individual step probabilities [3]. Equation (4) reduces to known results when \(q=1\)[19].
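We remark in passing that Eq. (3) is straightforward to iterate numerically. The sketch below (array-based, with the walker started at the centre of an array padded widely enough that no probability reaches the edge within the simulated window) performs the time-stepping and checks conservation of probability:

```python
import numpy as np

def step_hex(Q, q):
    """One update of Eq. (3) on an unbounded hexagonal lattice; Q is indexed
    by the HCC coordinates (n1, n2)."""
    P = (np.roll(Q, 1, 0) + np.roll(Q, -1, 0)       # from (n1-1, n2), (n1+1, n2)
         + np.roll(Q, 1, 1) + np.roll(Q, -1, 1)     # from (n1, n2-1), (n1, n2+1)
         + np.roll(np.roll(Q, -1, 0), 1, 1)         # from (n1+1, n2-1)
         + np.roll(np.roll(Q, 1, 0), -1, 1))        # from (n1-1, n2+1)
    return q * P / 6.0 + (1.0 - q) * Q

T, q = 50, 0.9
size = 2 * T + 5                                    # pad so np.roll never wraps mass
Q = np.zeros((size, size))
Q[size // 2, size // 2] = 1.0                       # delta initial condition
for _ in range(T):
    Q = step_hex(Q, q)
assert abs(Q.sum() - 1.0) < 1e-12                   # probability is conserved
```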
Figure 1: (Colour online). A schematic representation of the hexagonal lattice, panel (a), and the honeycomb lattice, panel (b). In (a), we show the HCC in \(\mathbb{R}^{2}\) with three non-orthogonal axes. Coordinate labels are shown on some lattice sites, while permissible movement directions for the hexagonal lattice are shown on others with dotted lines. For clarity, we omit arrows depicting the option of staying on lattice sites. In (b) we show the honeycomb lattice. Here we show the \((0,0,0)\) Wigner-Seitz cell, half of its six neighbours and their labelled internal states. Permissible movement directions are again shown through dotted lines and similarly to panel (a), arrows representing the option of remaining on any site are removed. The boundaries of the hexagonal Wigner-Seitz cell are shown in bolder lines. ### Honeycomb Lattice The general form of the Master equation for a LRW with internal states has the vectorial form [41; 4] \[\mathbf{\mathcal{Q}}(\mathbf{n},t+1)=\sum_{\mathbf{n}^{\prime}}\mathbb{W}(\mathbf{n},\mathbf{n}^{ \prime})\mathbf{\mathcal{Q}}(\mathbf{n}^{\prime},t), \tag{5}\] where \(\mathbb{W}(\mathbf{n},\mathbf{n}^{\prime})\) represents all possible movement from \(\mathbf{n}^{\prime}\) to site \(\mathbf{n}\) at each moment in time and \(\mathbf{\mathcal{Q}}(\mathbf{n}^{\prime},t)\) is a column vector representing the occupation probability of each internal state in site \(\mathbf{n}^{\prime}\) at time \(t\). For the honeycomb lattice, Eq. (5) reduces to \[\begin{split}&\mathbf{\mathcal{Q}}(n_{1},n_{2},t+1)=\frac{q}{3}\bigg{[} \mathbb{A}_{1,4}\cdot\mathbf{\mathcal{Q}}(n_{1}-1,n_{2},t)+\mathbb{A}_{2,5}\cdot \mathbf{\mathcal{Q}}(n_{1},n_{2}-1,t)+\mathbb{A}_{3,6}\cdot\mathbf{\mathcal{Q}}(n_{1} +1,n_{2}-1,t)+\\ &\mathbb{A}_{4,1}\cdot\mathbf{\mathcal{Q}}(n_{1}+1,n_{2},t)+\mathbb{ A}_{5,2}\cdot\mathbf{\mathcal{Q}}(n_{1},n_{2}+1,t)+\mathbb{A}_{6,3}\cdot\mathbf{ \mathcal{Q}}(n_{1}-1,n_{2}+1,t)\bigg{]}+\mathbb{B}\cdot\mathbf{\mathcal{Q}}(n_{1},n_{2},t),\end{split} \tag{6}\] where \(\mathbb{A}_{i,j}\) is a \(6\times 6\) matrix with value one at index \(i,j\) that represents the movement from state \(i\) in one WS cell to state \(j\) in a new WS cell, as depicted in Fig. 1(b). \(\mathbb{B}\), which represents the movement within one WS cell, is a tridiagonal Toeplitz matrix with perturbed corners where \(\mathbb{B}_{i,i}=1-q\) with \(1\leq i\leq 6\), \(\mathbb{B}_{i+1,i}=\mathbb{B}_{i,i+1}=\frac{q}{3}\) with \(1\leq i\leq 5\), \(\mathbb{B}_{1,6}=\mathbb{B}_{6,1}=\frac{q}{3}\) and zero elsewhere. Taking the localised initial condition \(\mathbf{\mathcal{Q}}(n_{1},n_{2},t=0)=\delta_{\mathbf{n}\mathbf{n}_{0}}\mathbf{U}_{m_{0}}\), where \(\mathbf{U}_{m_{0}}\) is a \(6\times 1\) column vector with element \(m_{0}\) equal to one and the rest equal to zero, and following standard techniques for random walks with internal states (see e.g. [4]), we obtain the generating function of the unbounded propagator \[\widetilde{\mathbf{\mathcal{Q}}}_{\mathbf{n}_{0},m_{0}}(n_{1},n_{2},z)=\frac{1}{(2\pi )^{2}}\int_{-\pi}^{\pi}\int_{-\pi}^{\pi}e^{i[(\mathbf{n}-\mathbf{n}_{0})\cdot\mathbf{k}]} \left[\mathbb{I}-z\mathbf{\mu}(k_{1},k_{2})\right]^{-1}\cdot\mathbf{U}_{m_{0}}dk_{1} dk_{2}, \tag{7}\] where \(\mathbb{I}\) is the \(6\times 6\) identity matrix and the structure function \[\mathbf{\mu}(k_{1},k_{2})= \tag{8}\] \[\begin{bmatrix}1-q&\frac{q}{3}&0&\frac{q}{3}e^{-ik_{1}}&0&\frac{q }{3}\\ \frac{q}{3}&1-q&\frac{q}{3}&0&\frac{q}{3}e^{-ik_{2}}&0\\ 0&\frac{q}{3}&1-q&\frac{q}{3}&0&\frac{q}{3}e^{i(k_{1}-k_{2})}\\ \frac{q}{3}e^{ik_{1}}&0&\frac{q}{3}&1-q&\frac{q}{3}&0\\ 0&\frac{q}{3}e^{ik_{2}}&0&\frac{q}{3}&1-q&\frac{q}{3}\\ \frac{q}{3}&0&\frac{q}{3}e^{-i(k_{1}-k_{2})}&0&\frac{q}{3}&1-q\end{bmatrix}.\] To access the probability at a unique state \(m\) one simply takes the scalar dot product \(\widetilde{\mathcal{Q}}_{\mathbf{n}_{0},m_{0}}(n_{1},n_{2},m,z)=\mathbf{U}_{m}^{\intercal} \cdot\widetilde{\mathbf{\mathcal{Q}}}_{\mathbf{n}_{0},m_{0}}(n_{1},n_{2},z)\). ## IV Periodic boundary conditions To impose periodic boundary conditions, we generalise the method of images [37] to hexagonal domains. The technique represents a convenient way to impose boundary conditions on Green's functions. To implement the technique for fully bounded domains, one considers an infinite set of initial conditions, tessellated across the space, which act concurrently by mirroring the walker's movement. To tessellate hexagonal lattices in 2-dimensional space, there are two choices for the placement of the neighbouring domains due to the 'zig-zag' nature of the boundaries, which differ depending on whether the images are located with a so-called left or right shift in relation to one of the axes. To illustrate, let us consider the hexagon directly above the chosen finite domain.
If the bottom right corner of the neighbouring domain is to the right of the top right corner of the modelled domain, it is referred to as the right shift tessellation; otherwise it is to the left, and it is referred to as the left shift tessellation (see Appendix A for a pictorial representation). Using either tessellation, we construct an infinite number of images of the initial condition and obtain the bounded periodic propagator \[\widetilde{P}_{\mathbf{n}_{0}}^{(p)}(n_{1},n_{2},z)=\sum_{m_{1}=-\infty}^{\infty} \sum_{m_{2}=-\infty}^{\infty}\widetilde{Q}_{\mathbf{n}_{0}}(n_{1}+\hat{n}_{1},n_{2} +\hat{n}_{2},z), \tag{9}\] built by considering the appropriate coordinate transform from a location in the finite hexagonal domain to the equivalent location in any of the infinite neighbouring domains. For the right shift we find \[\begin{bmatrix}\hat{n}_{1}\\ \hat{n}_{2}\end{bmatrix}=\begin{bmatrix}-Rm_{1}+(2R+1)m_{2}\\ -(R+1)m_{1}-Rm_{2}\end{bmatrix}, \tag{10}\] and for the left shift \[\begin{bmatrix}\hat{n}_{1}\\ \hat{n}_{2}\end{bmatrix}=\begin{bmatrix}(2R+1)m_{1}-Rm_{2}\\ -(R+1)m_{1}+(2R+1)m_{2}\end{bmatrix}, \tag{11}\] where \(m_{1},m_{2}\in\mathbb{Z}\). ### Hexagonal Lattice Applying Eq. (4) in Eq. (9) and interchanging the order of integration and summation, as is permissible in generalised function theory [4], one solves (see Appendix B) for the periodic propagator \[\begin{split} P_{\mathbf{n}_{0}}^{(p)_{[i]}}(n_{1},n_{2},t)&=\frac{1}{\Omega}+\frac{2}{\Omega}\sum_{r=0}^{R-1}\sum_{s=0}^{3r+2}\cos \left(\frac{2\pi\left[k_{1}^{[i]}(r,s)\left(n_{1}-n_{0_{1}}\right)+k_{2}^{[i]} (r,s)\left(n_{2}-n_{0_{2}}\right)\right]}{\Omega}\right)\\ &\times\left(1-q+\frac{q}{3}\left[\cos\left(\frac{2\pi\left(k_{1} ^{[i]}(r,s)-k_{2}^{[i]}(r,s)\right)}{\Omega}\right)+\cos\left(\frac{2\pi k_{1 }^{[i]}(r,s)}{\Omega}\right)+\cos\left(\frac{2\pi k_{2}^{[i]}(r,s)}{\Omega} \right)\right]\right)^{t},\end{split} \tag{12}\] with \(i\in\{\rho,\lambda\}\) indicating the right and left shift respectively. \(\Omega\) is the number of lattice sites in the finite domain, namely \(\Omega=3R^{2}+3R+1\), and \[\begin{split} k_{1}^{[\rho]}(r,s)&=k_{2}^{[\lambda]} (r,s)=R(s+1)+s-r,\\ k_{2}^{[\rho]}(r,s)&=k_{1}^{[\lambda]}(r,s)=R(2-s+3r )+r+1.\end{split} \tag{13}\] We note here that as \(k_{1}(r,s)\) and \(k_{2}(r,s)\) are interchanged under the transition between \(\rho\) and \(\lambda\), the structure function is not dependent on this choice. We further note that one can also find \(\Omega\) using the so-called centered hexagonal number [22; 42]. While the dynamics in the bulk are identical, the walker acts differently at the boundaries depending on the chosen shift (see Fig. 2). As we will see in Secs. VII and VIII, it may lead to marked differences in the first passage probability and mean first passage time (MFPT). We note that Eq. (12) is still valid for the trivial \(R=0\) case where the domain reduces to a single point at the origin. Here, the two summations disappear and the solution reduces to \(P_{\mathbf{n}_{0}}^{(p)_{[i]}}(n_{1},n_{2},t)=\frac{1}{\Omega}=1\), irrespective of the shift, as expected. Figure 2: (Colour online). A schematic showing the difference in boundary dynamics for two specific lattice sites between the left shift, panel (a), and the right shift, panel (b). The dashed arrows show permissible directions of movement at the two chosen boundary sites, and the shape at the end of each arrow corresponds with the shape indicating where that direction of travel leads to.
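Equation (12) can be transcribed directly into code. The following sketch evaluates it (right shift by default), and summing it over the \(\Omega\) sites of the domain, i.e. those with \(|n_{1}|,|n_{2}|,|n_{1}+n_{2}|\leq R\), provides a consistency check:

```python
import numpy as np

def hex_periodic_propagator(n, n0, t, R, q, shift="right"):
    """Closed-form Eq. (12): occupation probability of HCC site n = (n1, n2)
    at time t, for a walker started at n0 on the periodic hexagonal domain."""
    Omega = 3 * R**2 + 3 * R + 1
    dn1, dn2 = n[0] - n0[0], n[1] - n0[1]
    P = 1.0 / Omega
    for r in range(R):
        for s in range(3 * r + 3):               # s = 0, ..., 3r + 2
            k1 = R * (s + 1) + s - r             # right-shift wavenumbers, Eq. (13)
            k2 = R * (2 - s + 3 * r) + r + 1
            if shift == "left":
                k1, k2 = k2, k1
            mu = (1 - q + (q / 3.0) * (np.cos(2 * np.pi * (k1 - k2) / Omega)
                                       + np.cos(2 * np.pi * k1 / Omega)
                                       + np.cos(2 * np.pi * k2 / Omega)))
            P += (2.0 / Omega) * np.cos(2 * np.pi * (k1 * dn1 + k2 * dn2) / Omega) * mu**t
    return P

# consistency check: the probability summed over the Omega domain sites is one
R, q, t = 4, 0.9, 25
total = sum(hex_periodic_propagator((a, b), (1, -2), t, R, q)
            for a in range(-R, R + 1) for b in range(-R, R + 1) if abs(a + b) <= R)
assert abs(total - 1.0) < 1e-10
```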
### Honeycomb Lattice Since the honeycomb lattice is created by considering the hexagonal lattice with internal states, the set of images used to construct the finite hexagonal propagator in Sec. IV can also be applied to the honeycomb case. Using Eq. (7) in the \(6\times 1\) column vectorial equivalent of Eq. (9), the periodically bounded honeycomb LRW propagator is given by (Appendix C) \[\begin{split}\mathbf{\mathcal{P}}_{\mathbf{n}_{0},m_{0}}^{(p)_{[i]}}(n_{ 1},n_{2},t)=\frac{\mathbf{\mu}(0,0)^{t}\cdot\mathbf{U}_{m_{0}}}{\Omega}&+ \frac{1}{\Omega}\sum_{r=0}^{R-1}\sum_{s=0}^{3r+2}\left\{e^{\frac{2\pi i(\mathbf{n}- \mathbf{n}_{0})\cdot\mathbf{k}^{[i]}(r,s)}{\Omega}}\mathbf{\mu}\left(\frac{2\pi k_{1}^{[i]}(r,s )}{\Omega},\frac{2\pi k_{2}^{[i]}(r,s)}{\Omega}\right)^{t}\right.\\ &\left.+e^{\frac{-2\pi i(\mathbf{n}-\mathbf{n}_{0})\cdot\mathbf{k}^{[i]}(r,s)}{ \Omega}}\mathbf{\mu}\left(\frac{-2\pi k_{1}^{[i]}(r,s)}{\Omega},\frac{-2\pi k_{2}^{ [i]}(r,s)}{\Omega}\right)^{t}\right\}\cdot\mathbf{U}_{m_{0}},\end{split} \tag{14}\] where \(k_{1}^{[i]}(r,s),k_{2}^{[i]}(r,s)\) are defined in Eq. (13). To lighten the notation, from here onwards we drop the explicit \((r,s)\) dependence on \(k_{1}^{[i]}(r,s),k_{2}^{[i]}(r,s)\). To obtain \(\mathcal{P}_{\mathbf{n}_{0},m_{0}}^{(p)_{[i]}}(n_{1},n_{2},m,t)\) a scalar dot product is taken, i.e. \(\mathcal{P}_{\mathbf{n}_{0},m_{0}}^{(p)_{[i]}}(n_{1},n_{2},m,t)=\mathbf{U}_{m}^{\sf T }\cdot\mathbf{\mathcal{P}}_{\mathbf{n}_{0},m_{0}}^{(p)_{[i]}}(n_{1},n_{2},t)\). When \(R=0\), one is left with six internal states at the origin and Eq. (14) reduces to the dynamics dictated by its first term. For \(0<q<1\), as \(t\rightarrow\infty\), \(\mathbf{\mu}\left(\frac{2\pi k_{1}^{[i]}}{\Omega},\frac{2\pi k_{2}^{[i]}}{\Omega }\right)^{t}\), \(\mathbf{\mu}\left(\frac{-2\pi k_{1}^{[i]}}{\Omega},\frac{-2\pi k_{2}^{[i]}}{\Omega }\right)^{t}\rightarrow\mathbf{0}\), while \(\mathbf{\mu}\left(0,0\right)^{t}\rightarrow\frac{1}{6}\mathbb{J}\), where \(\mathbb{J}\) is an all-ones matrix (see Appendix C.1), leaving the steady state probability as \(\mathcal{P}_{\mathbf{n}_{0},m_{0}}^{(p)_{[i]}}(n_{1},n_{2},m,t=\infty)=\frac{1}{ 6\Omega}\). Note that due to the odd coordination number, parity issues appear when \(q=1\). That is, if the walker starts on an even (odd) site number, for large even \(t\) the steady state probability on odd (even) sites is \(\mathcal{P}_{\mathbf{n}_{0},m_{0}}^{(p)_{[i]}}(n_{1},n_{2},m,t=\infty)=0\), while on even (odd) sites, \(\mathcal{P}_{\mathbf{n}_{0},m_{0}}^{(p)_{[i]}}(n_{1},n_{2},m,t=\infty)=\frac{1}{3 \Omega}\). This can again be seen by studying \(\mathbf{\mu}\left(0,0\right)\) (Appendix C.1). Note also that the term inside the double summation of Eq. (14) is the sum of a matrix and its Hermitian transpose, which ensures that the propagator is real-valued. We illustrate this by plotting, from Eq. (14), the occupation probability after two separate timesteps in Fig. 3.
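As a numerical illustration of Eq. (14), the sketch below assembles the structure matrix (8) and evaluates the right-shift propagator by matrix powers, using that \(\mathbf{\mu}(-k_{1},-k_{2})=\overline{\mathbf{\mu}(k_{1},k_{2})}\), so the two exponential terms are complex conjugates of each other:

```python
import numpy as np

def mu_honeycomb(k1, k2, q):
    """Structure matrix of Eq. (8)."""
    w = q / 3.0
    M = np.zeros((6, 6), dtype=complex)
    np.fill_diagonal(M, 1 - q)
    for i in range(5):                       # tridiagonal band of B
        M[i, i + 1] = M[i + 1, i] = w
    M[0, 5] = M[5, 0] = w                    # perturbed corners of B
    M[0, 3] = w * np.exp(-1j * k1); M[3, 0] = w * np.exp(1j * k1)
    M[1, 4] = w * np.exp(-1j * k2); M[4, 1] = w * np.exp(1j * k2)
    M[2, 5] = w * np.exp(1j * (k1 - k2)); M[5, 2] = w * np.exp(-1j * (k1 - k2))
    return M

def honeycomb_periodic_propagator(n, n0, m0, t, R, q):
    """Eq. (14), right shift: 6-vector of state occupation probabilities at
    site n for a walker started at (n0, m0), with states labelled 1..6."""
    Omega = 3 * R**2 + 3 * R + 1
    dn1, dn2 = n[0] - n0[0], n[1] - n0[1]
    U0 = np.zeros(6); U0[m0 - 1] = 1.0
    P = np.linalg.matrix_power(mu_honeycomb(0.0, 0.0, q), t).real @ U0 / Omega
    for r in range(R):
        for s in range(3 * r + 3):
            k1 = R * (s + 1) + s - r
            k2 = R * (2 - s + 3 * r) + r + 1
            ph = 2 * np.pi * (k1 * dn1 + k2 * dn2) / Omega
            Mt = np.linalg.matrix_power(
                mu_honeycomb(2 * np.pi * k1 / Omega, 2 * np.pi * k2 / Omega, q), t)
            term = np.exp(1j * ph) * Mt + np.exp(-1j * ph) * Mt.conj()
            P = P + (term @ U0).real / Omega
    return P
```

Summing the returned vector over all states and all \(\Omega\) domain sites gives one, a useful sanity check of the implementation.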
## V Absorbing boundary conditions To obtain closed-form solutions with absorbing boundaries we employ the so-called defect technique [15; 37; 43], placing absorbing defects along boundary sites (or states for the honeycomb lattice). This technique can be applied directly to the hexagonal lattice, while for the honeycomb lattice, we extend it to random walks with internal states. To do this we go beyond the standard way defects are introduced into internal states Master equations. Placing a single defect across the entire WS cell and modifying the transition probability matrix accordingly in the Master equation [44] is a valid approach, but only if the unbounded propagator is chosen as the defect-free one. Here, we allow any known internal states propagator to be the non-defective solution of the Master equation and place defects on specific internal states. For either lattice, we consider the periodic propagator as the defect-free propagator. Since we take the defective points as fully absorbing, the walker gets taken out of the system upon reaching any boundary point. The absence of any dynamics in that situation makes the choice of left or right periodic propagator irrelevant. As such, we drop the \(\rho,\lambda\) superscript. To obtain temporal dynamics from generating functions, here and elsewhere below, we exploit the convenience of the numerical inverse \(z\)-transform [45]. ### Hexagonal Lattice We denote the set of boundary points, i.e. the lattice sites with one or more coordinates equal to \(\pm R\), \(B^{(a)}=\{\mathbf{b}_{1},\mathbf{b}_{2},...,\mathbf{b}_{N}\}\), where \(N=6R\). One finds the generating function of the absorbing propagator as [15] \[\widetilde{P}_{\mathbf{n}_{0}}^{(a)}(\mathbf{n},z)=\widetilde{P}_{\mathbf{n}_{0}}^{(p)}( \mathbf{n},z)-\sum_{j=1}^{N}\widetilde{P}_{\mathbf{b}_{j}}^{(p)}(\mathbf{n},z)\frac{\det( \mathbb{G}^{(j)}(\mathbf{n}_{0},z))}{\det\left(\mathbb{G}(z)\right)}, \tag{15}\] where \(\mathbb{G}(z)_{i,k}=\widetilde{P}_{\mathbf{b}_{k}}^{(p)}(\mathbf{b}_{i},z)\), a \(6R\times 6R\) matrix, is built by considering the defect-free dynamics from one defect to every other defect, and \(\mathbb{G}^{(j)}(\mathbf{n}_{0},z)\) is the same as \(\mathbb{G}\) but with the \(j^{\rm th}\) column replaced with the transpose of the vector \(\left[\widetilde{P}_{\mathbf{n}_{0}}^{(p)}(\mathbf{b}_{1},z),\widetilde{P}_{\mathbf{n}_{0} }^{(p)}(\mathbf{b}_{2},z),\ldots,\widetilde{P}_{\mathbf{n}_{0}}^{(p)}(\mathbf{b}_{N},z)\right]\). ### Honeycomb Lattice We define defective states along the boundary of the honeycomb lattice \(\{(\mathbf{b}_{1},m_{\mathbf{b}_{1}}),(\mathbf{b}_{2},m_{\mathbf{b}_{2}}),...,(\mathbf{b}_{\mathcal{N}},m_{\mathbf{b}_{\mathcal{N}}})\}\). The sites in which these defects are placed are equivalent to the hexagonal case. However, with the inclusion of internal states, the number of defective states is \(\mathcal{N}=6(2R+1)\), that is, two on each standard boundary site and three on each corner. The absorbing propagator for the honeycomb lattice is given as (Appendix D) \[\begin{split}\widetilde{\mathcal{P}}^{(a)}_{\mathbf{n}_{0},m_{0}}& (\mathbf{n},m,z)=\widetilde{\mathcal{P}}^{(p)}_{\mathbf{n}_{0},m_{0}}(\mathbf{n},m,z)- \\ &\sum_{j=1}^{\mathcal{N}}\widetilde{\mathcal{P}}^{(p)}_{(\mathbf{b}_ {j},m_{\mathbf{b}_{j}})}(\mathbf{n},m,z)\frac{\det(\mathbb{H}^{(j)}(\mathbf{n}_{0},m_{0},z))}{\det \left(\mathbb{H}(z)\right)},\end{split} \tag{16}\] where \(\mathbb{H}(z)_{i,k}=\widetilde{\mathcal{P}}^{(p)}_{(\mathbf{b}_{k},m_{\mathbf{b}_{k}}) }(\mathbf{b}_{i},m_{\mathbf{b}_{i}},z)\) and \(\mathbb{H}^{(j)}(\mathbf{n}_{0},m_{0},z)\) is the same as \(\mathbb{H}(z)\), but with the \(j^{\text{th}}\) column replaced with the transpose of the vector \(\left[\widetilde{\mathcal{P}}^{(p)}_{\mathbf{n}_{0},m_{0}}(\mathbf{b}_{1},m_{\mathbf{b}_{1}},z), \ldots,\widetilde{\mathcal{P}}^{(p)}_{\mathbf{n}_{0},m_{0}}(\mathbf{b}_{\mathcal{N}},m_{\mathbf{b}_{\mathcal{N}}},z) \right]\).
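Numerically, the determinant ratios in Eq. (15) need not be formed explicitly: by Cramer's rule they are the components of the solution of a linear system. The sketch below evaluates the absorbing generating function this way for the hexagonal lattice (the periodic generating function follows from Eq. (12) mode by mode, each factor \(\mu^{t}\) becoming \(1/(1-z\mu)\)), together with a simple numerical inverse \(z\)-transform of the kind referred to above:

```python
import numpy as np

def Pp_gen(n0, n, z, R, q):
    """Generating function sum_t z^t P^(p)(t) of the right-shift periodic
    hexagonal propagator, obtained by z-transforming Eq. (12) mode by mode."""
    Omega = 3 * R**2 + 3 * R + 1
    dn1, dn2 = n[0] - n0[0], n[1] - n0[1]
    G = 1.0 / (Omega * (1.0 - z))
    for r in range(R):
        for s in range(3 * r + 3):
            k1 = R * (s + 1) + s - r
            k2 = R * (2 - s + 3 * r) + r + 1
            mu = 1 - q + (q / 3.0) * (np.cos(2 * np.pi * (k1 - k2) / Omega)
                                      + np.cos(2 * np.pi * k1 / Omega)
                                      + np.cos(2 * np.pi * k2 / Omega))
            G += (2.0 / Omega) * np.cos(2 * np.pi * (k1 * dn1 + k2 * dn2) / Omega) / (1.0 - z * mu)
    return G

def absorbing_gen(n, n0, boundary, z, R, q):
    """Eq. (15); the ratios det(G^(j))/det(G) are, by Cramer's rule, the
    entries of the solution c of G c = v."""
    G = np.array([[Pp_gen(bk, bi, z, R, q) for bk in boundary] for bi in boundary])
    v = np.array([Pp_gen(n0, b, z, R, q) for b in boundary])
    c = np.linalg.solve(G, v)
    return Pp_gen(n0, n, z, R, q) - sum(
        cj * Pp_gen(bj, n, z, R, q) for cj, bj in zip(c, boundary))

def invert_z(F, T, rho=0.9, N=1024):
    """Numerical inverse z-transform: evaluate F on the circle of radius rho
    and apply an inverse DFT; the aliasing error decays like rho^N."""
    zs = rho * np.exp(2j * np.pi * np.arange(N) / N)
    vals = np.array([F(zk) for zk in zs])
    f = np.fft.fft(vals) / N
    return (f[:T] / rho ** np.arange(T)).real

# boundary sites: one or more of |n1|, |n2|, |n1 + n2| equal to R
R, q = 3, 0.9
boundary = [(a, b) for a in range(-R, R + 1) for b in range(-R, R + 1)
            if abs(a + b) <= R and max(abs(a), abs(b), abs(a + b)) == R]
assert len(boundary) == 6 * R
P_abs_t = invert_z(lambda z: absorbing_gen((1, 0), (0, 0), boundary, z, R, q), T=40)
```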
## VI Reflecting boundary conditions We now place defects between boundary sites (or states). Taking the periodic propagator as the defect-free solution, we reduce the number of reflective barriers required to make a fully reflective domain compared to, say, the unbounded propagator. The general formalism derived in [16] can be used in the hexagonal lattice with careful consideration of the placement of reflective barriers (see Fig. 4(a)), and we also make it applicable to random walks with internal states for the honeycomb lattice (see Appendix E for the derivation). Defects between boundary sites (states) are placed by modifying the outgoing connections from boundary site (state) \(\mathbf{u}\) to boundary site (state) \(\mathbf{v}\) via the parameter \(\eta_{\mathbf{u},\mathbf{v}}\), where \(0<\eta_{\mathbf{u},\mathbf{v}}\leq\frac{q}{Z}\). While it is possible for \(\eta_{\mathbf{u},\mathbf{v}}\neq\eta_{\mathbf{v},\mathbf{u}}\) (representing one-way or partial reflection) [16], we take them as equivalent with perfect, bi-directional reflection, i.e. \(\eta_{\mathbf{u},\mathbf{v}}=\eta_{\mathbf{v},\mathbf{u}}=\frac{q}{Z}\) for all boundary interactions in either lattice. Despite the 'zig-zag' boundaries, the movement can be thought of as follows: if the walker tries to jump over one of the boundaries, it gets pushed back in (see Fig. 4(b) for the hexagonal case), meaning that reflective dynamics are modelled such that if the walker attempts to escape, it remains at the site (or state) it came from. To find connected boundary sites, one imagines a walker one jump outside the bounded domain and simply subtracts the coordinate of the centroid of the nearest image to this point. While the following equations do depend on whether the left or right shift periodic propagator is taken, it is simply a matter of considering the appropriate set of defective sites (states) for the propagator chosen. Therefore, for ease of notation we drop the \(\rho\), \(\lambda\) superscripts. ### Hexagonal Lattice We consider the set of defective paired sites \(B^{(r)}=\{\{\mathbf{b}_{1},\mathbf{b}^{\prime}_{1}\},\{\mathbf{b}_{2},\mathbf{b}^{\prime}_{2} \},...,\{\mathbf{b}_{N_{1}},\mathbf{b}^{\prime}_{N_{1}}\}\}\) where \(N_{1}=6R+3\). Following [16], and taking \(\eta_{\mathbf{b}_{i},\mathbf{b}^{\prime}_{i}}=\eta_{\mathbf{b}^{\prime}_{i},\mathbf{b}_{i}}= \frac{q}{6}\) for all \(i\), the generating function of the propagator is given by \[\widetilde{P}^{(r)}_{\mathbf{n}_{0}}(\mathbf{n},z)=\widetilde{P}^{(p)}_{\mathbf{n}_{0}}(\mathbf{n},z)-1+\frac{\det(\mathbb{K}(\mathbf{n},\mathbf{n}_{0},z))}{ \det\left(\mathbb{K}(z)\right)}, \tag{17}\] where \(\mathbb{K}(z)\) and \(\mathbb{K}(\mathbf{n},\mathbf{n}_{0},z)\) are \((6R+3)\times(6R+3)\) matrices with elements \[\mathbb{K}(z)_{i,k}=\frac{q}{6}\left[\widetilde{P}^{(p)}_{(\mathbf{b}_{k }-\mathbf{b}^{\prime}_{k})}(\mathbf{b}_{i},z)-\widetilde{P}^{(p)}_{(\mathbf{b}_{k}-\mathbf{b}^{\prime}_{k})}(\mathbf{b}^{\prime}_{i},z)\right]-\frac{\delta_{ik}}{z}, \tag{18}\]
### Honeycomb Lattice The set of pairs of defective states is \(\mathcal{B}^{(r)}=\{\{(\mathbf{b}_{1},m_{\mathbf{b}_{1}}),(\mathbf{b}_{1}^{\prime},m_{\mathbf{b}_{1}^{\prime}})\},...,\{(\mathbf{b}_{\mathcal{N}_{1}},m_{\mathbf{b}_{\mathcal{N}_{1}}}),(\mathbf{b}_{\mathcal{N}_{1}}^{\prime},m_{\mathbf{b}_{\mathcal{N}_{1}}^{\prime}})\}\}\), where the sites \((\mathbf{b}_{1},\mathbf{b}_{2},...,\mathbf{b}_{\mathcal{N}_{1}})\) correspond to the defective pairs required for the corresponding shift in the hexagonal case, making \(\mathcal{N}_{1}=N_{1}\). We adjust the outgoing connections by setting \(\eta_{m_{\mathbf{b}_{i}},m_{\mathbf{b}_{i}^{\prime}}}=\eta_{m_{\mathbf{b}_{i}^{\prime}},m_{\mathbf{b}_{i}}}=\frac{q}{3}\). The generating function of the propagator is given as (see Appendix E) \[\begin{split}\widetilde{\mathcal{P}}^{(r)}_{\mathbf{n}_{0},m_{0}}(\mathbf{n},m,z)&=\widetilde{\mathcal{P}}^{(p)}_{\mathbf{n}_{0},m_{0}}(\mathbf{n},m,z)-1\\ &+\frac{\det(\mathbb{L}(\mathbf{n},m,\mathbf{n}_{0},m_{0},z))}{\det\left(\mathbb{L}(z)\right)},\end{split} \tag{20}\] where \(\mathbb{L}(z)\) and \(\mathbb{L}(\mathbf{n},m,\mathbf{n}_{0},m_{0},z)\) are \((6R+3)\times(6R+3)\) matrices with elements \[\begin{split}\mathbb{L}(z)_{i,k}&=\frac{q}{3}\bigg{[}\widetilde{\mathcal{P}}^{(p)}_{(\mathbf{b}_{k},m_{\mathbf{b}_{k}}-\mathbf{b}_{k}^{\prime},m_{\mathbf{b}_{k}^{\prime}})}(\mathbf{b}_{i},m_{\mathbf{b}_{i}},z)-\\ &\widetilde{\mathcal{P}}^{(p)}_{(\mathbf{b}_{k},m_{\mathbf{b}_{k}}-\mathbf{b}_{k}^{\prime},m_{\mathbf{b}_{k}^{\prime}})}(\mathbf{b}_{i}^{\prime},m_{\mathbf{b}_{i}^{\prime}},z)\bigg{]}-\frac{\delta_{ik}}{z},\end{split} \tag{21}\] and \[\begin{split}\mathbb{L}(\mathbf{n},m,\mathbf{n}_{0},m_{0},z)_{i,k}=-\frac{q}{3}\widetilde{\mathcal{P}}^{(p)}_{(\mathbf{b}_{k},m_{\mathbf{b}_{k}}-\mathbf{b}_{k}^{\prime},m_{\mathbf{b}_{k}^{\prime}})}(\mathbf{n},m,z)\\ \times\Big{[}\widetilde{\mathcal{P}}^{(p)}_{\mathbf{n}_{0},m_{0}}(\mathbf{b}_{i},m_{\mathbf{b}_{i}},z)-\widetilde{\mathcal{P}}^{(p)}_{\mathbf{n}_{0},m_{0}}(\mathbf{b}_{i}^{\prime},m_{\mathbf{b}_{i}^{\prime}},z)\Big{]}+\mathbb{L}(z)_{i,k},\end{split} \tag{22}\] respectively. ## VII First-passage probability in periodic domains Two important quantities in the dynamics of stochastic systems are the first-passage, or first hitting, probability \(F_{\mathbf{n}_{0}}(\mathbf{n},t)\), and the return probability \(R_{\mathbf{n}}(t)\). \(F_{\mathbf{n}_{0}}(\mathbf{n},t)\) represents the time dependence of the probability to reach a target \(\mathbf{n}\) for the first time from the initial condition \(\mathbf{n}_{0}\), while \(R_{\mathbf{n}}(t)\) represents the probability that the walker first returns to its starting site \(\mathbf{n}=\mathbf{n}_{0}\) at time \(t\). The generating functions of these quantities are obtained via the well-known renewal equation [3], which is valid in arbitrary dimensions.
The generalisation to random walks with internal states is straightforward, leading to, in the \(z\)-domain, \(\widetilde{\mathcal{F}}_{\mathbf{n}_{0},m_{0}}(\mathbf{n},m,z)=\widetilde{\mathcal{P}}_{\mathbf{n}_{0},m_{0}}(\mathbf{n},m,z)/\widetilde{\mathcal{P}}_{\mathbf{n},m}(\mathbf{n},m,z)\) and \(\widetilde{\mathcal{R}}_{\mathbf{n},m}(z)=1-1/\widetilde{\mathcal{P}}_{\mathbf{n},m}(\mathbf{n},m,z)\) [19]. In this section, we study the differences between the first-passage temporal dependence of the left and right shift in periodically bounded domains in the presence of a single target. It is well known that the direct trajectories, those that travel in a more direct path from the initial condition to the target, influence the location of the mode of the first-passage probability [46]. As the chosen shift impacts the dynamics at the boundary, if the initial condition and the target are placed across the boundary from one another, the direct trajectories differ between the two shifts. This difference is greater the smaller the number of ways to reach the target (or the variance in the direct trajectories), which occurs when the locations of the initial condition and the target lie only a few jumps across a boundary from one another. In this setting, by considering all the ballistic trajectories, one may expect disparities between the modes of the two shifts' probability, as even if there is an equal number of jumps to reach the target in both cases, one shift may provide more ballistic options than the other. Figure 4: (Colour online). A schematic representation of the reflective hexagonal domain. Panel (a) depicts the required barriers to turn a right shift periodic domain into a fully reflective domain. Panel (b) shows the four full WS cells inside the rectangle in panel (a). In panel (b) we show how the walker tries to escape and is reflected onto the corresponding WS cell. The dotted arrows represent the modified probability of remaining on that lattice site, which depends on \(q\), as indicated. Note that the probability of remaining on a corner site is \(\frac{q}{6}\) greater than on other boundary sites. In Fig. 5, we show one such case by placing the walkers with identical initial conditions, near the bottom left of the domains, and placing the targets across the boundary near the top left corner (see Fig. 5 panels (b), (c)). In this setting, the lower coordination number of the honeycomb lattice ensures a lower variance in the direct trajectories. In turn, this causes the trajectories that differ between the shifts to have a greater effect on the modes of the distribution. While the difference in first-hitting dynamics is evident for the honeycomb case, to observe similar disparities for the hexagonal lattice one needs to move the initial condition and target closer to the boundary. Despite the mode dynamics being considerably different in the honeycomb lattice, the tails of the distribution are very similar, as the tail is heavily dependent on the indirect trajectories [46], i.e. the paths where the walker meanders around the domain and does not hit the target for extended periods of time. In the honeycomb case we see clearly when the indirect trajectories become dominant, as this corresponds to the kinks in the temporal dependence at around \(t\approx 110\). Analytic knowledge of the propagators also allows us to readily calculate the first passage to a set of \(\mathcal{M}\) targets \(\{S\}=\{\mathbf{s}_{1},...,\mathbf{s}_{\mathcal{M}}\}\).
This is done by considering the splitting probabilities, that is, the probability of reaching one target \(\mathbf{s}_{j}\) in \(\{S\}\) before reaching any other, which is given by [15] \[\widetilde{T}_{\mathbf{n}_{0}\rightarrow(\mathbf{s}_{j}|\{S\}-\mathbf{s}_{j})}(z)=\frac{\det(\mathbb{F}^{(j)}(\mathbf{n}_{0},z))}{\det(\mathbb{F}(z))}, \tag{23}\] where \(\mathbb{F}(z)_{k,k}=1\), \(\mathbb{F}(z)_{i,k}=\widetilde{F}_{\mathbf{s}_{k}}(\mathbf{s}_{i},z)\) for \(i\neq k\), and \(\mathbb{F}^{(j)}(\mathbf{n}_{0},z)\) is the same as \(\mathbb{F}(z)\) but with the \(j^{\text{th}}\) column replaced with \(\left[\widetilde{F}_{\mathbf{n}_{0}}(\mathbf{s}_{1},z),\widetilde{F}_{\mathbf{n}_{0}}(\mathbf{s}_{2},z),\ldots,\widetilde{F}_{\mathbf{n}_{0}}(\mathbf{s}_{\mathcal{M}},z)\right]^{\intercal}\). Since the splitting probabilities represent mutually exclusive trajectories, to obtain the generating function for the first passage to any target one simply sums them, i.e. \(\widetilde{T}_{\mathbf{n}_{0}\rightarrow\{S\}}(z)=\sum_{j=1}^{\mathcal{M}}\widetilde{T}_{\mathbf{n}_{0}\rightarrow(\mathbf{s}_{j}|\{S\}-\mathbf{s}_{j})}(z)\). For all known internal states propagators with localised initial conditions, the analogous quantities to those in Eq. (23) are easily found, ensuring a trivial extension to the honeycomb lattice. Figure 5: (Colour online). Time dependent first-passage probability in periodic hexagonal and honeycomb domains. In panel (a) we present the temporal probability for both shifts in both lattices; the placement of the targets and initial conditions is presented in panels (b) and (c). The hexagonal lattice, panel (b), is \(R=13\) (547 lattice sites) and the honeycomb lattice, panel (c), is \(R=5\) (546 lattice states). In both cases, the initial condition is placed near the bottom left corner, that is \(\mathbf{n}_{0}=(1,8,-9)\) in the hexagonal lattice, and \(\mathbf{n}_{0},m_{0}=(1,3,-4),3\) in the honeycomb lattice, while the targets are placed across the periodic boundary near the top left corner, at \(\mathbf{n}=(-8,0,8)\) in the hexagonal lattice and \(\mathbf{n},m=(-4,0,4),3\) in the honeycomb lattice. This ensures that in the hexagonal lattice there is a ballistic trajectory of 10 steps for both shifts, while in the honeycomb lattice the left shift has a ballistic trajectory of 10 steps and the right shift has a ballistic trajectory of 12 steps. We show one of these trajectories using a solid line before the walker crosses the boundary, and a dashed line for the left shift and a dotted line for the right shift after the walker crosses the boundary. For all curves we take \(q=0.85\). ## VIII Mean first passage time Analytic knowledge of the generating functions of the first-passage and return probabilities allows us to obtain closed-form representations of their first moments (\(\mathcal{F}_{\mathbf{n}_{0}\rightarrow\mathbf{n}}\) and \(\mathcal{R}_{\mathbf{n}_{0}}\)) in the hexagonal and honeycomb lattices, found by evaluating the first derivative, with respect to \(z\), of the respective probability generating function at \(z=1\) [3].
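When differentiating the generating function symbolically is inconvenient, the first moment can also be approximated by a finite difference at \(z=1\); a minimal sketch (the step size is an arbitrary choice of ours, and \(\widetilde{F}\) is assumed analytic near \(z=1\), as it is for these finite domains):

```python
def mean_from_generating_function(F_tilde, h=1e-6):
    """First moment from a probability generating function: the derivative
    d F~/dz at z = 1, here via a central finite difference."""
    return (F_tilde(1.0 + h) - F_tilde(1.0 - h)) / (2.0 * h)

# toy check: F~(z) = z describes arrival always at t = 1, so the mean is 1
assert abs(mean_from_generating_function(lambda z: z) - 1.0) < 1e-6
```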
The MFPT to multiple targets is also accessible via \[\mathcal{F}_{\mathbf{n}_{0}\rightarrow\{S\}}=\frac{\det(\mathbb{T}_{0})}{\det(\mathbb{T}_{1})-\det(\mathbb{T})}, \tag{24}\] a general result derived more recently in [15], where \(\mathbb{T}_{ii}=0\), \(\mathbb{T}_{ij}=\mathcal{F}_{\mathbf{s}_{j}\rightarrow\mathbf{s}_{i}}\), \(\mathbb{T}_{0_{ij}}=\mathbb{T}_{ij}-\mathcal{F}_{\mathbf{n}_{0}\rightarrow\mathbf{s}_{i}}\) and \(\mathbb{T}_{1_{ij}}=\mathbb{T}_{ij}-1\). ### Hexagonal Lattice For the hexagonal lattice, the MFPT is given by \[\begin{split}\mathcal{G}_{\mathbf{n}_{0}\rightarrow\mathbf{n}}^{(p)_{[i]}}&=\frac{2}{q}\sum_{r=0}^{R-1}\sum_{s=0}^{3r+2}\left\{\cos\left(\frac{2\pi k_{1}^{[i]}(n_{1}-n_{0_{1}})+2\pi k_{2}^{[i]}(n_{2}-n_{0_{2}})}{\Omega}\right)-1\right\}\\ &\qquad\qquad\times\left\{\frac{1}{3}\left[\cos\left(\frac{2\pi(k_{1}^{[i]}-k_{2}^{[i]})}{\Omega}\right)+\cos\left(\frac{2\pi k_{1}^{[i]}}{\Omega}\right)+\cos\left(\frac{2\pi k_{2}^{[i]}}{\Omega}\right)\right]-1\right\}^{-1},\end{split} \tag{25}\] while for the MRT we confirm Kac's lemma [47]: regardless of the shift, \(\mathcal{R}_{\mathbf{n}}^{(p)}=\Omega=3R^{2}+3R+1\), the inverse of the steady state probability. Knowledge of the generating function of the reflective propagator allows us to study the same statistics in reflective domains [16]. The MFPT is given by \[\mathcal{F}_{\mathbf{n}_{0}\rightarrow\mathbf{n}}^{(r)}=\mathcal{G}_{\mathbf{n}_{0}\rightarrow\mathbf{n}}^{(p)}-1+\frac{\det(\mathbb{F}-\mathbb{F}^{(1)})}{\det(\mathbb{F})}, \tag{26}\] where \[\mathbb{F}_{ij}=\frac{q}{6\Omega}\left[\mathcal{F}_{(\mathbf{b}_{j}-\mathbf{b}_{j}^{\prime})\to\mathbf{b}_{i}}^{(p)}-\mathcal{F}_{(\mathbf{b}_{j}-\mathbf{b}_{j}^{\prime})\to\mathbf{b}_{i}^{\prime}}^{(p)}\right]+\delta_{ij}, \tag{27}\] and \[\mathbb{F}_{ij}^{(1)}=\frac{q\mathcal{F}_{(\mathbf{b}_{j}-\mathbf{b}_{j}^{\prime})\rightarrow\mathbf{n}}^{(p)}}{6\Omega}\left[\mathcal{F}_{(\mathbf{n}_{0}-\mathbf{n})\rightarrow\mathbf{b}_{i}}^{(p)}-\mathcal{F}_{(\mathbf{n}_{0}-\mathbf{n})\to\mathbf{b}_{i}^{\prime}}^{(p)}\right], \tag{28}\] while the MRT is \(\mathcal{R}_{\mathbf{n}}^{(r)}=\mathcal{R}_{\mathbf{n}}^{(p)}\), as expected. In Fig. 6 we plot the MFPT as a function of the target location for two different initial conditions, \(\mathbf{n}_{0}=(0,0,0)\) (panels (a) and (d)) and \(\mathbf{n}_{0}=(13,-13,0)\), the far right corner (panels (b) and (e)). The target, \(\mathbf{s}_{1}^{(\alpha)}\), is placed sequentially in a ring-like manner, anti-clockwise around the ring at circumradius \(R=11\) of the hexagon. For the case where \(\mathbf{n}_{0}=(0,0,0)\), although the displacement between the initial condition and any target is constant, rich dynamics appear as the target is moved around the ring. Owing to the symmetry of the system, in both domains we see oscillations occurring with a wavelength of \(11\), the length of a side of the ring the target is moving around. Peaks are located at the corners of the \(R=11\) ring, while the troughs correspond to the centre of each side. At short times, it is easier for a walker to hit a target at the centre of a side than at a corner, as the number of direct trajectories is greater for the centre target, seen in the modes of Fig. 6(c). Owing to the probability conserving property, the tail of \(F_{\mathbf{n}_{0}}(\mathbf{n},t)\) is then slightly lower, giving a smaller MFPT. Furthermore, the MFPT in the reflective cases is roughly twice that of the corresponding periodic case.
This can be understood by noting that the periodic boundary conditions effectively double the number of trajectories with which the walker can reach a target compared to the reflective case. In panel (b) there is a marked difference between the dynamics in the reflective and periodic domains. As \(\mathbf{n}_{0}\) is the far right point of the domain, the displacement between the initial condition and a target at, for example, \(\mathbf{s}_{1}^{(23)}=(-11,0,11)\) is much greater in the reflective domain than in the periodic domain. In the reflective case, we see a linear increase (decrease) as we move the target further away from (closer to) the initial condition. The linear increase is seen until the target moves around the first corner. We then see small oscillations where, again, targets at the centre of a side produce a local minimum and the peaks are located at the corners. The highest peak corresponds to the target at \(\mathbf{s}_{1}^{(34)}=(-11,11,0)\), the target furthest from the initial condition. We now introduce five other static targets, \(\{\mathbf{s}_{2},...,\mathbf{s}_{6}\}\), placed at other locations within the \(R=11\) ring (given explicitly in the caption of Fig. 6) and move \(\mathbf{s}_{1}^{(\alpha)}\) sequentially as before. For the \(\mathbf{n}_{0}=(0,0,0)\) case, the introduction of more targets minimises the differences between the two boundaries compared to the one-target set-up. This is likely due to the targets being placed across the whole domain, meaning that in many realisations the walker will hit a target before any boundary interaction. In contrast, for \(\mathbf{n}_{0}=(13,-13,0)\), we see behaviour similar to the one-target case since, in the reflective domain, the initial condition renders targets on the opposite side of the domain nearly redundant, as many other targets lie between them and the initial condition. For both initial conditions, we again see a maximum at \(\alpha=34\), the location where \(\mathbf{s}_{1}^{(34)}\) sits next to \(\mathbf{s}_{2}\). As such, one of these targets becomes nearly redundant: if the walker finds one of them, it is likely to find both. In contrast, the location of \(\mathbf{s}_{1}^{(\alpha)}\) giving the shortest MFPT depends on the initial condition. For the case with the initial condition at the origin, the minimum for both the reflective and periodic domains is located near the centre of the bottom right side of the domain. This placement of \(\mathbf{s}_{1}^{(\alpha)}\) fills the space and creates the most widely spread arrangement of targets, meaning that a walker exploring any section of the domain is always in close proximity to a target. For \(\mathbf{n}_{0}=(13,-13,0)\), the effect of the boundary conditions is still visible with multiple targets. In both domains, the minima naturally lie where the moving target is only a few jumps from the initial condition.
### Honeycomb Lattice For the honeycomb lattice, we find the MFPT as \[\begin{split}\mathcal{G}^{(p)_{[i]}}_{\mathbf{n}_{0},m_{0}\rightarrow\mathbf{n},m}=\mathbf{U}^{\mathsf{T}}_{m}\cdot\mathbb{C}\cdot\mathbf{U}_{m_{0}}-\mathbf{U}^{\mathsf{T}}_{m}\cdot\mathbb{C}\cdot\mathbf{U}_{m}+\\ \mathbf{U}^{\mathsf{T}}_{m}\cdot\left[6\sum_{r=0}^{R-1}\sum_{s=0}^{3r+2}\left\{e^{\frac{2\pi i(\mathbf{n}-\mathbf{n}_{0})\cdot\mathbf{k}^{[i]}}{\Omega}}\left[\mathbf{\mu}\left(\frac{2\pi k_{1}^{[i]}}{\Omega},\frac{2\pi k_{2}^{[i]}}{\Omega}\right)-\mathbb{I}\right]^{-1}+e^{\frac{-2\pi i(\mathbf{n}-\mathbf{n}_{0})\cdot\mathbf{k}^{[i]}}{\Omega}}\left[\mathbf{\mu}\left(\frac{-2\pi k_{1}^{[i]}}{\Omega},\frac{-2\pi k_{2}^{[i]}}{\Omega}\right)-\mathbb{I}\right]^{-1}\right\}\right]\cdot\mathbf{U}_{m_{0}}\\ -\mathbf{U}^{\mathsf{T}}_{m}\cdot\left[6\sum_{r=0}^{R-1}\sum_{s=0}^{3r+2}\left\{\left[\mathbf{\mu}\left(\frac{2\pi k_{1}^{[i]}}{\Omega},\frac{2\pi k_{2}^{[i]}}{\Omega}\right)-\mathbb{I}\right]^{-1}+\left[\mathbf{\mu}\left(\frac{-2\pi k_{1}^{[i]}}{\Omega},\frac{-2\pi k_{2}^{[i]}}{\Omega}\right)-\mathbb{I}\right]^{-1}\right\}\right]\cdot\mathbf{U}_{m},\end{split} \tag{29}\] where \(\mathbb{C}\) is a \(6\times 6\) symmetric circulant matrix with the first row \(c=\left[5-\frac{9}{q},5-\frac{4}{q},5-\frac{3}{q},5-\frac{4}{q},5-\frac{3}{q},5-\frac{4}{q}\right]\) such that \(\mathbb{C}_{ij}=c_{j-i\pmod{6}}\). The term \(\mathbf{U}^{\mathsf{T}}_{m}\cdot\mathbb{C}\cdot\mathbf{U}_{m_{0}}-\mathbf{U}^{\mathsf{T}}_{m}\cdot\mathbb{C}\cdot\mathbf{U}_{m}\) is independent of the choice of shift and governs the MFPT in the degenerate \(R=0\) case, which is periodic in \(|m-m_{0}|\) and given explicitly in Appendix F.1. It gives the MFPT for any \(m,m_{0}\) pairs that are either one jump away or two jumps away from one another (see Fig. 1(b) with \(R=0\) for a visual understanding). We again confirm Kac's lemma, as the MRT is found to be \(\mathcal{R}^{(p)}_{\mathbf{n}_{0},m_{0}}=6\Omega\), the number of states in the lattice. For the reflective case, one obtains expressions similar to those of the hexagonal lattice with reflecting boundaries (see Appendix F.2). Using Eq. (29) we study the differences between the MFPTs of the two shifts as a function of the number of targets. We sequentially introduce targets in two ways, either building a 'wall' of targets along the top of the domain or placing targets at random locations, with both set-ups shown pictorially in panel (c) of Fig. 7. In both the random and the 'wall' placement of targets, the initial condition and the location of the first target correspond to the set-up used to obtain the full FP probability given in Fig. 5. As panel (a) of Fig. 7 shows, for both target set-ups, adding targets lowers the MFPT, as expected. In the randomly placed target case, an increase in the number \(E\) of targets reduces the differences between the MFPTs if the added target is placed in the bulk of the domain, as seen in panel (b) of Fig. 7. This reduction is due to the increasing likelihood that the walker reaches a target before any boundary interactions occur. If, on the other hand, the new target is placed on, or very close to, the boundary (\(E=5\), \(E=9\) and \(E=12\)), slight increases are seen in \(\Delta\mathcal{T}_{\mathbf{n}_{0},m_{0}\rightarrow\mathbf{n},m}\), the difference between the two shifts' MFPTs, further emphasising the importance of boundary interactions in the periodic propagators. In the case of the 'wall', we add the targets to the right of the initial target until we reach it again from the left.
In this case we see a dramatic convergence between the two shifts before the mean of the right shift becomes lower. With a wall of targets placed near the top of the domain, the easiest route for the walker to complete its search is the few steps across the boundary. As exemplified by the higher first-passage probability mode of the left shift honeycomb walker (Fig. 5), when there are few targets the searcher benefits from the left shift boundary condition, since the distance to the target is smaller. However, as we build the wall to the right, this advantage lessens until sufficient targets are added that it becomes more beneficial to utilise the right shift walker. Finally, when the wall is complete (\(E=14\)), we see negligible differences between the two shifts. ## IX Conclusions The implementation of the method of images to obtain periodically bounded LRWs in square geometries has been known for some time [37; 4]. Somewhat surprisingly, the same could not be said about hexagonal lattices. Here, we have constructed the image set for a lattice in HCC and derived the exact spatio-temporal dynamics of a LRW on periodic hexagonal and honeycomb lattices. By generalising the defect technique to hexagonal geometries, we have found the absorbing and reflecting propagators for both lattices. We have then utilised these propagators to obtain expressions such as the return and first-passage probabilities and their means. We note that while we limit ourselves to deploying the defect technique for the dynamics in hexagonally constrained spatial domains, it is possible to place both absorbing and reflecting sites in such a way as to confine the domain to other shapes, for example a triangle. Moreover, the formalism may be applied to other periodic propagators in hexagonal geometries, for example a biased LRW [11], a resetting random walker [13], or when the space is composed of different media [14]. Dynamics on other lattice geometries are also available through our internal states procedure. For example, by placing three internal states in a triangular structure, one can create the so-called tri-hexagonal lattice [48], or, with four internal states, one can achieve the square-octagon tessellation seen in the theorised T-graphene structure [49]. Other potential directions include finding continuum limits of the periodic propagator and placing absorbing defects [38] or reflecting defects [50] to obtain diffusive dynamics in hexagonally shaped domains, avoiding the need to solve the diffusion equation numerically in these geometries. We conclude by noting other potential applications of our work. These include modelling neutron diffusion in a nuclear reactor core [51; 52], the transmission of an infectious pathogen in a population of territorial animals [7; 22; 23; 53], diffusion on a SWCN with topological defects such as dislocations [54], and amoeboid migration in Petri dishes with hexagonally placed micropillars [55]. ###### Acknowledgements. LG acknowledges funding from the Biotechnology and Biological Sciences Research Council (BBSRC) Grant No. BB/T012196/1 and the Natural Environment Research Council (NERC) Grant No. NE/W00545X/1, while DM and SS acknowledge funding from Engineering and Physical Sciences Research Council (EPSRC) DTP studentships with Reference Nos. 2610858 and 2123342, respectively. ## Appendix A Placement of Images Figure 8 shows the first ring of the infinite set of images for an \(R=1\) domain, as per the discussion in Sec. IV. We refer to the shift with respect to the location of the top red image, that is, whether it is to the left or the right of the main domain. Figure 8: (Colour online).
A schematic representation of the nearest-neighbour images for panel (a), the left shift, and panel (b), the right shift, in an \(R=1\) domain. In this case, we show in the central domain the LRW starting location as a filled circle, while the first ring of images has open circles. For ease of visual comparison between the left and right shift, each hexagon in the nearest-neighbour image ring is coloured differently. ## Appendix B Derivation of the Periodic Hexagonal Propagator Using the periodic image set, Eqs. (10) and (11), in the unbounded lattice Green's function, Eq. (4) in the main text, and isolating the image contribution in the exponential, we obtain \[\widetilde{P}_{\mathbf{n}_{0}}^{[i]}(n_{1},n_{2},z)=\frac{1}{(2\pi)^{2}}\int_{-\pi}^{\pi}\int_{-\pi}^{\pi}\sum_{m_{1}=-\infty}^{\infty}\sum_{m_{2}=-\infty}^{\infty}\frac{e^{i[(\mathbf{n}-\mathbf{n}_{0})\cdot\mathbf{k}]}e^{i[\mathbf{m}\cdot\mathbb{D}_{[i]}\cdot\mathbf{k}]}}{1-z\mu(k_{1},k_{2})}dk_{1}dk_{2}, \tag{12}\] where \(\mathbf{m}=(m_{1},m_{2})\), and for the right shift \[\mathbb{D}_{[\rho]}=\begin{bmatrix}-R&-R-1\\ 2R+1&-R\end{bmatrix}, \tag{13}\] while for the left shift \[\mathbb{D}_{[\lambda]}=\begin{bmatrix}2R+1&-R-1\\ -R&2R+1\end{bmatrix}. \tag{14}\] Equations (12), (13), and (14) allow us to connect with the literature on Fourier analysis in hexagonal domains [56; 39] and utilise the distributional form of the Poisson summation formula associated with the hexagonal lattice, \[\sum_{m_{1}=-\infty}^{\infty}\sum_{m_{2}=-\infty}^{\infty}e^{i[\mathbf{m}\cdot\mathbb{D}_{[i]}\cdot\mathbf{k}]}=\frac{(2\pi)^{2}}{\Omega}\sum_{m_{1}=-\infty}^{\infty}\sum_{m_{2}=-\infty}^{\infty}\delta(\mathbf{k}-2\pi\mathbb{D}_{[i]}^{-1}\cdot\mathbf{m}^{\intercal}), \tag{15}\] where \(\Omega=3R^{2}+3R+1\) is the number of sites in the hexagonal domain. The equivalent result for the square lattice can be found in [57]. We note here the importance of dropping the \(n_{3}\) dependence from Eq. (2). If the full three-coordinate representation of HCC were chosen, \(\mathbb{D}_{[\rho,\lambda]}\) would be \(3\times 3\) matrices with three linearly dependent rows, making \(\mathbb{D}_{[i]}\) singular. Applying Eq. (15) to Eq. (12) and shifting the integral limits, owing to the periodicity of the integrand, we obtain \[\widetilde{P}_{\mathbf{n}_{0}}^{[i]}(n_{1},n_{2},z)=\int_{-\varepsilon}^{2\pi-\varepsilon}\int_{-\varepsilon}^{2\pi-\varepsilon}\sum_{m_{1}=-\infty}^{\infty}\sum_{m_{2}=-\infty}^{\infty}\frac{e^{i[(\mathbf{n}-\mathbf{n}_{0})\cdot\mathbf{k}]}\delta(\mathbf{k}-2\pi\mathbb{D}_{[i]}^{-1}\cdot\mathbf{m}^{\intercal})}{\Omega\left[1-z\mu(k_{1},k_{2})\right]}dk_{1}dk_{2}, \tag{16}\] where the parameter \(0<\varepsilon\leq\frac{2\pi}{\Omega}\) avoids placing a singularity of the Dirac delta on the integral bound. The values where the Dirac delta is non-zero are given by \[\begin{split} k_{1}^{[\rho]}&=\frac{2\pi}{\Omega}[-Rm_{1}+(R+1)m_{2}],\\ k_{2}^{[\rho]}&=\frac{-2\pi}{\Omega}[(2R+1)m_{1}+Rm_{2}],\end{split} \tag{17}\] for the right shift and \[\begin{split} k_{1}^{[\lambda]}&=\frac{2\pi}{\Omega}[(2R+1)m_{1}+(R+1)m_{2}],\\ k_{2}^{[\lambda]}&=\frac{2\pi}{\Omega}[Rm_{1}+(2R+1)m_{2}],\end{split} \tag{18}\] for the left shift.
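As a quick numerical check of this structure, the distinct singularity locations for the right shift, Eq. (17), can be enumerated modulo \(\Omega\), since each \(k\) is \(2\pi/\Omega\) times an integer; a minimal sketch (ours, not the paper's code):

```python
def delta_points(R):
    """Distinct Dirac-delta singularities of Eq. (16) for the right shift,
    Eq. (17): each k is (2*pi/Omega) * integer, so work with integers mod Omega."""
    Omega = 3 * R**2 + 3 * R + 1
    return {((-R * m1 + (R + 1) * m2) % Omega,
             (-(2 * R + 1) * m1 - R * m2) % Omega)
            for m1 in range(Omega) for m2 in range(Omega)}

for R in range(1, 6):
    assert len(delta_points(R)) == 3 * R**2 + 3 * R + 1  # exactly Omega modes
```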
To proceed, one finds the values of \(m_{1}\) and \(m_{2}\) which lead to singularities that lie within the integral bounds, where each \((m_{1},m_{2})\) corresponds to a unique point in the finite domain. However, due to the non-orthogonality of the coordinate points, one cannot independently sum \(m_{1}\) and \(m_{2}\) along the length of each axis, as one does for the square lattice. To overcome this, we parameterise \(k_{1}^{[i]},k_{2}^{[i]}\) via Eq. (13), alongside their corresponding negative values, and create the nested summation in Eq. (12) in the main text. Figure 9 shows the validity of this parameterisation for the left (panel (a)) and right (panel (b)) shift. Upon substituting these parameterised values into Eq. (16) and simplifying the complex exponential, one obtains the exact periodic propagator, Eq. (12). ## Appendix C Derivation of the Periodic Honeycomb Propagator Starting from Eq. (7) in the main text and applying the images defined in Sec. IV leads to \[\widetilde{\mathbf{\mathcal{P}}}_{\mathbf{n}_{0},m_{0}}^{(p)_{[i]}}(n_{1},n_{2},z)=\frac{1}{\Omega}\int_{-\pi}^{\pi}\int_{-\pi}^{\pi}\sum_{m_{1}=-\infty}^{\infty}\sum_{m_{2}=-\infty}^{\infty}\delta(\mathbf{k}-2\pi\mathbb{D}_{[i]}^{-1}\cdot\mathbf{m}^{\intercal})e^{i[(\mathbf{n}-\mathbf{n}_{0})\cdot\mathbf{k}]}\left[\mathbb{I}-z\mathbf{\mu}(k_{1},k_{2})\right]^{-1}\cdot\mathbf{U}_{m_{0}}dk_{1}dk_{2}. \tag{10}\] Expanding \(\left[\mathbb{I}-z\mathbf{\mu}(k_{1},k_{2})\right]^{-1}\) in powers of \(z\), \(\left[\mathbb{I}-z\mathbf{\mu}(k_{1},k_{2})\right]^{-1}=\sum_{t=0}^{\infty}\left[z\mathbf{\mu}(k_{1},k_{2})\right]^{t}\), and shifting the integral limits, one has \[\mathbf{\mathcal{P}}_{\mathbf{n}_{0},m_{0}}^{(p)_{[i]}}(n_{1},n_{2},t)=\frac{1}{\Omega}\int_{-\varepsilon}^{2\pi-\varepsilon}\int_{-\varepsilon}^{2\pi-\varepsilon}\sum_{m_{1}=-\infty}^{\infty}\sum_{m_{2}=-\infty}^{\infty}\delta(\mathbf{k}-2\pi\mathbb{D}_{[i]}^{-1}\cdot\mathbf{m}^{\intercal})e^{i[(\mathbf{n}-\mathbf{n}_{0})\cdot\mathbf{k}]}\mathbf{\mu}(k_{1},k_{2})^{t}\cdot\mathbf{U}_{m_{0}}dk_{1}dk_{2}. \tag{11}\] Using the known parametrisation of the Dirac delta (see Appendix B), one finds the closed-form solution for the honeycomb lattice, shown in Eq. (14) of the main text. ### Steady state behaviour in the honeycomb lattice Since the eigenvalues and eigenvectors of \(\mathbf{\mu}\left(0,0\right)\) are readily available, one can diagonalise the symmetric matrix, \(\mathbf{\mu}\left(0,0\right)=\mathbb{P}\mathbb{E}\mathbb{P}^{-1}\), where \(\mathbb{E}_{1,1}=1\), \(\mathbb{E}_{i,i}=1-q\), \(2\leq i\leq 5\), \(\mathbb{E}_{6,6}=1-2q\), \(\mathbb{E}_{i,j}=0\) otherwise, and \[\mathbb{P}=\frac{1}{\sqrt{6}}\begin{bmatrix}-1&1&0&-\sqrt{3}&0&-1\\ 1&1&-\sqrt{3}&0&-1&0\\ -1&1&0&0&0&2\\ 1&1&0&0&2&0\\ -1&1&0&\sqrt{3}&0&-1\\ 1&1&\sqrt{3}&0&-1&0\end{bmatrix}. \tag{12}\] For \(0<q<1\), \(\mathbb{P}\mathbb{E}^{t}\mathbb{P}^{-1}\to\mathbb{P}\overline{\mathbb{E}}\mathbb{P}^{-1}\) as \(t\to\infty\), where \(\overline{\mathbb{E}}_{1,1}=1\) and \(\overline{\mathbb{E}}_{i,j}=0\) otherwise. Evaluating \(\mathbb{P}\overline{\mathbb{E}}\mathbb{P}^{-1}\), it is straightforward to find \(\mathbf{\mu}(0,0)^{t}\to\frac{1}{6}\mathbb{J}\) as \(t\to\infty\). Figure 9: The \(k_{1}^{[i]}\), \(k_{2}^{[i]}\) values that give a singularity of the Dirac delta in Eq. (11) for an \(R=3\) domain, for (a) the left shift and (b) the right shift. The large circles represent \(k_{1}^{[i]}\), \(k_{2}^{[i]}\) found numerically by summing over \(m_{1}\), \(m_{2}\).
The corresponding parameterised values, Eq. (13), are represented by crosses. The first three (\(R\)) diagonal rows correspond to the positive \(k_{1}^{[i]}\), \(k_{2}^{[i]}\) values, while the remaining three are the negative ones, where for ease of visualisation we have added a \(2\pi\) phase to the negative terms. The term corresponding to the steady state is shown at \(\left(0,0\right)\). Note that in this case there are \(\Omega=37\) points that need to be parameterised. For the \(q=1\) case one has to be more careful, as \(\lim_{t\rightarrow\infty}\mathbb{E}_{6,6}^{t}=\lim_{t\rightarrow\infty}(-1)^{t}\). Here, \(\mathbf{\mu}(0,0)\) is a hollow Toeplitz matrix with alternating bands of \(0\) and \(\frac{1}{3}\). By inspecting this matrix, it is clear that all odd powers revert \(\mathbf{\mu}(0,0)\) onto itself, while even powers 'swap' the bands, giving rise to the alternating steady state probabilities in this case. ## Appendix D Propagator with Absorbing Defects with Internal States Here we outline the generalisation of the defect technique on the lattice to random walks with internal states. It closely follows, and generalises, the derivation in Sec. 2.1 of [15]. We begin with the general Master equation governing the dynamics of the occupation probability of a random walker on a lattice with defective internal states \((\mathbf{b},m_{\mathbf{b}})\) in the set \(\mathcal{B}\) with \(\mathcal{N}\) defects, \[\begin{split}\mathcal{P}(\mathbf{n},m,t+1)&=\sum_{\mathbf{n}^{\prime}}\sum_{m^{\prime}}A(\mathbf{n},m,\mathbf{n}^{\prime},m^{\prime})\mathcal{P}(\mathbf{n}^{\prime},m^{\prime},t),\ \ \mathbf{n},m\notin\mathcal{B}\\ \mathcal{P}(\mathbf{b}_{i},m_{\mathbf{b}_{i}},t+1)&=(1-\rho_{\mathbf{b}_{i},m_{\mathbf{b}_{i}}})\sum_{\mathbf{n}^{\prime}}\sum_{m^{\prime}}A(\mathbf{b}_{i},m_{\mathbf{b}_{i}},\mathbf{n}^{\prime},m^{\prime})\mathcal{P}(\mathbf{n}^{\prime},m^{\prime},t),\ \ \mathbf{b}_{i},m_{\mathbf{b}_{i}}\in\mathcal{B},\end{split} \tag{10}\] where \(i\in\{1,...,|\mathcal{B}|\}\), \(A(\mathbf{n},m,\mathbf{n}^{\prime},m^{\prime})\) is the transition probability tensor from state \(\mathbf{n}^{\prime},m^{\prime}\) to state \(\mathbf{n},m\), and \(\rho_{\mathbf{b}_{i},m_{\mathbf{b}_{i}}}\) (\(0\leq\rho_{\mathbf{b}_{i},m_{\mathbf{b}_{i}}}\leq 1\)) governs the probability of getting absorbed at defect \(\mathbf{b}_{i},m_{\mathbf{b}_{i}}\), with \(\rho_{\mathbf{b}_{i},m_{\mathbf{b}_{i}}}=1\) representing perfect trapping efficiency at that state. To proceed, one first considers the \(\rho_{\mathbf{b}_{i},m_{\mathbf{b}_{i}}}\neq 1\) case. For convenience, we combine Eq. (10) into one equation, \[\mathcal{P}(\mathbf{n},m,t+1)=\sum_{\mathbf{n}^{\prime}}\sum_{m^{\prime}}\bigg{[}A(\mathbf{n},m,\mathbf{n}^{\prime},m^{\prime})\mathcal{P}(\mathbf{n}^{\prime},m^{\prime},t)-\sideset{}{{}^{\prime}}{\sum}_{\mathbf{b}}\sum_{m_{\mathbf{b}}}\rho_{\mathbf{b},m_{\mathbf{b}}}\delta_{\mathbf{n}\mathbf{b}}\delta_{mm_{\mathbf{b}}}A(\mathbf{b},m_{\mathbf{b}},\mathbf{n}^{\prime},m^{\prime})\mathcal{P}(\mathbf{n}^{\prime},m^{\prime},t)\bigg{]}, \tag{11}\] where the primed summation is over all defective sites containing a defective state, and the following summation is over all the states \(m_{\mathbf{b}}\) in that site. The formal solution is the defect-free propagator plus the defect term convoluted in time and space with the defect-free propagator [15].
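For orientation, the defective Master equation above is straightforward to iterate directly once the states are flattened to a single index; a minimal sketch (the dense-matrix encoding of the transition tensor \(A\) is our assumption):

```python
import numpy as np

def step(P, A, defects, rho):
    """One time step of the defective Master equation: A[s, s2] is the
    transition probability from flattened state s2 to s, and rho[s] is the
    absorption probability applied at the defective states."""
    P_new = A @ P                     # defect-free update for every state
    for s in defects:
        P_new[s] *= 1.0 - rho[s]      # survival factor at each defect
    return P_new
```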
Calling \(\Psi_{\mathbf{n}_{0},m_{0}}(\mathbf{n},m,t)\) the defect-free propagator (the periodic propagator in this case) and applying the localised initial condition \(\mathcal{P}(\mathbf{n},m,0)=\delta_{\mathbf{n}\mathbf{n}_{0}}\delta_{mm_{0}}\left[(1-\rho_{\mathbf{n}_{0}m_{0}})\delta_{\mathbf{n}_{0}m_{0}\in\mathcal{B}}+\delta_{\mathbf{n}_{0}m_{0}\notin\mathcal{B}}\right]\), one obtains, in the \(z\)-domain, \[\widetilde{\mathcal{P}}_{\mathbf{n}_{0},m_{0}}(\mathbf{n},m,z)=\widetilde{\Psi}_{\mathbf{n}_{0},m_{0}}(\mathbf{n},m,z)-\sideset{}{{}^{\prime}}{\sum}_{\mathbf{b}}\sum_{m_{\mathbf{b}}}\frac{\rho_{\mathbf{b},m_{\mathbf{b}}}}{1-\rho_{\mathbf{b},m_{\mathbf{b}}}}\widetilde{\Psi}_{\mathbf{b},m_{\mathbf{b}}}(\mathbf{n},m,z)\widetilde{\mathcal{P}}_{\mathbf{n}_{0},m_{0}}(\mathbf{b},m_{\mathbf{b}},z). \tag{12}\] After setting \(\mathbf{n},m\) to all absorbing states \(\mathbf{b},m_{\mathbf{b}}\), Eq. (12) can be solved via Cramer's rule, giving \[\widetilde{\mathcal{P}}_{\mathbf{n}_{0},m_{0}}(\mathbf{b}_{j},m_{\mathbf{b}_{j}},z)=(1-\rho_{\mathbf{b}_{j},m_{\mathbf{b}_{j}}})\frac{\det(\mathbb{H}^{(j)}(\mathbf{\rho},\mathbf{n}_{0},m_{0},z))}{\det(\mathbb{H}(\mathbf{\rho},z))}. \tag{13}\] Equation (13) represents the generating function of the probability of being at defective state \(\mathbf{b}_{j}\), \(m_{\mathbf{b}_{j}}\) at time \(t\) without having been absorbed at any of the other states in the set \(\mathcal{B}\). The elements of the matrix \(\mathbb{H}(\mathbf{\rho},z)\) are given as \(\mathbb{H}_{k,k}(\mathbf{\rho},z)=1-\rho_{\mathbf{b}_{k},m_{\mathbf{b}_{k}}}+\rho_{\mathbf{b}_{k},m_{\mathbf{b}_{k}}}\widetilde{\Psi}_{\mathbf{b}_{k},m_{\mathbf{b}_{k}}}(\mathbf{b}_{k},m_{\mathbf{b}_{k}},z)\) and \(\mathbb{H}_{i,k}(\mathbf{\rho},z)=\rho_{\mathbf{b}_{k},m_{\mathbf{b}_{k}}}\widetilde{\Psi}_{\mathbf{b}_{k},m_{\mathbf{b}_{k}}}(\mathbf{b}_{i},m_{\mathbf{b}_{i}},z)\) for \(i\neq k\), while \(\mathbb{H}^{(j)}(\mathbf{\rho},\mathbf{n}_{0},m_{0},z)\) is the same as \(\mathbb{H}(\mathbf{\rho},z)\) but with the \(j^{\text{th}}\) column replaced with \(\left(\widetilde{\Psi}_{\mathbf{n}_{0},m_{0}}(\mathbf{b}_{1},m_{\mathbf{b}_{1}},z),...,\widetilde{\Psi}_{\mathbf{n}_{0},m_{0}}(\mathbf{b}_{\mathcal{N}},m_{\mathbf{b}_{\mathcal{N}}},z)\right)^{\intercal}\). Substituting Eq. (13) into Eq. (12) and taking the limit \(\rho_{\mathbf{b}_{i},m_{\mathbf{b}_{i}}}\to 1\) gives the defective propagator of Eq. (16) in the main text, where we have taken \(\rho_{\mathbf{b}_{k},m_{\mathbf{b}_{k}}}=1\) for all \(k\) to model the fully absorbing boundary. As such, in Eq. (16) we have dropped the \(\mathbf{\rho}\) dependence in \(\mathbb{H}\). ## Appendix E Propagator with Inert Spatial Heterogeneities with Internal States Here we outline the derivation of the defect technique accounting for inert spatial heterogeneities in random walks with internal states. It closely follows, and generalises, [16] (see Section I of the Supplementary Material). In this case, the defects appear in pairs, \(\mathcal{B}=\{\{(\mathbf{b}_{1},m_{\mathbf{b}_{1}}),(\mathbf{b}_{1}^{\prime},m_{\mathbf{b}_{1}^{\prime}})\},...,\{(\mathbf{b}_{\mathcal{N}},m_{\mathbf{b}_{\mathcal{N}}}),(\mathbf{b}_{\mathcal{N}}^{\prime},m_{\mathbf{b}_{\mathcal{N}}^{\prime}})\}\}\), that is, in the case of the reflective propagator, the boundary states and their respective neighbours across the periodic boundary.
The dynamics are given by \[\mathcal{P}(\mathbf{n},m,t+1)=\sum_{\mathbf{n^{\prime}}}\sum_{m^{\prime}}A(\mathbf{n},m,\mathbf{n^{\prime}},m^{\prime})\mathcal{P}(\mathbf{n^{\prime}},m^{\prime},t), \tag{100}\] when the walker is not on a defective state. Instead, when the walker is on a defective state we have \[\mathcal{P}(\mathbf{b}_{i},m_{\mathbf{b}_{i}},t+1)=\sum_{\mathbf{n^{\prime}}}\sum_{m^{\prime}}A(\mathbf{b}_{i},m_{\mathbf{b}_{i}},\mathbf{n^{\prime}},m^{\prime})\mathcal{P}(\mathbf{n^{\prime}},m^{\prime},t)+\eta_{m_{\mathbf{b}_{i}},m_{\mathbf{b}_{i}^{\prime}}}\mathcal{P}(\mathbf{b}_{i}^{\prime},m_{\mathbf{b}_{i}^{\prime}},t)-\eta_{m_{\mathbf{b}_{i}^{\prime}},m_{\mathbf{b}_{i}}}\mathcal{P}(\mathbf{b}_{i},m_{\mathbf{b}_{i}},t), \tag{101}\] \[\mathcal{P}(\mathbf{b}_{i}^{\prime},m_{\mathbf{b}_{i}^{\prime}},t+1)=\sum_{\mathbf{n^{\prime}}}\sum_{m^{\prime}}A(\mathbf{b}_{i}^{\prime},m_{\mathbf{b}_{i}^{\prime}},\mathbf{n^{\prime}},m^{\prime})\mathcal{P}(\mathbf{n^{\prime}},m^{\prime},t)+\eta_{m_{\mathbf{b}_{i}^{\prime}},m_{\mathbf{b}_{i}}}\mathcal{P}(\mathbf{b}_{i},m_{\mathbf{b}_{i}},t)-\eta_{m_{\mathbf{b}_{i}},m_{\mathbf{b}_{i}^{\prime}}}\mathcal{P}(\mathbf{b}_{i}^{\prime},m_{\mathbf{b}_{i}^{\prime}},t). \tag{102}\] Once again, combining Eqs. (101) and (102) into one equation and taking the \(z\)-transform, one obtains \[\begin{split}\widetilde{\mathcal{P}}(\mathbf{n},m,z)-\mathcal{P}(\mathbf{n},m,0)=z\sum_{\mathbf{n^{\prime}}}\sum_{m^{\prime}}A(\mathbf{n},m,\mathbf{n^{\prime}},m^{\prime})\widetilde{\mathcal{P}}(\mathbf{n^{\prime}},m^{\prime},z)+z\sum_{i=1}^{\mathcal{N}}\left\{\delta_{\mathbf{b}_{i}\mathbf{n}}\delta_{m_{\mathbf{b}_{i}}m}-\delta_{\mathbf{b}_{i}^{\prime}\mathbf{n}}\delta_{m_{\mathbf{b}_{i}^{\prime}}m}\right\}\\ \times\left[\eta_{m_{\mathbf{b}_{i}},m_{\mathbf{b}_{i}^{\prime}}}\widetilde{\mathcal{P}}(\mathbf{b}_{i}^{\prime},m_{\mathbf{b}_{i}^{\prime}},z)-\eta_{m_{\mathbf{b}_{i}^{\prime}},m_{\mathbf{b}_{i}}}\widetilde{\mathcal{P}}(\mathbf{b}_{i},m_{\mathbf{b}_{i}},z)\right],\end{split} \tag{103}\] with the parameters \(\eta_{\mathbf{u},\mathbf{v}}\) defined in the main text. Assuming the defect-free solution \(\widetilde{\Psi}_{\mathbf{n}_{0},m_{0}}(\mathbf{n},m,z)\) is known, i.e. the propagator of Eq. (100), which in our case we again take as the periodic propagator, the general solution of Eq. (103) for a localised initial condition \(\mathcal{P}(\mathbf{n},m,0)=\delta_{\mathbf{n}\mathbf{n}_{0}}\delta_{mm_{0}}\) is \[\begin{split}\widetilde{\mathcal{P}}_{\mathbf{n}_{0},m_{0}}(\mathbf{n},m,z)&=\widetilde{\Psi}_{\mathbf{n}_{0},m_{0}}(\mathbf{n},m,z)+z\sum_{j=1}^{\mathcal{N}}\widetilde{\Psi}_{(\mathbf{b}_{j},m_{\mathbf{b}_{j}}-\mathbf{b}_{j}^{\prime},m_{\mathbf{b}_{j}^{\prime}})}(\mathbf{n},m,z)\\ &\times\left[\eta_{m_{\mathbf{b}_{j}},m_{\mathbf{b}_{j}^{\prime}}}\widetilde{\mathcal{P}}(\mathbf{b}_{j}^{\prime},m_{\mathbf{b}_{j}^{\prime}},z)-\eta_{m_{\mathbf{b}_{j}^{\prime}},m_{\mathbf{b}_{j}}}\widetilde{\mathcal{P}}(\mathbf{b}_{j},m_{\mathbf{b}_{j}},z)\right].\end{split} \tag{104}\] We again solve by creating \(\mathcal{N}\) simultaneous equations, one for each defect pair.
Using Cramer's rule we obtain the propagator, \[\widetilde{\mathcal{P}}_{\mathbf{n}_{0},m_{0}}(\mathbf{n},m,z)=\widetilde{\Psi}_{\mathbf{n}_{0},m_{0}}(\mathbf{n},m,z)-\sum_{j=1}^{\mathcal{N}}\widetilde{\Psi}_{(\mathbf{b}_{j},m_{\mathbf{b}_{j}}-\mathbf{b}_{j}^{\prime},m_{\mathbf{b}_{j}^{\prime}})}(\mathbf{n},m,z)\frac{\det(\mathbb{S}^{(j)}(\mathbf{n}_{0},m_{0},z))}{\det(\mathbb{S}(z))}, \tag{105}\] where \[\mathbb{S}(z)_{i,k}=\eta_{m_{\mathbf{b}_{i}^{\prime}},m_{\mathbf{b}_{i}}}\widetilde{\Psi}_{(\mathbf{b}_{k},m_{\mathbf{b}_{k}}-\mathbf{b}_{k}^{\prime},m_{\mathbf{b}_{k}^{\prime}})}(\mathbf{b}_{i},m_{\mathbf{b}_{i}},z)-\eta_{m_{\mathbf{b}_{i}},m_{\mathbf{b}_{i}^{\prime}}}\widetilde{\Psi}_{(\mathbf{b}_{k},m_{\mathbf{b}_{k}}-\mathbf{b}_{k}^{\prime},m_{\mathbf{b}_{k}^{\prime}})}(\mathbf{b}_{i}^{\prime},m_{\mathbf{b}_{i}^{\prime}},z)-\frac{\delta_{ik}}{z}, \tag{106}\] and \(\mathbb{S}^{(j)}(\mathbf{n}_{0},m_{0},z)\) is the same as \(\mathbb{S}(z)\) but with the \(j^{\text{th}}\) column replaced with \[\begin{split}\Big{[}\eta_{m_{\mathbf{b}_{1}^{\prime}},m_{\mathbf{b}_{1}}}\widetilde{\Psi}_{\mathbf{n}_{0},m_{0}}(\mathbf{b}_{1},m_{\mathbf{b}_{1}},z)-\eta_{m_{\mathbf{b}_{1}},m_{\mathbf{b}_{1}^{\prime}}}\widetilde{\Psi}_{\mathbf{n}_{0},m_{0}}(\mathbf{b}_{1}^{\prime},m_{\mathbf{b}_{1}^{\prime}},z),...,\\ \eta_{m_{\mathbf{b}_{\mathcal{N}}^{\prime}},m_{\mathbf{b}_{\mathcal{N}}}}\widetilde{\Psi}_{\mathbf{n}_{0},m_{0}}(\mathbf{b}_{\mathcal{N}},m_{\mathbf{b}_{\mathcal{N}}},z)-\eta_{m_{\mathbf{b}_{\mathcal{N}}},m_{\mathbf{b}_{\mathcal{N}}^{\prime}}}\widetilde{\Psi}_{\mathbf{n}_{0},m_{0}}(\mathbf{b}_{\mathcal{N}}^{\prime},m_{\mathbf{b}_{\mathcal{N}}^{\prime}},z)\Big{]}^{\intercal}.\end{split} \tag{107}\] Evaluating the sum in Eq. (105) explicitly and taking \(\eta_{m_{\mathbf{b}_{i}},m_{\mathbf{b}_{i}^{\prime}}}=\eta_{m_{\mathbf{b}_{i}^{\prime}},m_{\mathbf{b}_{i}}}=\frac{q}{3}\), i.e. the outgoing probability from each state in the defect-free honeycomb lattice, one finds the honeycomb reflective propagator shown in Eq. (20) of the main text. ## Appendix F Honeycomb MFPT and MRT ### Periodic Boundary Conditions Using the \(z\)-transform of Eq. (14) in the main text, one finds the return probability as \[\widetilde{\mathcal{R}}^{(p)_{[i]}}_{\mathbf{n}_{0},m_{0}}(z)=1-\frac{\Omega}{\mathbf{U}_{m_{0}}^{\intercal}\cdot\left[\left[\mathbb{I}-z\mathbf{\mu}(0,0)\right]^{-1}+\sum_{r=0}^{R-1}\sum_{s=0}^{3r+2}\left\{\left[\mathbb{I}-z\mathbf{\mu}\left(\frac{2\pi k_{1}^{[i]}}{\Omega},\frac{2\pi k_{2}^{[i]}}{\Omega}\right)\right]^{-1}+\left[\mathbb{I}-z\mathbf{\mu}\left(\frac{-2\pi k_{1}^{[i]}}{\Omega},\frac{-2\pi k_{2}^{[i]}}{\Omega}\right)\right]^{-1}\right\}\right]\cdot\mathbf{U}_{m_{0}}}. \tag{108}\] Owing to the recurrence of the random walk, the matrix \([\mathbb{I}-z\mathbf{\mu}(0,0)]\) is singular at \(z=1\). Denoting the summand in Eq. (108) as \(\mathbb{M}^{[i]}(r,s)\) and multiplying the top and bottom of the fraction by \(\det([\mathbb{I}-z\mathbf{\mu}(0,0)])\), we find \[\widetilde{\mathcal{R}}^{(p)_{[i]}}_{\mathbf{n}_{0},m_{0}}(z)=1-\frac{\Omega\det([\mathbb{I}-z\mathbf{\mu}(0,0)])}{\mathbf{U}_{m_{0}}^{\intercal}\cdot\left[\mathrm{Inv}\left([\mathbb{I}-z\mathbf{\mu}(0,0)]\right)+\det([\mathbb{I}-z\mathbf{\mu}(0,0)])\sum_{r=0}^{R-1}\sum_{s=0}^{3r+2}\mathbb{M}^{[i]}(r,s)\right]\cdot\mathbf{U}_{m_{0}}}, \tag{14}\] where the notation \(\mathrm{Inv}(\cdot)\) denotes the inverse matrix multiplied by its determinant, i.e. the adjugate.
Evaluating \(\left.\frac{\partial\widetilde{\mathcal{R}}^{(p)_{[i]}}_{\mathbf{n}_{0},m_{0}}}{\partial z}\right|_{z=1}\), and utilising the property \(\det([\mathbb{I}-z\mathbf{\mu}(0,0)])\Big{|}_{z=1}=0\), we obtain, for either shift, \[\mathcal{R}^{(p)}_{\mathbf{n}_{0},m_{0}}=-\frac{\Omega\left(\left.\frac{\partial\det([\mathbb{I}-z\mathbf{\mu}(0,0)])}{\partial z}\right|_{z=1}\right)}{\mathbf{U}_{m_{0}}^{\intercal}\cdot\mathrm{Inv}(\mathbb{I}-\mathbf{\mu}(0,0))\cdot\mathbf{U}_{m_{0}}}. \tag{15}\] Upon inspection of the \(6\times 6\) matrix, one finds \(\left.\frac{\partial\det([\mathbb{I}-z\mathbf{\mu}(0,0)])}{\partial z}\right|_{z=1}=-2q^{5}\) and \(\mathrm{Inv}(\mathbb{I}-\mathbf{\mu}(0,0))=\frac{q^{5}}{3}\mathbb{J}\), where \(\mathbb{J}\) denotes the all-ones matrix. Using these values in Eq. (15), we confirm Kac's lemma, obtaining \(\mathcal{R}_{\mathbf{n}_{0},m_{0}}=6\Omega\), the number of states in the domain. Following the same procedure on the honeycomb first-passage probability, one obtains Eq. (29) of the main text. When \(R=0\), the periodic honeycomb MFPT, Eq. (29), reduces to \[\mathcal{T}^{(p)}_{m_{0}\to m}=\sum_{i=1}^{6}\lambda_{i}\left(\mathbf{U}_{m}^{\intercal}\cdot\mathbf{u}_{i}\cdot\mathbf{u}_{i}^{\intercal}\cdot\mathbf{U}_{m_{0}}-\mathbf{U}_{m}^{\intercal}\cdot\mathbf{u}_{i}\cdot\mathbf{u}_{i}^{\intercal}\cdot\mathbf{U}_{m}\right), \tag{16}\] where the \(\mathbf{u}_{i}\) form an orthonormal basis of right eigenvectors associated with the eigenvalues of \(\mathbb{C}\), \(\lambda_{k}=\sum_{l=0}^{5}c_{l}e^{\frac{2\pi i(k-1)l}{6}}\), \(k\in\{1,\ldots,6\}\). Evaluating Eq. (16), one finds \[\mathcal{T}^{(p)}_{m_{0}\to m}=\frac{5}{q}\left(\delta_{|m-m_{0}|,1}+\delta_{|m-m_{0}|,3}+\delta_{|m-m_{0}|,5}\right)+\frac{6}{q}\left(\delta_{|m-m_{0}|,2}+\delta_{|m-m_{0}|,4}\right). \tag{17}\] As expected from the periodicity of the internal states, Eq. (17) is symmetric in \(|m-m_{0}|\) and equals zero when \(m_{0}=m\). ### Reflective Boundary Conditions In the reflective honeycomb domain, implementing Appendix II of [16] for random walks with internal states, one obtains \[\mathcal{T}^{(r)}_{\mathbf{n}_{0},m_{0}\to\mathbf{n},m}=\mathcal{T}^{(p)}_{\mathbf{n}_{0},m_{0}\to\mathbf{n},m}-1+\frac{\det(\mathbb{F}-\mathbb{F}^{(1)})}{\det(\mathbb{F})}, \tag{18}\] where \[\mathbb{F}_{ij}=\frac{q}{18\Omega}\left[\mathcal{T}^{(p)}_{(\mathbf{b}_{j},m_{\mathbf{b}_{j}}-\mathbf{b}_{j}^{\prime},m_{\mathbf{b}_{j}^{\prime}})\to\mathbf{b}_{i},m_{\mathbf{b}_{i}}}-\mathcal{T}^{(p)}_{(\mathbf{b}_{j},m_{\mathbf{b}_{j}}-\mathbf{b}_{j}^{\prime},m_{\mathbf{b}_{j}^{\prime}})\to\mathbf{b}_{i}^{\prime},m_{\mathbf{b}_{i}^{\prime}}}\right]+\delta_{ij}, \tag{19}\] and \[\mathbb{F}^{(1)}_{ij}=\frac{q\mathcal{T}^{(p)}_{(\mathbf{b}_{j},m_{\mathbf{b}_{j}}-\mathbf{b}_{j}^{\prime},m_{\mathbf{b}_{j}^{\prime}})\to\mathbf{n},m}}{18\Omega}\left[\mathcal{T}^{(p)}_{(\mathbf{n}_{0},m_{0}-\mathbf{n},m)\to\mathbf{b}_{i},m_{\mathbf{b}_{i}}}-\mathcal{T}^{(p)}_{(\mathbf{n}_{0},m_{0}-\mathbf{n},m)\to\mathbf{b}_{i}^{\prime},m_{\mathbf{b}_{i}^{\prime}}}\right], \tag{20}\] for the MFPT, while for the MRT we have \(\mathcal{R}^{(r)}_{\mathbf{n},m}=\mathcal{R}^{(p)}_{\mathbf{n},m}\), as expected. The factor \(\frac{q}{18\Omega}\) in Eqs. (19) and (20) is obtained via the simple multiplication of the periodic steady-state probability, \(\frac{1}{6\Omega}\), and the probability of movement, \(\frac{q}{3}\). ## Appendix G Efficiency of Computational Procedures To obtain propagators in the absorbing and reflective cases, e.g. evaluating Eqs.
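As a sanity check of the \(R=0\) reduction, Eq. (16) collapses to \((\mathbb{C})_{m,m_{0}}-(\mathbb{C})_{m,m}\), which can be compared directly against the closed form of Eq. (17); a minimal numerical sketch (the 0-based internal state indices are a convention of the code, not the paper):

```python
import numpy as np

q = 0.9                                       # any 0 < q < 1
c = [5 - 9/q, 5 - 4/q, 5 - 3/q, 5 - 4/q, 5 - 3/q, 5 - 4/q]
C = np.array([[c[(j - i) % 6] for j in range(6)] for i in range(6)])

def mfpt_R0(m0, m):
    # Eq. (16): the spectral sum reassembles C, leaving C[m, m0] - C[m, m]
    return C[m, m0] - C[m, m]

def mfpt_R0_closed(m0, m):                    # Eq. (17)
    d = abs(m - m0)
    return (5/q) * (d in (1, 3, 5)) + (6/q) * (d in (2, 4))

assert all(np.isclose(mfpt_R0(a, b), mfpt_R0_closed(a, b))
           for a in range(6) for b in range(6))
```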
(15)-(20), or to obtain the splitting probabilities in Eq. (23), one undertakes the numerical inverse \(z\)-transform [45], which consists of evaluating \[f(t)=\frac{1}{2\pi i}\oint\limits_{|z|=r}\frac{\widetilde{f}(z)}{z^{t+1}}dz\simeq\frac{1}{tr^{t}}\sum\limits_{k=1}^{t-1}(-1)^{k}\mathrm{Re}\left[\widetilde{f}\left(re^{\frac{ik\pi}{t}}\right)\right]+\frac{1}{2tr^{t}}\left[\widetilde{f}\left(r\right)+(-1)^{t}\widetilde{f}\left(-r\right)\right], \tag{124}\] where \(0<r<1\) is the radius of the integration contour, and which has an error \(e_{r}\) bounded by \(e_{r}\leq\frac{r^{2t}}{1-r^{2t}}\). The computational cost of this procedure scales as the number of lattice sites in the domain multiplied by the number of defective sites squared, multiplied by the time \(t\). To illustrate, let us consider the hexagonal absorbing propagator. The nested double summation to obtain the periodic propagator scales with the size of the domain, \(\Omega\sim R^{2}\). With \(6R\) defects on the boundary, the time complexity of the numerical inversion scheme scales quartically in \(R\), i.e. \(36R^{2}\Omega t\sim R^{4}t\). The computation required for the honeycomb lattice is very similar, but with the slight additional burden of computing the inverse of the \(6\times 6\) matrix in Eq. (7), which increases the computation time by a scale factor of \(c\) (\(6^{2.37}\lesssim c\leq 6^{3}\)) depending on which numerical scheme is used [58]. To obtain the first passage to multiple targets, one must populate the matrix in Eq. (23), creating a computational cost that scales as \(E^{2}R^{4}t\), where \(E\) is the number of targets. This scaling should be compared to an alternative procedure to calculate the FP to multiple sites, which consists of iteratively solving a Master equation [59]. Convenient implementation of such a procedure in hexagonal geometry would require utilising the relationship between HCC and 3-dimensional Cartesian coordinates, i.e. Eq. (2) modified appropriately to account for the chosen boundary conditions. One would then be required to iteratively solve a sixth-order sparse tensor equation, extract information from the \(E\) targets at each time iteration, and then set those values to zero, which would scale as \(ER^{6}t\). A further advantage of our approach comes when extracting the means of random walk statistics. To obtain MFPTs, by setting \(z=1\), one bypasses the need to compute an inverse \(z\)-transform. As such, for the hexagonal lattice, the complexity of the reflective MFPT scales as \(E^{2}R^{4}\), where \(E\) is the number of targets. If, on the other hand, one were to compute this via an iterative method, an entire transmission probability would need to be obtained, which is far more computationally expensive and introduces the uncertainty of a stopping criterion to approximate the long-time indirect trajectories. Note that performing stochastic simulations instead of using the analytic techniques developed here is also disadvantageous. The main reason stems from the slow reduction of the error in the simulation output as one increases the size of the ensemble. One is then forced to run a large, time-expensive ensemble, which limits the ability to explore the parameter space.
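A direct transcription of the inversion formula (a sketch; the contour radius is chosen from the error bound above, but the specific target accuracy is our assumption, not a prescription of [45]):

```python
import numpy as np

def inverse_z_transform(f_tilde, t, eps=1e-8):
    """Numerical inverse z-transform of Eq. (124); `f_tilde` is any callable
    generating function.  The radius r < 1 is set so that the error bound
    r^(2t)/(1 - r^(2t)) is about `eps`."""
    if t == 0:
        return f_tilde(0.0).real          # the t = 0 coefficient is f~(0)
    r = eps ** (1.0 / (2 * t))
    s = sum((-1) ** k * f_tilde(r * np.exp(1j * np.pi * k / t)).real
            for k in range(1, t))
    return (s + 0.5 * (f_tilde(r) + (-1) ** t * f_tilde(-r)).real) / (t * r ** t)

# quick check against a known pair: f~(z) = 1/(1 - a z)  <->  f(t) = a^t
a = 0.3
assert abs(inverse_z_transform(lambda z: 1 / (1 - a * z), 7) - a ** 7) < 1e-6
```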
2301.08347
Sentiment Analysis for Measuring Hope and Fear from Reddit Posts During the 2022 Russo-Ukrainian Conflict
This paper proposes a novel lexicon-based unsupervised sentimental analysis method to measure the $``\textit{hope}"$ and $``\textit{fear}"$ for the 2022 Ukrainian-Russian Conflict. $\textit{Reddit.com}$ is utilised as the main source of human reactions to daily events during nearly the first three months of the conflict. The top 50 $``hot"$ posts of six different subreddits about Ukraine and news (Ukraine, worldnews, Ukraina, UkrainianConflict, UkraineWarVideoReport, UkraineWarReports) and their relative comments are scraped and a data set is created. On this corpus, multiple analyses such as (1) public interest, (2) hope/fear score, (3) stock price interaction are employed. We promote using a dictionary approach, which scores the hopefulness of every submitted user post. The Latent Dirichlet Allocation (LDA) algorithm of topic modelling is also utilised to understand the main issues raised by users and what are the key talking points. Experimental analysis shows that the hope strongly decreases after the symbolic and strategic losses of Azovstal (Mariupol) and Severodonetsk. Spikes in hope/fear, both positives and negatives, are present after important battles, but also some non-military events, such as Eurovision and football games.
Alessio Guerra, Oktay Karakuş
2023-01-19T22:43:59Z
http://arxiv.org/abs/2301.08347v1
Sentiment Analysis for Measuring Hope and Fear from Reddit Posts During the 2022 Russo-Ukrainian Conflict ###### Abstract This paper proposes a novel lexicon-based unsupervised sentimental analysis method to measure the "_hope_" and "_fear_" for the 2022 Ukrainian-Russian Conflict. _Reddit.com_ is utilised as the main source of human reactions to daily events during nearly the first three months of the conflict. The top 50 "hot" posts of six different subreddits about Ukraine and news (Ukraine, worldnews, Ukraina, UkrainianConflict, UkraineWarVideoReport, UkraineWarReports) and their relative comments are scraped and a data set is created. On this corpus, multiple analyses such as (1) public interest, (2) hope/fear score, (3) stock price interaction are employed. We promote using a dictionary approach, which scores the hopefulness of every submitted user post. The Latent Dirichlet Allocation (LDA) algorithm of topic modelling is also utilised to understand the main issues raised by users and what are the key talking points. Experimental analysis shows that the hope strongly decreases after the symbolic and strategic losses of Azovstal (Mariupol) and Severodonetsk. Spikes in hope/fear, both positives and negatives, are present after important battles, but also some non-military events, such as Eurovision and football games. ## 1 Introduction For many years, war in Europe had been just a dark memory. When, on the 24\({}^{th}\) of February 2022, the Russian Federation declared war on Ukraine, it came as a shock to most people all around the world (Faiola, 2022). It was thought that the presence of NATO and the European Union would be enough to guarantee peace in a short time. Unfortunately, that has not been the case, since neither party is a member of NATO or the EU (both are former members of the USSR), and the conflict is still ongoing as of early 2023. In war, the morale of the nations is one of the most important elements (Pope, 1941), since it is what pushes a country to keep fighting. In the case of a country defending its own land, morale concerns not only the two belligerent countries but mostly the defenders. In fact, at first, the Ukrainian chance for success was seen as tied to the support of western countries (Galston, 2022), a need that was also confirmed by the Ukrainian president himself (France 24, 2022). For this reason, the feelings of the western countries who support Ukraine could be a decisive factor in the future of the conflict. If the western audience were to perceive the conflict as a lost battle which, if dragged on, would have bad repercussions on their daily lives and only cause more suffering to Ukrainians, they could pressure their governments into stopping the support. On the other hand, if there is hope of winning the conflict, it is possible for the governments to keep guaranteeing active support to Ukraine and costly sanctions on Russia. According to the Collins dictionary, hope is an uncountable noun and is described as "a feeling of desire and expectation that things will go well in the future" (Collins Dictionary, 2022b). Conversely, fear is defined as "a thought that something unpleasant might happen or might have happened" (Collins Dictionary, 2022a).
As grammatical objects they may be uncountable nouns; however, the main purpose of this paper is to promote various text mining and sentiment analysis techniques to measure "_Hope_" and its negative counterpart "_Fear_" by using social media posts from Reddit.com, the social news aggregation, content rating, and discussion website. ## 2 Background & Related Works From a general point of view, "sentiment analysis" can be defined as the procedure of utilising techniques such as natural language processing, text analysis and text mining in order to extract and interpret subjective and human-related information. The source of information for sentiment analysis can be diverse, e.g. written text or voice, whilst the entities might be events, topics, individuals, and many more (Liu, 2020). Sentiment analysis is also a broader name for many other tasks such as opinion mining, sentiment mining, and emotion analysis and mining (Nasukawa and Yi, 2003; Dave et al., 2003; Liu, 2020). Text data mining can be defined as the process of extracting information from structured and/or unstructured data mainly made of text (Hearst, 1999). Text mining can be utilised for different purposes and with many techniques, like topic modelling (Rehurek and Sojka, 2010) and sentiment analysis (Feldman, 2013). Text-related sentiment analysis is a versatile approach that helps to automatically extract meaningful information from written text, and it is useful for pursuing many different objectives, such as assessing and monitoring psychological disorders (Zucco et al., 2017), evaluating human behaviours during the football World Cup 2014 (Yu and Wang, 2015), detecting emotions in general (Peng et al., 2021) or using them to draw conclusions on gender differences (Thelwall et al., 2010), or even making predictions on the stock market (Pagolu et al., 2016) and measuring the heterogeneity of investors via their social media posts (Ji and Han, 2022). Considering that social networks continue to expand in terms of the number of users and are capable of reaching audiences from nearly all levels of the community, social media has naturally become the main source of information for text mining and sentiment analysis purposes. Sentiment analysis has been used to interpret data from different social network sources, the most obvious example of which is Twitter (Hu et al., 2013; Yu and Wang, 2015; Giachanou and Crestani, 2016; Ji and Han, 2022). In addition, other popular social networks have also been used as data sources for sentiment analysis purposes, e.g. Facebook (Ortigosa et al., 2014), Reddit (Melton et al., 2021), MySpace (Thelwall et al., 2010) and even YouTube comments (Tripto and Ali, 2018). Despite social media being one of the most common sources of data, sentiment analysis has also found applications in many more text corpora, to name but a few: movie (Thet et al., 2010) or product reviews (Haque et al., 2018), newspaper articles (Balahur and Steinberger, 2009), and emails (Liu and Lee, 2018). Many of the analyses mentioned above focus on classifying a text as positive, negative, or neutral (Pak and Paroubek, 2010), and/or on various scoring systems (Naldi, 2019). It is also possible to employ similar analyses to understand whether a text uses subjective or objective language (Liu et al., 2010), or to interpret which emotions are conveyed (Yadollahi et al., 2017).
The vast amount of data containing many types of human emotions is not only highly exciting for computational data analysis research, but is also useful for human behavioural research. In general, there are two main theories on how emotions are formed in the human brain. The first is the discrete emotion theory, which says that emotions arise from separate neural systems (Ekman et al., 2013; Shaver et al., 1987). In these seminal studies, (Ekman et al., 2013) recognise six basic emotions of anger, disgust, fear, joy, sadness, and surprise whilst (Shaver et al., 1987) recognise anger, fear, joy, love, sadness, and surprise. On the other hand, the dimensional model says that a common and interconnected neurophysiological system causes all affective states (Plutchik and Kellerman, 2013; Lovheim, 2012). In particular, (Plutchik and Kellerman, 2013) recognise anger, anticipation, disgust, fear, joy, sadness, surprise, and trust whilst (Lovheim, 2012) recognises anger, disgust, distress, fear, joy, interest, shame, and surprise. Statistical correlation and independence analyses are also highly important to provide evidence for the aforementioned human behavioural studies. This paper aims to develop a novel lexicon-based unsupervised method to measure the "hope" and "fear" surrounding the Ukrainian-Russian Conflict. Reddit.com is utilised as the main source of human reactions to daily events during nearly the first three months of the conflict. The structure of this social network - Reddit.com - allows for discussing very specific topics (posting in specific subreddits), without tight limitations on the number of characters that can be posted. This makes it easy to mine for opinions about the Ukrainian conflict, to get an idea of what people think about it and how hopeful/fearful they are. To achieve this goal, the top 50 "hot" posts of six different subreddits about Ukraine and news (Ukraine, worldnews, Ukraina, UkrainianConflict, UkraineWarVideoReport, UkraineWarReports) and their relative comments are scraped and a data set is created. On this corpus, multiple analyses are employed. We promote using a dictionary approach, which scores the hopefulness of every submitted user post. The Latent Dirichlet Allocation (LDA) algorithm of topic modelling is also utilised to understand the main issues raised by users and what the key talking points are. This research aims to fill the gap present in the literature regarding opinion mining, specifically for _hope_. The main analysis consists of mapping hope measured with the newly proposed method. In particular, first, the trend of hope over time is monitored. It is later compared with some of the most important events which happened during the study time frame. This ascertains how such events influenced the public perception of the conflict, and provides evidence about the validity of the proposed hope measure. Fear is measured and mapped over the same study time period. In order to measure both fear and hope, a dictionary approach is employed that uses the National Research Council (NRC) Word-Emotion Association Lexicon as a starting point. Furthermore, individual topics extracted via topic modelling are studied to interpret whether there is a correlation with "hope" and, if so, what kind of relationship they present. Sentiment analysis is also employed to track the popularity of individual leaders (Putin and Zelensky) and the Russian and Ukrainian governments.
Finally, stocks such as Gazprom and indices (gas prices and Russian and Ukrainian bonds) are analysed to interpret whether there is a relationship between the developed hope score and the stock market.

## 3 Methodology

### Reddit Data

Reddit has been chosen since its structure makes it easy to group submissions about a specific topic, and because, compared to other social media platforms, the success of content is less influenced by the success of the author. Data for the analysis had to be gathered from Reddit, and the best way to achieve this goal is to use the official Reddit API. To do so, it is necessary to register as a developer on their website, authenticate, register the app, and state its purpose and functionality. Once the procedure is completed, the developer can request a token which has to be specified, along with the client id, user agent, username, and password, every time new data is requested. Six subreddits were chosen for their relevance to the conflict:

* r/Ukraine
* r/worldnews
* r/ukraina
* r/UkrainianConflict
* r/UkraineWarVideoReport
* r/UkraineWarReports

The script, developed in Python, crawls the top 50 posts for each of the subreddits and their comments. Subsequently, it combines the newly gathered submissions with the previously collected ones. It then removes any duplicates using the submission id. For every submission, the following information was obtained:

* title (only for posts): the title of the post
* text: the actual content of the submission
* upvotes
* author
* date
* id: the unique submission id
* flair: categorisation of the post by the author
* type: post or comment
* parent_id
* subreddit

The data collection process started on the 10th of May and was completed on the 28th of July. It was conducted daily around 3.00 pm UK time. More than 1.2 million unique observations were gathered within this time frame.

### Pre-processing Stages

The data obtained through the collection process was not useful on its own. It had to be processed to be analysed and explored. First, some cleaning needed to be done, since not all the observations gathered would be useful. In fact, some of the submissions in the r/worldnews subreddit were not about the conflict. To eliminate the irrelevant ones, only the posts with the flair "Ukraine/Russia" were kept. The only issue is that flair is assigned only to "post" type submissions, but not to comments. Luckily, the structure of Reddit allows using id and parent_id to move upwards to the original post from every comment. Every comment is like a tree branch in a forest-like structure, with every post representing a single tree. Thanks to this principle, it was possible to extract the "ancestor_id" of every submission and use it to assign a flair to the comments. This allowed us to identify and remove the submissions without the relevant flair from the r/worldnews subreddit. The next step is converting all the words in each post to lowercase. Subsequently, we obtain a score for a specific emotion for every submission: the number of words related to the investigated emotion in every entry is counted (a minimal sketch of this step is given below). Another useful piece of information to be extracted is the polarity score. Using a different sentiment analysis approach, the "text" of a post or a comment receives a score that ranges from -1 to 1 according to its sentiment. A score of -1 indicates a very negative meaning, while 1 indicates a very positive one.
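As a concrete illustration, the emotion word-counting step described above can be sketched in a few lines of Python. This is a hedged sketch, not the authors' actual script: the lexicon file name, its tab-separated format, and the dataframe columns are our assumptions.

```python
import pandas as pd

def load_nrc(path="NRC-Emotion-Lexicon-Wordlevel-v0.92.txt"):
    """Parse the NRC lexicon into {emotion: set_of_words}.

    Assumed file format: word<TAB>emotion<TAB>0/1 per line.
    """
    lexicon = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            word, emotion, flag = line.strip().split("\t")
            if flag == "1":
                lexicon.setdefault(emotion, set()).add(word)
    return lexicon

def count_emotion_words(text, emotion_words):
    """Count how many tokens of a lower-cased submission appear in an emotion word set."""
    return sum(token in emotion_words for token in text.lower().split())

lexicon = load_nrc()
df = pd.DataFrame({"text": ["i hope the counteroffensive succeeds",
                            "the shelling is terrifying"]})
df["n_emotion"] = df["text"].apply(count_emotion_words,
                                   emotion_words=lexicon.get("fear", set()))
```

The same counting routine is reused for every emotion of interest by swapping in a different word set.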
The polarity score was extracted using the _sentiment.polarity_ method from the _TextBlob_ python module. Another method from the same module, _sentiment.subjectivity_, was also used; it allows us to understand if the author is stating facts or voicing an opinion. Subjectivity ranges from a score of 0, which indicates a very objective text, to 1, which indicates a very subjective one. One of the problems with dictionary-based sentiment analysis is that it arbitrarily favours long texts. In fact, with a higher word count there are more chances to find the relevant words. Furthermore, it increases the score cap for a submission: a one-word comment could have a maximum score of one, while a hundred-word comment could potentially score one hundred. To solve this issue, a new parameter called "\(w_{length}\)" was created. It stores the emotion count divided by the length of the submission multiplied by 100.

\[w_{length}=\frac{N_{emotion}}{length\times 100} \tag{1}\]

Another improvement to be made regarded the weight of individual opinions, since some opinions are more popular than others. On Reddit, it is easy to understand whether one post is popular by looking at the number of upvotes. To have a better understanding of public opinion, it was relevant to weight the hope score by the number of upvotes. While being an improvement, simply multiplying the "\(w_{length}\)" score by the number of upvotes would disfavour popular comments in unpopular posts. A very successful post would have a very high number of visualisations, comments and upvotes. A comment X, viewed by 100 people and upvoted by 10 (10%) would have a higher score than a comment Y, viewed by 10 people and upvoted by 5 (50% of viewers). To solve this issue, the number of upvotes needed to be weighted by the number of comments on a post, to obtain its relative popularity (as opposed to the absolute one). A parameter storing the number of comments for every post ("\(sub_{in-post}\)") was obtained by counting the submissions for every "ancestor_id". Finally, another parameter, "\(w_{upvotes}\)", was created. It stores the value of "\(w_{length}\)" multiplied by the number of upvotes and divided by "\(sub_{in-post}\)".

\[w_{upvotes}=\frac{w_{length}\times upvotes}{sub_{in-post}} \tag{2}\]

Hence, \(w_{upvotes}\) becomes the emotion score weighted by length, upvotes and relative popularity (a code sketch of this weighting chain is given at the end of this subsection). The flow diagram of the general pre-processing process is depicted in Figure 1.

### Measuring Hope and Fear

Overall interest in the conflict has been measured in two different ways: (i) the number of submissions and (ii) the popularity of the posts. For the former, data were grouped by day, and the number of daily submissions was counted. This includes both posts and comments, giving a good idea of the engagement trend. The latter studies the daily average number of upvotes for each post. Comments were excluded since a popular post is likely to host many comments with just one upvote, which would significantly lower the average. To achieve this goal, a post-only database was created. Data were grouped by date and the mean value for upvotes was computed. Complementing the second method with the first one is very useful to give a proper idea of the general interest trend. The number of posts could have been influenced by a small number of users who are particularly involved with the conflict, while the broader public might not be as interested. This can be tested by looking at the popularity of the posts.
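Stepping back to the pre-processing pipeline, the weighting chain of Equations (1) and (2) can be summarised in a minimal pandas sketch. The column names ("n_emotion", "upvotes", "ancestor_id") are our assumptions, not the authors' actual identifiers.

```python
import pandas as pd

# Toy data standing in for the scraped submissions.
df = pd.DataFrame({
    "text": ["victory is coming", "heavy losses reported near the front"],
    "n_emotion": [1, 2],        # raw emotion word count per submission
    "upvotes": [120, 4],
    "ancestor_id": ["p1", "p1"],
})

df["length"] = df["text"].str.split().str.len()

# Equation (1): normalise the raw count by submission length.
df["w_length"] = df["n_emotion"] / (df["length"] * 100)

# sub_in_post: number of submissions sharing the same ancestor post.
df["sub_in_post"] = df.groupby("ancestor_id")["ancestor_id"].transform("count")

# Equation (2): weight by upvotes relative to the post's overall activity.
df["w_upvotes"] = df["w_length"] * df["upvotes"] / df["sub_in_post"]
```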
Returning to post popularity: popular posts have many upvotes, and to obtain them a submission needs the approval or attention of a large group of users. The main goal of this work is to map hope in western public opinion about the Russo-Ukrainian war. There is a gap in the literature regarding this specific issue: there is, indeed, no scholarly accepted way to automatically measure hope. There are many ways to tackle sentiment analysis, like machine learning or dictionary-based approaches. The first one would have required labelling a dataset, stating what is hopeful and what is not; to properly do that, linguistic expertise is a requirement. On the other hand, using a dictionary-based approach allows using scholarly accepted dictionaries. Hence, this paper adopts a dictionary-based approach. Two issues had to be addressed to complete a dictionary-based analysis: a linguistic one and a technical one. At this point, we ask an important question: what is hope and how do we measure it? According to the Collins dictionary, "Hope is a feeling of desire and expectation that things will go well in the future". Picking apart this definition helps to understand what the elements that construct hope are. The keywords are "feeling", "well" and "expectation in the future". A feeling is something inherently subjective to the person who feels it. "Well", in this case, indicates a positive outcome. An expectation is "something looked forward to, whether feared or hoped for" and is a synonym for anticipation. Since, to the best of our knowledge, there is no "hope" dictionary, one had to be developed. As a starting point, the NRC sentiment and emotion lexicon was used. The NRC Emotion Lexicon is a list of English words and their associations with eight basic emotions (anger, fear, anticipation, trust, surprise, sadness, joy, and disgust) and two sentiments (negative and positive). The annotations were manually done by crowdsourcing. Among the emotions and sentiments catalogued in this dictionary are "anticipation", "positive" and "joy". According to the previous definition, something hopeful needs to be a subjective anticipation of a positive outcome. Hence, the three word lists were cross-referenced to find the words that showed "anticipation" and at least one of "positive" or "joy" (see the sketch at the end of this subsection).

Figure 1: The pre-processing workflow.

Thanks to this procedure, a "hope" dictionary is developed. The lexicon respects two of the three parameters: "anticipation" and "positive outcome". To satisfy the third one, all the Reddit submissions were analysed through the _textblob.subjectivity_ function. It gives a score that goes from 0 (not subjective) to 1 (very subjective). To respect all three parameters, only the submissions that present a minimum score of 0.5 are analysed. Once the dictionary was developed, it needed to be implemented. Every submission is characterised by a "text" column, which contains the message sent by the user. The script counts how many times words present in the "hope" dictionary are also present in the "text". In this way, a raw hope score is obtained; refined as described in the "pre-processing" section of the paper, the final score, denoted \(hope_{score}\), is

\[hope_{score}=\frac{\frac{N_{hope}}{length\times 100}\times upvotes}{sub_{in-post}} \tag{3}\]

Fear was measured in the same way as hope: it is dictionary based and the score is obtained by counting the fear-related words in every submission.
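The dictionary-construction and scoring steps just described can be summarised in a short, self-contained sketch. The toy lexicon below stands in for the real NRC word lists (which would come from the lexicon file, e.g. via the load_nrc helper sketched earlier); the helper and variable names are our own.

```python
from textblob import TextBlob

# Toy stand-in for the NRC word lists.
lexicon = {
    "anticipation": {"hope", "plan", "victory", "await"},
    "positive": {"hope", "victory", "good"},
    "joy": {"victory", "celebrate"},
}

# Hope words: "anticipation" AND at least one of "positive" or "joy".
hope_words = lexicon["anticipation"] & (lexicon["positive"] | lexicon["joy"])

def hope_score(text, upvotes, sub_in_post, min_subjectivity=0.5):
    """Weighted hope score of Equation (3), gated on subjectivity."""
    if TextBlob(text).sentiment.subjectivity < min_subjectivity:
        return 0.0  # objective texts fail the "feeling" criterion
    tokens = text.lower().split()
    n_hope = sum(token in hope_words for token in tokens)
    length = max(len(tokens), 1)
    return n_hope / (length * 100) * upvotes / sub_in_post

print(hope_score("i truly hope this ends in victory", upvotes=10, sub_in_post=5))
```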
The dictionary utilised was the same NRC one used to obtain the "anticipation", "joy" and "positive" words. The \(fear_{score}\) is calculated as

\[fear_{score}=\frac{\frac{N_{fear}}{length\times 100}\times upvotes}{sub_{in-post}} \tag{4}\]

### Leader and Country Analysis

To obtain the leader analysis data, two new databases were created. The first one had only the submissions containing the name "Zelenskyy" or its variations "Zelens'kyj" or "Zelensky". The second one instead included only observations which presented the name "Putin". Differently from the other analyses, hope and fear were not analysed; the focus was on the sentiment polarity score. The polarity method from TextBlob was employed. It gives a score that ranges from -1 to 1, with the former representing a negative opinion and the latter a positive one. After both databases were grouped by day, the mean daily polarity score was computed. Similar to the Zelenskyy vs Putin analysis, two new databases were created for the countries. The first one included only submissions which contained the name "Ukraine", while the second one only had observations which presented the name "Russia". Subsequently, the polarity score was measured using the TextBlob polarity method. Then, observations were grouped by day and the daily average polarity score was computed.

### Stock Market Analysis

After collecting historical prices for four different stocks and financial instruments (UK oil & gas, the Russian Ruble and US Dollar exchange rate, the price of gas and the price of crude oil), they were joined to the "daily" database. Said database contains the weighted average daily value for hope and fear. We developed a linear regression model having the price of the ticker as the dependent variable and either the weighted average daily hope score or the weighted average daily fear score as the independent one. Then, for each data set, we ran this linear regression model and calculated the corresponding parameters.

### Topic Modelling

The aim of this analysis is to understand what the gathered submissions are about through topic modelling. Topic modelling is an unsupervised machine learning technique that allows us to organise, understand and summarise large bodies of text. It can be described as a method for extracting meaning out of textual data by extracting groups of words, or abstract topics, from a collection of documents that best represent the information in the collection. More specifically, this technique returns a probabilistic distribution of different topics of discussion, where each topic is associated with a given document by a certain likelihood score. A document could be about different topics at the same time, in different proportions. We first created a corpus and dropped the less frequent terms in it. Once the text data have been processed, the optimal number of topics (\(K\)) is estimated. Using the _searchK()_ function, the different distributions of \(K\) (from 2 to 10) are evaluated, so that it is possible to interpret the results and make an informed guess on the optimal number of topics for the model. To find the optimal number of topics, it is necessary to plot the distributions of \(K\) topics discovered according to various goodness-of-fit measures such as semantic coherence and exclusivity. Semantic coherence measures the frequency with which the most probable words in each topic occur together within the same document.
Exclusivity, on the other hand, checks the extent to which the top words for a topic are not top words in other topics. Coherence measures how strongly a topic is present and identifiable in documents, whilst exclusivity measures how much topics differ from each other. The goal is to maximise both whilst keeping the likelihood high and the residuals low enough. Then the distribution of the topics in the documents is examined to see if there is a prominence of one topic over the others or if they have similar distributions (a bad sign). Subsequently, a word cloud for every topic is created. It shows in a graphical cloud all the top words, with size changing according to the relative frequency of the words. Using the _labelTopics()_ function, the words that are classified into the topics are inspected to better read and interpret them. This function generates a group of words which summarise each topic and measures the associations between keywords and topics. The most representative documents for each topic are then extracted. This is useful because it helps to give a more concrete idea of what each topic is about, using a real submission as an example. The relationship between metadata and topics is investigated by defining a regression model with the _estimateEffect()_ function, which performs a regression that returns the topic proportions as the outcome variable. The output of the function aims to demonstrate the effect of the covariates on the topics. To conclude, the correlation between topics is studied.

## 4 Experimental Analysis

### Hope-Fear Analysis

Our hope-fear analysis starts by measuring public interest in the war and users' intention to share posts on social media, as shown in Figure 2. Overall social media interest during the conflict has been slowly but steadily decreasing over the whole analysed time window. With an average of 4335 daily submissions, the first days saw plenty of activity, with a peak of 6993 posts in one single day on the 16th of May 2022. In the last part of the explored period, numbers are lower, with a negative peak of only 1080 submissions in one day on the 22nd of July 2022, 5913 less than the maximum. When we evaluate the daily upvote rates in Figure 2, differently from the above analysis, there are no significant changes in the trend of the number of upvotes over time. The daily average itself is very volatile, but the trend remains stable. This could mean that while the users are still receptive and supportive towards the Ukrainian cause (they keep upvoting the most important posts), they are less engaged, posting and commenting less. Given this steady trend in upvotes and the daily number of posts, we calculated the daily hope score by using the expression given in (3). As it is possible to observe from the graph given in Figure 3, the hope score during the analysed time period is decreasing and reaches a nearly steady state after the first half of the observed period in terms of its running-mean visualisation. After the initial big drop, the score seems to stabilise at a lower value. This seems to reflect what happened during the war: the big drop happens around the fall of Azovstal (Mariupol) and Severodonetsk. Subsequently, it mirrors "phase two" of the Russian offensive, with a very slow and steady hope-score trend.
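The running-mean curves summarised in Figure 3 are straightforward to reproduce; a minimal pandas sketch is given below, where the 7-day window and the toy values are our assumptions rather than the authors' stated choices.

```python
import pandas as pd

# Toy daily average hope scores standing in for the real data.
daily = pd.Series(
    [0.86, 0.84, 0.79, 0.77, 0.78, 0.76, 0.75],
    index=pd.date_range("2022-05-10", periods=7, freq="D"),
    name="hope_score",
)

# Running mean of the daily average hope score.
running = daily.rolling(window=7, min_periods=1).mean()
print(running)
```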
This slow, steady behaviour is also reflected by the fact that the central 50% of the observations of the hope score span a range of 0.054, while the total range is 0.264, as can be seen from the descriptive statistics in Table 1.

\begin{table} \begin{tabular}{l l l l l l l l l} \hline \hline & **Interest** & **Upvotes** & **Hope** & **Fear** & **Zelensky** & **Putin** & **Ukraine** & **Russia** \\ \hline Days & 80 & 43 & 81 & 81 & 81 & 81 & 81 & 81 \\ Mean & 4383.06 & 441.38 & 0.7928 & 1.4803 & 0.0949 & 0.0381 & 0.0901 & 0.0402 \\ St. Deviation & 861.06 & 518.90 & 0.0425 & 0.0591 & 0.0417 & 0.0162 & 0.0129 & 0.0108 \\ Minimum & 1080.00 & 1.00 & 0.6434 & 1.2749 & -0.0311 & -0.0053 & 0.0627 & 0.0067 \\ 25\({}^{th}\) percentile & 3840.50 & 20.00 & 0.7650 & 1.4405 & 0.0716 & 0.0254 & 0.0813 & 0.0359 \\ 50\({}^{th}\) percentile & 4412.50 & 253.00 & 0.7926 & 1.4848 & 0.0959 & 0.0390 & 0.0896 & 0.0417 \\ 75\({}^{th}\) percentile & 4815.25 & 659.50 & 0.8189 & 1.5213 & 0.1206 & 0.0476 & 0.0991 & 0.0480 \\ Maximum & 6993.00 & 2464.67 & 0.9075 & 1.6199 & 0.2138 & 0.0728 & 0.1210 & 0.0596 \\ \hline \hline \end{tabular} \end{table}

Table 1: Descriptive statistics for the whole analysis.

Figure 2: Number of submissions & daily average number of upvotes over time.

Similarly, by using the expression developed in (4), we calculated the fear score for the same time period. Despite being pretty volatile, fear remains stable for the whole analysis after the initial couple of days. This is an interesting observation, especially when compared to hope, which decreases over the same time period. The hope and fear results are slightly negatively correlated, with a Pearson correlation index of -0.986. Here, in order to clearly interpret this phenomenon, we plot the running means of hope and fear on the same axes in Figure 3.

Figure 3: Running means for the proposed hope and fear scores.

### Validation of Hope/Fear Scores

In order to validate and better visualise the proposed hope/fear scores, we investigated 18 important events within the experimental period. To this end, observations were grouped by day and the mean hope score was computed. The overall mean of the hope score was also calculated, and a new column containing each day's deviation from the overall mean was created. The important events chosen for the validation analysis are given below:

1. **May 9** - failed Russian crossing of the Siverskyi Donets River. Ukrainian sources declare that during the crossings, 70 heavy Russian units were destroyed or lost.
2. **May 13** - American-Russian talks. Lloyd Austin (American secretary of defence) and Sergei Shoigu (Russian minister of defence) held telephone talks for the first time since the start of the invasion.
3. **May 15** - Ukraine won the Eurovision song contest thanks to an overwhelming popular vote. "Stefania" by the Kalush Orchestra won with 192 votes from the jury (4th place) and 439 from the televote. Second place went to the United Kingdom with 466 total votes.
4. **May 17** - Azovstal, the steel plant of Mariupol, is lost. It was the last stand of the Azov Battalion, a controversial group which contained many of the best trained Ukrainian soldiers. This deprived Ukraine of a strategically important port and many soldiers, and allowed the Russians to unify the front.
5. **May 27** - 90% of Severodonetsk is destroyed. The city is of big strategic importance, since it could allow the Russians to encircle many Ukrainian units in Donbass.
6. **May 29** - First visit of Zelenskyy outside of Kiev.
This visit had the purpose of showing that the president was not afraid of Russia taking him out.
7. **May 30** - Russian troops enter Severodonetsk.
8. **June 5** - Ukraine is eliminated in the World Cup qualifiers after losing 1-0 to Wales, with a goal scored by Gareth Bale.
9. **June 12** - Ukrainian supplies and planes destroyed.
10. **June 16** - sinking of a Russian ship. The Spasatel Vasily Bekh tug was sunk near Snake Island in the Black Sea.
11. **June 17** - Putin's speech at the economic forum in St. Petersburg.
12. **June 22** - Ukrainian drone strike on a Russian oil refinery.
13. **June 26** - 14 missiles hit Kiev, damaging several buildings and a kindergarten.
14. **July 6** - the Russian Duma prepares to move to a war economy, which would allow it to order companies to produce war supplies and make workers work overtime.
15. **July 7** - Zelenskyy gave a speech on the effectiveness of western artillery. Furthermore, a technical pause of the Russian offensive started, with the aim to regroup.
16. **July 14** - start of the volunteer mobilisation, which requires, by the end of the month, 85 federal subjects to recruit 400 men each.
17. **July 16** - the US House of Representatives approves a bipartisan bill that would grant $100 million in funds to train Ukrainian pilots to fly US fighter jets.
18. **July 23** - 4 Kalibr missiles hit Odessa. Of those 4, 2 were intercepted; the other 2, according to Russian sources, destroyed a warship and a warehouse containing missiles.

The graph in Figure 4 shows how far above or below average hope scored during the analysed period. Many of the spikes, both negative and positive, coincide with real-world events which had an impact on the war or on the morale of western public opinion. Some of the positive events include, but are not limited to: the Ukrainian victory in the Eurovision contest (3), financial help packages from the United States (17) and the sinking of Russian ships (10). Negative ones include, but are not limited to, the loss of Azovstal (4), the fall of Severodonetsk (5) and the elimination of Ukraine from the World Cup 2022 (8). As it is possible to observe in Figure 4, most of the biggest positive spikes are concentrated in the first days, when phase two of the war had recently started. After the fall of Azovstal and Severodonetsk, a slower and more intense phase of the war starts, with the Russians advancing slowly but steadily. This is also reflected in the graph, where we can observe few spikes and many observations below average for the whole duration of June. In July there was more movement: the United States developed a plan of military and financial aid to Ukraine, and Turkey managed to broker a trade deal between Ukraine and Russia which would allow Ukraine to export grain, avoiding famine in many countries (mainly in Africa). At the same time, the Russian advance keeps proceeding relentlessly, as shown by the negative spikes at the end of the month.

### Country-Leader Analysis

In this part of the experiments, we try to measure public interest in countries (Ukraine - Russia) and leaders (Zelenskyy - Putin). As previously stated, the metric for popularity refers to the sentiment "polarity". The first and most obvious consideration that emerges from this analysis, presented in Figure 5, is that Zelenskyy, the president of Ukraine, presents a higher sentiment than Putin, president of Russia. As it is possible to notice, Zelenskyy is consistently more popular than his Russian counterpart for the whole analysed period.
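A sketch of how these per-leader daily polarity series can be computed follows; the name filters and column names are assumptions based on the description above, not the authors' actual code.

```python
import pandas as pd
from textblob import TextBlob

# Toy data standing in for the scraped submissions.
df = pd.DataFrame({
    "date": pd.to_datetime(["2022-05-10", "2022-05-10", "2022-05-11"]),
    "text": ["Zelensky gave a strong speech",
             "Putin threatens further escalation",
             "Zelenskyy visited the front line"],
})

# Split the database by leader name (and its variations).
zelensky = df[df["text"].str.contains("Zelenskyy|Zelensky|Zelens'kyj", case=False)]
putin = df[df["text"].str.contains("Putin", case=False)]

def daily_polarity(frame):
    """Mean TextBlob polarity per calendar day."""
    polarity = frame["text"].apply(lambda t: TextBlob(t).sentiment.polarity)
    return polarity.groupby(frame["date"].dt.date).mean()

zelensky_daily = daily_polarity(zelensky)
putin_daily = daily_polarity(putin)
```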
Concretely, the average polarity score for the Ukrainian president is 0.097, 2.6 times more than that of Putin, who scores a mere 0.037. Despite being less popular, the Russian president is more interesting to the Reddit community than Zelenskyy: his name is cited 30663 times in the database, about 7.6 times more than his Ukrainian counterpart, who is cited only 4055 times. Another interesting point is that, despite being relatively volatile, the trend seems to be consistent during the analysed period. Neither of the two leaders presents an increase, nor a decrease, in popularity. Zelenskyy shows a higher volatility than Putin, but this is likely attributable to the smaller sample size. The small sample size also causes the big outliers in the Zelenskyy graph. For example, on the fourteenth of July 2022, the Ukrainian president shows a polarity score of -0.031, 0.128 below the average score. There are only 49 submissions naming Zelenskyy on that day. One of the first ones accuses the president of being a Nazi and of having violated human rights in Donbass. Many comments answer these accusations by defending the president, saying for example: _"this is such a massive false equivalence. periodically i bother responding to it. here is my copy-paste nobody ever wants to engage with. non-extensive list examples of ways in which i think it's possible to differentiate the two cases:* zelensky has never used chemical weapons to suppress a revol against his rule by an ethnic minority, * the us did not execute civilians en mass in any captured town [...]"_ or: "_this is a ludicrous comparison. whilst i don't agree with what the west did in iraq in early 2000's....sadam hussein was committing genocide against the kurds, systematically slaughtering hundreds of thousands of people because of their race/religion. zelensky is not doing this, he is a democratically elected official and ukraine are a peaceful nation. so the idea that we (the west) are not allowed to comment on the russian invasion of ukraine because we've done something similar is lazy, ridiculous and without being rude to you, a tad stupid._"

Figure 4: Deviation from the average hope.

Most of those comments are saying that Zelenskyy and Ukraine did not commit atrocities, as affirmed by someone else. But (as is later explained in the limitations part), many words with negative sentiment like "suppress", "execute", "genocide", "slaughtering", "lazy", "stupid" are used and the context is not interpreted. Having a big sample prevents these context-based exceptions from happening. For this specific day, the sample size is relatively small and is not able to counterbalance this single thread. Another interesting insight is that there is basically no correlation between the popularity of Zelenskyy and Putin: the Pearson correlation index is -0.03. It could have been possible to hypothesise a negative correlation between the two, perhaps connected to the tides of the war. For example, if Russia were making gains, Putin's popularity could be increasing while Zelenskyy's would be decreasing. But this hypothesis is disproven by the evaluated data in the given time period. This could be explained by the fact that Putin's popularity would not increase with a successful war, since he is mostly seen as the enemy. Similar to the Putin vs Zelenskyy analysis above, it can be observed from Figure 5 that Ukraine scores evidently better than Russia.
In fact, the former consistently scores more than the latter, with an average polarity of 0.077 compared to an average of 0.044. In the same fashion as the previous analysis, Russia is cited far more frequently than Ukraine: the former is cited 137419 times, whilst the latter 89736 times. This is found pretty interesting since, despite five of the six analysed subreddits being named after Ukraine, the real focus is Russia. The two trends seem very similar; in fact, the Pearson correlation is 0.55. This might be because the two countries are very often cited in the same submission, hence presenting identical polarity scores. To solve this issue, two new databases, which respectively contained "Ukraine" but not "Russia" and vice versa, were created. In this process, 33790 observations for each database were dropped, removing more than one third of the original "Ukraine" database. The new numbers highlight even more focus on Russia, which now counts almost double the number of citations of Ukraine, 103629 against 55946. The new data shows an increase in the gap between the two countries: Ukraine, with an average score of 0.09, scores more than double Russia, whose polarity decreases to 0.04. As expected, the Pearson correlation index also decreases significantly, to 0.26, which is still surprisingly high.

Figure 5: (LEFT) Polarity score for the two leaders. (RIGHT) Polarity score for the two countries. MA graphs for each figure refer to the 7-day moving average of the original data.

### Stock Market Analysis

Four different tickers, regarding four different aspects connected to the war, were chosen: (1) United Kingdom Oil and Gas stock price, (2) Ruble - US Dollar exchange rate, (3) oil price, and (4) gas price. The most influential one is the gas price, which has been used as leverage for a good chunk of the conflict. Many western countries, including but not limited to Italy and Germany, provide weapons and support to Ukraine, but used to rely heavily on Russian gas for their energy needs. Russia has manoeuvred the gas price and supply (for example, closing the Nord Stream 1 gas pipeline) to try to weaken the support for the Ukrainians and lift the sanctions imposed on it. Furthermore, through the increase of the gas price, Russia secured record earnings and export levels. As always in the stock market, prices are not only a reflection of current demand and supply, but also of the projected demand and supply in the future. For all those reasons, we found it interesting to explore whether a relationship existed between the hope and fear towards the conflict and the price of gas. The oil price was also chosen for similar reasons. Oil is another combustible fuel which can be used to produce electricity; if natural gas becomes scarce, it is one of the most likely substitutes for many uses. Furthermore, the share controlled by Russia is not big enough to allow it to manipulate the prices in the same way it does with gas. Considering that the energy crisis could influence the perception of the conflict in European public opinion, it is interesting to also explore the oil price's relationship with the proposed hope and fear scores. One of the very first consequences of western sanctions on Russia was the fall of the Ruble. Much speculation was made on how this would affect the Russian economy and its ability to repay its debts.
The matter became even more interesting when the Ruble started to climb back, even reaching higher values than in the pre-conflict period. Since Russia sells a significant part of its gas in Rubles, the swings in the value of the Ruble are very important to the Russian economy and are not to be underestimated. The perception of the stability of the country, and hence the trust of the market in its currency, could be put in jeopardy by losing this war. This is a good reason to expand the study to the exchange rate between the US Dollar and the Russian Ruble. The United Kingdom has been one of the most supportive countries of Ukraine since the beginning of the war. Differently from Italy and Germany, it is not part of the European Union, and it has rich reserves of natural gas and oil. United Kingdom Oil and Gas is one of the main stocks in the British energy market. It could prove insightful to understand whether there is a relationship between hope and fear towards the Ukrainian war and the stock price of a company which operates in a country involved in the war, is influenced by the price of gas and oil, but has access to national reserves and is less dependent on Russia. We ran a linear regression analysis between each of these stock market elements and the proposed hope/fear score. Evaluating the results, we conclude that, in terms of \(p\)-value, there was no significant correlation between the hope/fear score and the oil price, the Ruble & US Dollar exchange rate, or UK Oil-Gas. A similarly insignificant relationship was also obtained between the fear score and gas prices. However, a significant relationship was found between hope and the gas price. To interpret the relationship between the hope score and gas prices, a linear regression was run, having the average daily hope score as the independent variable and the daily closing price as the dependent one. The regression presents a \(p\)-value of 0.018, showing the significance of the model, whilst a relatively low \(R^{2}\) value of 0.1 is obtained. Furthermore, the Pearson correlation between the two variables is -0.32. As expected, the correlation is negative: if hope goes up, gas prices go down, or vice versa (see Figure 6-(Left)). We also investigated the relationship between all the stock variables as regressors and the hope/fear score as the target. Considering a significance threshold of 0.05 for the \(p\)-value, only the gas and UK Oil-Gas prices returned a significant relationship with the hope score, whilst the fear score does not present a significant relationship with any of the regressors. Evaluating the results presented in Figure 6-(Right), we can conclude that there is a clear relationship between the hope score and the two-regressor model (Gas & UKOG), with an \(R^{2}\) value of 0.202 and, again, an inverse relationship. This analysis means that the public hope for the result of the conflict is not the primary driver of gas and UKOG prices, but there is indeed a relationship to be explored.

### Topic Modelling

As described in the previous sections, we now investigate the Reddit data set in terms of topic modelling. To achieve this goal, we utilised the R programming language and several external R packages:

* **NLP:** provides the basic classes and methods for natural language processing and serves as a base for the following packages.
* **openNLP:** "an interface to the Apache OpenNLP tools (version 1.5.3).
The Apache OpenNLP library is a machine learning based toolkit for the processing of natural language text written in Java. It supports the most common NLP tasks, such as tokenization, sentence segmentation, part-of-speech tagging, named entity extraction, chunking, parsing, and coreference resolution (The Apache Software Foundation, 2009)."
* **quanteda:** "framework for quantitative text analysis in R. Provides functionality for corpus management, creating and manipulating tokens and ngrams, exploring keywords in context, forming and manipulating sparse matrices of documents by features and feature co-occurrences, analysing keywords, computing feature similarities and distances, applying content dictionaries, applying supervised and unsupervised machine learning, visually representing text and text analyses, and more (Benoit et al., 2018)."

Figure 6: (Left) Scatterplot showing the gas price and the hope score. In red, the regression line. (Right) 3D scatter plot of the 2-regressor model fit.

* **dplyr:** "is a grammar of data manipulation, providing a consistent set of verbs that help to solve the most common data manipulation challenges (Wickham et al., 2022)."
* **tidytext:** "provides functions and supporting data sets to allow conversion of text to and from tidy formats, and to switch seamlessly between tidy tools and existing text mining packages (Silge and Robinson, 2016)."
* **qdap:** "automates many of the tasks associated with quantitative discourse analysis of transcripts containing discourse. The package provides parsing tools for preparing transcript data, coding tools and analysis tools for richer understanding of the data (Rinker, 2020)."
* **plotly** and **ggplot2:** are packages used for creating graphics for the analysis.
* **ggthemes:** is a package that enables better aesthetics for graphs.
* **wordcloud:** is a package that allows the creation of wordcloud-type graphs.
* **stm:** "The Structural Topic Model (STM) allows researchers to estimate topic models with document-level covariates. The package also includes tools for model selection, visualisation, and estimation of topic-covariate regressions (Roberts et al., 2019)."

Structural Topic Modelling (STM) is a topic modelling method. It is a semi-automatic approach that allows us to incorporate metadata, which represents information about each document, into the topic model. STM aims at discovering topics, estimating their relationship to document metadata, and gathering information on how the topics are correlated.

#### 4.5.1 Estimating the optimal number of topics

After the corpus is created, the first step is to extract the diagnostics and estimate the optimal number of topics. Whilst estimating the optimal number of topics, our aim is to maximise two important diagnostics, _exclusivity_ and _coherence_, whilst keeping the _likelihood_ high and the _residuals_ low enough. Having nine topics would ensure that there would be little mixing between the topics; on the other hand, the data would then be very hard to interpret and it would be difficult to extract useful information from it. For this reason, a little more importance is given to coherence. We present the diagnostic results for the optimal number of topics in Figure 7-(a). Examining Figure 7-(a), we can see that 7 and 8 topics appear to be the optimal choices according to the likelihood, residual and coherence-exclusivity analysis. We settle on 7 topics as the optimal model based on its coherence value compared to the 8-topic model.
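The paper performs this sweep with R's stm::searchK; as an illustrative substitute (not the authors' code), an analogous topic-count search can be run in Python with gensim, scoring each candidate \(K\) by coherence:

```python
from gensim.corpora import Dictionary
from gensim.models import CoherenceModel, LdaModel

# Toy tokenised submissions standing in for the real corpus.
texts = [["ukraine", "russia", "war", "front"],
         ["nato", "sanctions", "gas", "europe"],
         ["artillery", "tanks", "weapons", "drones"]]
dictionary = Dictionary(texts)
corpus = [dictionary.doc2bow(t) for t in texts]

scores = {}
for k in range(2, 11):  # candidate topic counts, as in searchK(2..10)
    lda = LdaModel(corpus=corpus, id2word=dictionary,
                   num_topics=k, random_state=0, passes=5)
    cm = CoherenceModel(model=lda, texts=texts,
                        dictionary=dictionary, coherence="c_v")
    scores[k] = cm.get_coherence()

best_k = max(scores, key=scores.get)  # pick the most coherent model
```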
Following this selection, two of the nine candidate topics are discarded and seven topics are chosen for this analysis:

* Topic 1: Geopolitical arguments
* Topic 2: Russia and government
* Topic 3: Morality of war
* Topic 4: War atrocities
* Topic 5: Submissions in Russian
* Topic 6: Foreign submissions
* Topic 7: Weapons

Examining Figure 7-(b), the quality of the topics is investigated in the same way as before: ideally, both coherence and exclusivity would be maximised. In this case it is possible to observe that Topic 5 greatly outperformed all the other topics, especially in coherence. This happens because those observations are all in Russian, which makes them very different from the rest. Topics 1 and 3 score very well in terms of coherence, whilst Topics 2 and 7 are the worst performing ones overall. Topic 6, on the other hand, is the one that distinguishes itself the most in terms of exclusivity, despite having a relatively low semantic coherence. The distribution of the topics is analysed in Figure 7-(c): Topic 3 is the most prominent topic, describing around 20% of the database, while Topics 5 and 7 are the least popular ones, scoring around 10% each. Considering the correlation analysis plot in Figure 7-(d), we can conclude that there appears to be no correlation between any of the topics.

#### 4.5.2 Topic 1: Geopolitical arguments

In Table 2, the linear regression modelling results of each topic with the hope and fear scores are presented.

\begin{table} \begin{tabular}{l c c c c c c c} \hline **Results** & **Topic 1** & **Topic 2** & **Topic 3** & **Topic 4** & **Topic 5** & **Topic 6** & **Topic 7** \\ \hline Intercept & 0.1112 & 0.1943 & 0.2187 & 0.1759 & 0.0800 & 0.1206 & 0.0994 \\ \(hope_{score}\) & 0.0036 & -0.0035 & -0.0079 & 0.0056 & -0.0040 & -0.0091 & 0.0152 \\ \(fear_{score}\) & 0.0068 & -0.0051 & -0.0085 & -0.0098 & 0.0060 & 0.0137 & -0.0031 \\ \hline \end{tabular} \end{table}

Table 2: Topic Modelling Analysis Results.

Figure 7: (a - Top Left) Model selection results with four distinct diagnostics. The size of each marker relates to the residual diagnostic values. (b - Top Right) Exclusivity and coherence for the individual topics. (c - Bottom Left) Topic proportions in the dataset. (d - Bottom Right) Correlation between topics.

It can be seen that Topic 1 is positively correlated with both hope and fear. In addition, as shown in Fig. 8, Topic 1 is mostly about geopolitical argumentation. The most used words are "Ukraine", "Russia" and "will", showing speculation about the conflict. Other popular words are "NATO", "china", "Germany", "support" and "sanctions", a sign of how the broader picture is also depicted in the conversation. Furthermore, "weapons", "soldiers" and "nuclear" are also present, demonstrating an attention to battles. The correlation to both hope and fear could be explained by the word "will": if future possibilities are explored, they might be about positive events, hence increasing the hope score, or about scary ones, hence increasing the fear score.

#### 4.5.3 Topic 2: Russia and government

Topic 2 is negatively correlated with both hope and fear. Topic 2 seems to consist of negative opinions about the Russians and governments. There are many words which refer to them, such as "Putin", "Russian", "Russians", "government", "left" and "right". Other popular words are "f***", "bad", "wrong", "f***ing", "old" and "stop". The topic is not very clear due to its low internal coherence.

#### 4.5.4 Topic 3: Morality of war

Topic 3 is negatively correlated with both hope and fear.
Topic 3 seems to be about the moral consequences of the war. Investigating randomly sampled submissions as examples shows us that the community discusses (1) the morality of dealing economically with either side of the war, (2) the positive consequences of globalisation, and (3) the idea of leaving internal civic debates in Ukraine for later, while making common front against the common foe. Being moral considerations, they are not directly relevant to hope or fear; for this reason it is natural that they score low in both.

#### 4.5.5 Topic 4: War atrocities

Topic 4 is positively correlated with hope, but negatively with fear. Topic 4 is about war atrocities and their devastating effects. Unexpectedly, for this topic we obtained a positive correlation with hope and a negative one with fear.

#### 4.5.6 Topic 5: Submissions in Russian

Topic 5 is negatively correlated with hope, but positively with fear. Topic 5 is composed of the submissions in the Russian language. It is negatively correlated with hope since there are no Russian words in the "hope" dictionary. It is probably positively correlated with fear because the few English words present appear in the fear dictionary (a similar mechanism to Topic 6 below).

#### 4.5.7 Topic 6: Foreign Submissions

Topic 6 is negatively correlated with hope but positively correlated with fear. Similarly to Topic 5, Topic 6 is mainly composed of submissions in foreign languages. Most of them will score 0, since their words will not be present in either dictionary. Potentially, some words that foreign languages share with English created a positive correlation with fear.

#### 4.5.8 Topic 7: Weapons

Topic 7 is positively correlated with hope, but negatively with fear. Topic 7 is about weapons. Many of the words shown reflect that: "tanks", "artillery", "weapon", "missiles", "gun", "range", "modern", "expensive", "drone". Others also regard the military in a broader sense, like "logistic", "training" and "equipment". Finally, "good" is the most used word in the topic. This suggests that the superior Ukrainian equipment reassures the public and increases their hope.

Figure 8: Wordcloud representation for each topic.

## 5 Discussions

In Ukraine, many geopolitical themes are unfolding, and many interests are conflicting. Considering how high the stakes are, it is imperative for a good politician to use every tool at their disposal to direct public opinion where it is most needed. During the coming winter period in the northern hemisphere, the stakes will be particularly high, since electricity and gas demand will peak, and the gas price will likely be a strong weapon for Russia. Increased gas prices will have direct and indirect effects on prices for the public. Heating prices would go up significantly. The indirect effect would come from manufactured goods: with the increase in electricity bills would also come an increase in the price of manufactured products. As for the analysed period, support towards Ukraine and Zelenskyy is still strong. The real test will be during the winter, when the average western Joe might be strongly affected by the consequences of the war, sometimes not even being able to afford heating and food. At this point it is possible that the public might ask to end the war at any cost. This might cause the end of European support in weapons and logistics, which would generate huge difficulties for Ukraine.
If the world of politics wants to keep support for the war, it might employ two strategies. The first one is to try to increase the hope of the people towards a Ukrainian victory. To do so, they should talk about the superiority of the weapons provided by the west, and how good and effective they are compared to Soviet-era Russian ones. This topic was in fact positively correlated with hope and negatively correlated with fear. The second one would be to try to instil fear towards Russia. To do so, politicians and the news could start using those geopolitical arguments that prove the dangers of the country. This might be a double-edged strategy: it could undermine the public's faith in victory and jeopardise overall morale. If this happened and people started to see Russia as an unstoppable danger, they could ask for a fast end to the conflict, since defeat would be seen as inevitable. Another thing that would be deleterious are periods of excessive stagnation. They might cause a decrease in interest which, coupled with possible severe economic consequences, might frustrate the public. The risk is that people would see their lives worsened in exchange for no visible progress.

## 6 Conclusions

The results of this study can be seen as the development of a way to measure hope by exploiting social media posts of the public all over the world, and as an insightful overview of public opinion on the Russo-Ukrainian conflict, focused predominantly on hope. The first analysis regards the interest towards the conflict. A steady decline in the number of submissions is observed, while the average number of upvotes for the posts neither increases nor decreases. This shows a relative loss of interest, due to the stagnation of the news. In fact, the analysis takes place mostly during the "phase two" of the war, characterised by a slow but certain Russian advance. On the other hand, the average number of upvotes remains constant, demonstrating that the potential interest is still present. The public is still there; it just needs something new to get engaged with and participate more actively again. The second analysis is about hope. Following the events of the war, hope strongly decreases after the symbolic and strategic losses of Azovstal (Mariupol) and Severodonetsk. After that, it stabilises in its slow decrease, mirroring the tides of phase two of the conflict. Spikes in hope, both positive and negative, are present after important battles, but also after some non-military events, such as Eurovision and football games. This is an interesting insight, because it shows how morale is not only formed by the objective results of the war, but also by emotional events. The third one regards fear. Its trend is stable during the entire analysis, meaning that the tides of the war itself did not influence it significantly. There is a minor negative correlation with hope; it is interesting to notice that the two are not strictly inversely correlated. This means that hope and fear could coexist in public opinion in specific instances. The fourth one analyses the popularity of the two countries and their leaders, using a polarity score. The most obvious consideration is that Zelenskyy and Ukraine constantly outperform Putin and Russia. Despite being relatively volatile, the trend seems to remain constant. A key takeaway from this is that a strong opinion has formed and, without serious upheavals, it will not change. In the fifth one, the relationship between fear/hope and relevant financial items is explored.
A significant (negative) relationship between hope and the gas price was found: with an increase in hope, gas prices would decrease, and vice versa. A reason for that could be the hope that a Ukrainian victory in the war would ease the gas flow from Russia to Europe again. Since this is a preliminary analysis based on a limited amount of information, more studies would need to be done to fully explore this relationship. The sixth one is the topic modelling. The submissions in the English language are about five different topics: geopolitical arguments, Russia and government, morality of war, war atrocities and weapons. Those are the topics which caught the public eye the most in the analysed period. Geopolitical arguments are positively correlated with both hope and fear. Morality of war and Russia and government are negatively correlated with both hope and fear. Discussions about weapons are positively related to hope and negatively to fear, and surprisingly the same applies to war atrocities.
2308.04076
DataTales: Investigating the use of Large Language Models for Authoring Data-Driven Articles
Authoring data-driven articles is a complex process requiring authors to not only analyze data for insights but also craft a cohesive narrative that effectively communicates the insights. Text generation capabilities of contemporary large language models (LLMs) present an opportunity to assist the authoring of data-driven articles and expedite the writing process. In this work, we investigate the feasibility and perceived value of leveraging LLMs to support authors of data-driven articles. We designed a prototype system, DataTales, that leverages a LLM to generate textual narratives accompanying a given chart. Using DataTales as a design probe, we conducted a qualitative study with 11 professionals to evaluate the concept, from which we distilled affordances and opportunities to further integrate LLMs as valuable data-driven article authoring assistants.
Nicole Sultanum, Arjun Srinivasan
2023-08-08T06:21:58Z
http://arxiv.org/abs/2308.04076v1
# DataTales: Investigating the use of Large Language Models for Authoring Data-Driven Articles

###### Abstract

Authoring data-driven articles is a complex process requiring authors to not only analyze data for insights but also craft a cohesive narrative that effectively communicates the insights. Text generation capabilities of contemporary large language models (LLMs) present an opportunity to assist the authoring of data-driven articles and expedite the writing process. In this work, we investigate the feasibility and perceived value of leveraging LLMs to support authors of data-driven articles. We designed a prototype system, DataTales, that leverages a LLM to generate textual narratives accompanying a given chart. Using DataTales as a design probe, we conducted a qualitative study with 11 professionals to evaluate the concept, from which we distilled affordances and opportunities to further integrate LLMs as valuable data-driven article authoring assistants.

Human-centered computing - Visualization - Visualization design and evaluation methods

## 2 Related Work

**Authoring data-driven articles.** Prior work has explored markup language-based frameworks to support the authoring of interactive articles on the web [3, 13, 14]. Another set of systems like Kori [15], VizFlow [34] and DataParticles [2] adopt a more graphical and mixed-initiative approach and allow authors to configure interactive links between text and charts while authoring data articles. Besides systems that explicitly focus on content drafting and presentation fine-tuning, another body of work also includes data fact- or insight-recommendation systems that suggest singleton takeaway statements for visualizations during the data exploration phase to help authors identify talking points in their articles [19, 32, 37, 29]. Our work furthers the line of research on authoring data-driven articles by investigating the use of contemporary LLMs to generate ideas for textual content that authors can further edit.

**LLMs in data visualization and writing.** Recent advances in model architectures, performance, and availability have led to a surge of LLM-based applications for writing support. These include creative writing support tools such as plot suggestions [30], journalistic angle ideation [24] and co-writing of theater scripts [21], as well as technical writing support such as argumentative writing [41], scientific writing [9], and reverse outlining for manuscript revision [4]. Within the data visualization space, LLMs have been used to power natural language interfaces [28] for visualization authoring [36]. Only a few works have looked at text content generation in a data visualization context, to create data stories from a set of user-provided keyframe data facts [35], and natural language summaries of a given chart for accessibility purposes [23]. To our knowledge, our work is the first to look specifically at leveraging LLMs for data-driven articles.

## 3 DataTales

Fig. 1 shows the DataTales user interface and Fig. 2 summarizes the system workflow. The system is implemented as a web application using a React and Python Flask setup. It features a curated list of datasets, with the respective charts rendered using D3.js. For the language model, we use the OpenAI API for the 'gpt-3.5-turbo' model 1.

Footnote 1: The latest model available for development at the time of this research.

Below we detail the workflow steps and core system features.
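To make the client-server flow concrete before walking through the features, a minimal Flask endpoint in the spirit of this setup is sketched below. The route name, payload fields, and prompt wording are our assumptions, not DataTales' actual API; the call uses the pre-1.0 openai Python SDK that matches the 'gpt-3.5-turbo' era.

```python
from flask import Flask, jsonify, request
import openai  # assumes openai<1.0, where ChatCompletion.create is available
                # and OPENAI_API_KEY is set in the environment

app = Flask(__name__)

@app.route("/generate", methods=["POST"])
def generate():
    # Expected (assumed) payload: {"chartType": ..., "chartData": [...]}
    chart = request.get_json()
    prompt = (f"Write a narrative based on a {chart['chartType']} "
              f"showing the following data: {chart['chartData']}")
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    story = response["choices"][0]["message"]["content"]
    return jsonify({"story": story})
```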
**Chart and user annotations.** DataTales covers a wide array of charts commonly found in data-driven reports and articles, including bar charts with variants like stacked and grouped bars, scatterplots, single- and multi-series line charts, and choropleth maps. The current implementation contains a set of predefined charts covering a breadth of datasets including demographic survey responses, unemployment rates, automobile data, and Olympic medal winner history, among others. When asking the system to generate a story (via the Generate button, Fig. 1B), authors can have the entire chart considered for input or optionally add annotations to guide the LLM to emphasize specific data points or ranges when generating its response. For instance, Fig. 2A shows an example where an author highlights two bars they want the system to focus on when generating a story. DataTales supports various annotations including mark selection, color legend range selection, and axis range selection, which can be combined for more complex guidance [25].

**Prompt generation.** The key idea underpinning DataTales is that a system can take a chart or an annotated chart as input and leverage an LLM to recommend data-driven narratives. To this end, we iterated on several template variations to generate the prompts that are fed into the LLM. Specifically, we explored different features to include (or exclude) in the prompt such as the chart type, encodings, analytic tasks associated with specific charts [39, 10, 26], the chart title, the underlying dataset metadata, user annotations, and story length, among others. Besides experimenting with features, we also tried different phrasings to assess if the order of features or the grammar of the prompt notably impacted the generated narrative. We generated 10-20 narratives for each chart type with different combinations of these features. Inspecting the results, we iteratively excluded or combined features that yielded redundant results. For instance, we noticed that including the encoding information in the prompt generated statements reiterating the chart. We thus excluded encoding details from the prompt as such statements tend to offer little value to readers [20]. Similarly, we initially experimented with including analytic tasks (e.g., finding extremes, identifying correlations). However, we noticed that including the chart type in the prompt (e.g., 'bar chart', 'scatterplot', 'line chart') resulted in narratives comparable to those generated by including analytic tasks. Correspondingly, keeping in mind the simplicity and brevity of specifying the chart type (over analytic tasks), we only included that information in the final prompt template. Fig. 2B shows an example of the prompts generated by DataTales, but the general template for generating data narratives is as follows:

```
Write a narrative based on a [chartType] showing the following data: [chartData] *on the topic "[chartTitle]" *focusing on: [chartAnnotations]
```

where * indicates an optional parameter that is included in the prompt only if it is available in the input chart. chartData is the data array that is bound to the marks and chartAnnotations is a list of data items for selection annotations (e.g., \(\{\textit{Year}:2000,\textit{Country}:\textit{Australia}\}\)) and/or values in the case of axis brush annotations (e.g., \(\{\textit{Year}\ \textit{between}\ [1980,2001]\}\)). Once a narrative is generated, we prompt the LLM again to generate a title:

```
Suggest a title for the following narrative: [narrativeText]
```
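To make this flow concrete, the following is a minimal sketch of how such a prompt could be assembled and sent to the model; the helper names (`build_prompt`, `ask_llm`), the example data, and the exact message structure are our own illustrative assumptions, not DataTales's actual implementation (it assumes the pre-1.0 `openai` Python client that was current for 'gpt-3.5-turbo'):

```python
# Illustrative sketch (not the actual DataTales code): assemble the prompt
# template described above and query the OpenAI chat API.
import json
import openai  # assumes the pre-1.0 openai client and a configured API key

def build_prompt(chart_type, chart_data, chart_title=None, annotations=None):
    """Fill the narrative template; title and annotations are optional."""
    prompt = (f"Write a narrative based on a {chart_type} "
              f"showing the following data: {json.dumps(chart_data)}")
    if chart_title:
        prompt += f' on the topic "{chart_title}"'
    if annotations:
        prompt += f" focusing on: {json.dumps(annotations)}"
    return prompt

def ask_llm(prompt):
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response["choices"][0]["message"]["content"]

narrative = ask_llm(build_prompt(
    "bar chart",
    [{"Country": "Australia", "Year": 2000, "Medals": 58}],  # hypothetical data
    chart_title="Olympic medal winner history",
    annotations=[{"Year": 2000, "Country": "Australia"}],
))
title = ask_llm(f"Suggest a title for the following narrative: {narrative}")
```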
The title and text are sent to the system front-end as a self-contained story. These prompts generated reasonable results for our purposes, although we argue that further experimentation with prompt patterns [38] would be worthwhile.

**Linking the generated text to the input chart.** Once the LLM generates the narrative, DataTales proactively processes the generated story to identify data references. Similar to prior natural language systems for visualization (e.g., [8, 22]), we use a combination of dependency parsing and keyword matching to map phrases in a sentence to attributes and values in the visualized data. DataTales highlights whole sentences containing data references using a dotted underline to emphasize that the sentence talks about a specific set of marks on the chart. To aid reading and comprehension, and incorporating ideas from prior work on interactively linking text and charts [34, 15, 12], as authors hover on these underlined sentences, DataTales highlights relevant portions of the chart (see Figs. 1 and 2D). Besides improving readability, our motivation to include this text\(\rightarrow\)chart linking was also that visually seeing the data being referred to in the text could serve as a quick verification for potential hallucinations or incorrect interpretations by the LLM (e.g., Fig. 3). Authors can then edit the stories themselves, and their edits are shown in a different italicized format.

Figure 2: DataTales workflow overview. Given a chart and an optional set of annotations, the system generates textual narratives that are interactively linked to the chart and can be further edited by authors.

## 4 Evaluation

To assess whether our envisioned concept of using chart interaction for LLM story generation made sense in the context of data-driven articles, we conducted a qualitative user study. We used DataTales as a _design probe_ to expose participants to this concept in the context of an authoring task, and then elicited feedback on the experience. We recruited 11 data professionals (P1-P11) with prior experience in authoring data-driven articles or similar reports. Backgrounds encompassed content writers, dashboard designers, project managers, and consultants, spanning multiple organizations. Participants were recruited via Slack communities on related interest channels. Incidentally, most participants had some prior exposure to LLM-based tools, and five reported using these tools for authoring support at some point (e.g., for brainstorming, starting points, outlines, summaries). Feedback sessions entailed a 20-min data story authoring task on a given chart followed by a semi-structured interview to discuss their experiences with the tool. In lieu of conducting data analysis from scratch, participants were given one of four distinct dataset+chart types available in the tool (including stacked bar chart, line chart, scatterplot, and choropleth), which helped standardize experiences and keep sessions concise. Participants were told to use DataTales in their authoring process however they saw fit, while editing their working draft in a separate document editor for maximum editing flexibility. We also encouraged them to think aloud whenever possible during the authoring task. This study setup provided us with rich qualitative data to assess the potential and limitations of LLMs for this task.
We organize our findings in the form of **takeaways (T1-T13)**, encompassing observations of authoring workflows (Section 4.1), perceived value and affordances of LLM-based article authoring systems like DataTales for data-driven authoring experiences (Section 4.2), and identified limitations plus potential solutions (Section 4.3).

### Authoring Workflows

Lessons and ideas emerging from task observations are as follows.

**(T1) A master draft + multiple stories.** Participants were free to decide a format and framing for their stories, which led to a diverse set of data-driven articles. That said, authoring workflows were fairly consistent across participants: they all generated multiple versions, reused chunks from one or more versions -- occasionally a whole story, but most frequently short paragraphs from various ones -- rearranged them, and then made editorial revisions for style and flow. This suggests the need for a setup that manages multiple story generation outcomes, and that maintaining an integrated master draft which can be easily populated with generated segments would be useful.

**(T2) Expediting error checks.** As expected of LLM output, a number of inaccuracies [1] were spotted, prompting participants to carefully check generated text for errors: _"is this legit?"_ (P3). The text-chart highlights were frequently used for this purpose and several agreed on their usefulness (P3, P8, P9), suggesting text-chart readability aids [15] should be further explored. On that note, some folks appreciated how author changes were explicitly signaled (P2, P5), helping retain context of what was fixed and what needs checking.

**(T3) Synergies between chart and text.** Participants extensively leveraged chart interaction for their story generation: from an average of 4 stories per person, about 3 featured annotations. While the annotated stories did not always feature the depth and framing participants were hoping for (more under **T9, T10**), the generated text consistently matched the selections (and author intent, as per think-aloud feedback), and results were still often usable and repurposed. Some participants also explicitly acknowledged the value of having the chart integrated into their drafting environment (P6, P7). These findings suggest that DataTales's use of chart interaction for text generation shows promise and is worth exploring further.

**(T4) Coupling of annotations and generated stories.** Annotations were frequently used to get more details on selected data features, a usage that was largely intuitive and represented important context for the text. We posit that preserving annotation context for generated snippets on a master draft would be very useful, not only for authors to recall the provenance of snippets but also to potentially reuse them as embedded highlights for readers, e.g., in the context of a dynamic story format such as scrollytelling [34].

**(T5) Potentially time saving.** While participants were not expected to finish their stories within the 20 minutes, 3 of them successfully completed a first draft in the allotted time, suggesting potential efficiency gains in the authoring process. Several participants could also foresee saving time in the long run, e.g., _"would cut out a good 15 to 20 minutes of my work"_ (P6), and getting a head start on the writing, e.g., _"Getting started is sometimes the hardest thing (...). I'll be looking at the data, procrastinating, trying to find correlations and relationships (...). And it does that for me, at least a base level"_ (P8).
### Affordances

Despite the limited nature of the tool as a proof-of-concept design probe, participant reactions to the experience ranged from congenial to enthusiastic. Rationales on how DataTales supported their authoring experience in new and positive ways are compiled below.

**(T6) Insights over data facts.** While data facts are an important part of a data story, the segments most often repurposed and appreciated by participants were those containing level-3 and level-4 statements in Lundgard and Satyanarayan's categorization of chart descriptions [20], which participants referred to as _"the why's"_ (P3, P10, P11). For example, on a dataset about car acceleration vs. horsepower vs. country of origin, this could include things like identifying trends (e.g., _"cars with higher horsepower tend to have better acceleration rates"_), conclusions following findings (e.g., _"The US auto market prioritizes higher horsepower"_), and external context (e.g., _"policymakers should consider regulating emissions for consumers who value speed over efficiency"_). Several added that aggregating this _"human knowledge"_ was one of the most valuable aspects of the experience, complementing their authoring work with new information (P1, P3), alternative framings (P3, P7), and confirmation of current viewpoints (P3, P11).

**(T7) Explanatory support: what to talk about and how.** Getting a first initial draft is challenging, and having a range of full stories available helped provide starting points for overcoming writer's block (P6, P8, P10, P11): _"I didn't know where I wanted to approach it, and then after generating a couple stories, I saw a trend and decided that'd be my focus"_ (P6). New ideas or context present in those stories also provided inspiration for new directions to explore (P5, P7), e.g., the extent that _"the Great Depression"_ affected US gold medal performance in the Summer Olympics (P7); as well as seeing familiar findings or terms portrayed in different ways, with evocative phrasing, e.g., _"powerful muscle cars"_ (P11), and unique orderings of findings (P3). We argue there may be value in not only supporting easy generation and management of different stories, but also allowing for more diversity across different stories (e.g., via a slider to control the model's temperature), even at the cost of more spurious findings.

**(T8) Exploratory support: a different lens on the data.** An unexpected use of DataTales was as a data exploration tool, e.g., to form hypotheses (P3), to gather facts (P7), and to get a high-level summary of the data in natural language form (P2, P6, P9). While our study setup induced some analysis as participants were asked to work with an unknown dataset, several of the dynamics observed applied to in-between stages of analysis and storytelling: e.g., getting ideas for additional datasets and facts to look into (P7, P8), and using annotations to dig deeper into individual data points (P11). Several participants also leveraged DataTales to "test" hypotheses by confirming or denying prior assumptions (P3, P7, P10), and to _"ask its opinion"_ (P7, P11).

Figure 3: Example of an incorrect statement generated by the LLM (contrary to the text, the chart shows that Florida _does not_ have a higher number of people over the age of 80 compared to California). The text–chart linking feature helps verify the statement and identify the erroneous interpretation by dynamically highlighting the two states.
This showcases the intertwined nature of analysis and storytelling, and how authoring tasks can benefit from interactive visualizations integrated into the drafting environment.

### Opportunities

While overall reactions to the tool were net-positive, participants also raised several concerns and suggestions for improvement, informing many compelling directions for future work.

**(T9) More control over overall style.** A prominent pain point was the lack of control over _voice_ (e.g., corporate voice (P5), a business owner's perspective (P3)), _tone_ (e.g., formal vs. personal (P2), make it less "robotic" (P8)), and _format_ (e.g., organize findings from highest to lowest counts (P4)). Generated stories were often found _"too wordy"_ (P5-P7, P10, P11), requiring heavy editing to cut them down. Those with prior LLM exposure suggested bridging this gap by writing or editing the underlying prompts generated by the tool (P5, P6, P10); on the other hand, it was remarked that prompting could be found too intimidating or unfamiliar for other authors (P3), which calls for some form of reasonable middle ground. We envision DataTales could provide predefined fields for authors to describe target audience, voice, and format in natural language, which could then be incorporated into the base template to generate a new story.

**(T10) Co-writing micro-tasks.** Apart from overall style, participants also wanted assistance in generating paragraph-level content for targeted insights (P4, P5, P6), e.g., _"let's focus on January thru March, and talk about the holiday angle"_ (P5) and _"tell me why the US has such an outlier number of cars on the higher horsepower end"_ (P6). While power-users could again benefit from direct prompting here, P5 suggested a more seamless _fill-in-the-blanks_ approach, designating spaces for automatic completion for diegetic prompting [5], i.e., leveraging parts of the writing itself. For data-centered insights, we envision combining chart interaction with natural language input, e.g., by editing the chart title to nudge generation of different insights, drawing trend lines over the chart to indicate what patterns are important to focus on, and adding descriptive annotations to particular data points to retrieve targeted context.

**(T11) More exploratory aids.** Many participants wanted to take their insights a step further by getting recommendations for external related datasets to include in the story (P3-P7, P11), and potential comparisons to be drawn within or across datasets (P2, P4-P7, P11). Some also framed these recommendations as an opportunity to continue learning about the data **(T8)** and helping them formulate _"follow up questions"_ (P3) and _"additional areas of exploration"_ (P5), giving authors more directions to consider for their stories.

**(T12) Summarize stories.** Another consequence of "wordy" stories **(T9)** is that they take _"a lot of time to read through"_ (P5), which is especially onerous when trying to generate stories for exploratory analysis purposes **(T8)**, i.e., natural language overviews of the data. Some participants suggested an option to depict stories in a more concise format (P2, P5, P7, P11), e.g., bullet-point form (P2, P5). Beyond data analysis overviews, these summaries could also allow for quicker inspection of new stories, faster recall of past stories, and support revisions of a master draft via reverse outlining [11, 4].
**(T13) Internal knowledge, external validation.** As stated earlier, a number of inaccuracies surfaced in the generated stories **(T2)**. These include objective errors (e.g., hallucinations, failure to spot obvious patterns, labeling mix-ups), but also subtle mistakes that are hard to spot without expert background: e.g., on a misguided reasoning for Vermont's low unemployment rates, P8 warned _"if I didn't have local (Vermont) knowledge, I'd just have taken this at face value"_. To mitigate this, automatic retrieval of supporting references was suggested, e.g., source citations (P9) and external datasets for triangulation (P8, P9). Some participants also wanted to provide corrective feedback to the model (P3, P6), on the assumption that it would get better _"the more you use it"_ (P6). Another related concern was trying to sort out which statements are derived from the underlying chart data and which were brought in by the LLM as external context (P2, P5, P9). Similar to how manual edits are clearly marked **(T2)**, participants suggested further signaling passages with a different color or formatting to denote source.

## 5 Reflections

Following our takeaways **(T1-T13)**, we conclude with a few thoughts that emerged from discussions that are pertinent to the use of LLMs for authoring and visual storytelling more generally.

First, we consider **bias versus framing.** One of the challenges reported by participants was how difficult it was at times to get generated text that matched the points they wanted to make in a story, i.e., to _"confirm their viewpoints"_. On one hand, _"picking a side"_ is an integral exercise of telling a story. On the other, we also know how prone LLMs are to fabricating evidence in a convincing manner. As such, to honor the overall purpose of data stories to stay true to the data (within the many valid interpretations that may arise), and given the significant exploratory component that the authoring process entailed, it is essential we consider ways to mitigate confirmation bias and the amplification of harmful speech.

A related point is the importance of **bridging data literacy**. It was raised in participant discussions how folks with lower data literacy skills would benefit the most from authoring support (P5, P9), but at the same time, having these skills would be crucial to properly guard against LLM errors (P8). Under the same tenet of mitigating misinformation, we should consider ways to better equip authors with critical thinking skills: from facilitating fact checking and encouraging all statements to be fact-checked, to nudging techniques that encourage reflection, e.g., phrasings that "ask a question" instead of stating "what and why" [6].

And finally, we make a case for the use of **LLMs alongside classic techniques**. While our studies demonstrated how LLMs can add value to the authoring process by easily incorporating context and insights, their unpredictability does offset some of the efficiency they are purported to bring. Many of the solutions suggested to deal with this uncertainty would call for add-on modules to parse the generated text, identify entities, link text entities and chart elements, retrieve external evidence, and so on. While there is ongoing work in applying LLMs to natural language tasks like information extraction [17] and fact-checking [40], we argue that tried and tested heuristics-based algorithms and classic NLP techniques will still play an essential role in the foreseeable future.
2302.09509
Text Classification in the Wild: a Large-scale Long-tailed Name Normalization Dataset
Real-world data usually exhibits a long-tailed distribution, with a few frequent labels and a lot of few-shot labels. The study of institution name normalization is a perfect application case showing this phenomenon. There are many institutions worldwide with enormous variations of their names in the publicly available literature. In this work, we first collect a large-scale institution name normalization dataset LoT-insts, which contains over 25k classes that exhibit a naturally long-tailed distribution. In order to isolate the few-shot and zero-shot learning scenarios from the massive many-shot classes, we construct our test set from four different subsets: many-, medium-, and few-shot sets, as well as a zero-shot open set. We also replicate several important baseline methods on our data, covering a wide range from search-based methods to neural network methods that use the pretrained BERT model. Further, we propose our specially pretrained, BERT-based model that shows better out-of-distribution generalization on few-shot and zero-shot test sets. Compared to other datasets focusing on the long-tailed phenomenon, our dataset has one order of magnitude more training data than the largest existing long-tailed datasets and is naturally long-tailed rather than manually synthesized. We believe it provides an important and different scenario to study this problem. To our best knowledge, this is the first natural language dataset that focuses on long-tailed and open-set classification problems.
Jiexing Qi, Shuhao Li, Zhixin Guo, Yusheng Huang, Chenghu Zhou, Weinan Zhang, Xinbing Wang, Zhouhan Lin
2023-02-19T08:44:21Z
http://arxiv.org/abs/2302.09509v1
# Text Classification in the Wild: a Large-scale Long-tailed Name Normalization Dataset

###### Abstract

Real-world data usually exhibits a long-tailed distribution, with a few frequent labels and a lot of few-shot labels. The study of institution name normalization is a perfect application case showing this phenomenon. There are many institutions worldwide with enormous variations of their names in the publicly available literature. In this work, we first collect a large-scale institution name normalization dataset **LoT-insts**1, which contains over 25k classes that exhibit a naturally long-tailed distribution. In order to isolate the few-shot and zero-shot learning scenarios from the massive many-shot classes, we construct our test set from four different subsets: many-, medium-, and few-shot sets, as well as a zero-shot open set. We also replicate several important baseline methods on our data, covering a wide range from search-based methods to neural network methods that use the pretrained BERT model. Further, we propose our specially pretrained, BERT-based model that shows better out-of-distribution generalization on few-shot and zero-shot test sets. Compared to other datasets focusing on the long-tailed phenomenon, our dataset has one order of magnitude more training data than the largest existing long-tailed datasets and is naturally long-tailed rather than manually synthesized. We believe it provides an important and different scenario to study this problem. To our best knowledge, this is the first natural language dataset that focuses on long-tailed and open-set classification problems.2

Footnote 1: Long-Tailed institution names

Jiexing Qi, Shuhao Li, Zhixin Guo, Yusheng Huang, Chenghu Zhou, Weinan Zhang, Xinbing Wang, Zhouhan Lin, Shanghai Jiao Tong University, Shanghai, China

Text Classification, Long-tail, Few-shot learning

## 1 Introduction

Real-world data often have a long-tailed distribution [1]: for example, the frequency of words in natural language [2], the number of connections social media users have [3], the species richness in an ecosystem [4], etc. Also, real-world data distributions are open-ended: new classes may show up in the actual world. Therefore, applying classification or recognition models to a real-world case is nontrivial from a scientific point of view: scenarios of few-shot and zero-shot learning will be encountered. In recent years, there has been increasing interest in the study of long-tailed data, but mostly within the field of computer vision. Since [1] presented a visual classification dataset by resampling labeled images from ImageNet [5], related research on this specific scenario has been greatly motivated, for example through resampling training sets (Decouple [6], BBN [7]), reweighting loss functions [8, 9, 10], or transfer learning [11, 12]. To our best knowledge, there are no public datasets for long-tailed classification in the field of NLP, thus hindering the development of related techniques for natural language.

\begin{table}
\begin{tabular}{c|c|c|c|c}
\hline \hline
Dataset & ImgNet-LT & Places-LT & MS1M-LT & LoT-insts \\
\hline
Classes & 1,000 & 365 & 74,532 & 25,129 \\
Train & 115,846 & 62,500 & 887,530 & 2,238,148 \\
Valid & 20,000 & 7,300 & - & 54,439 \\
Test & 50,000 & 36,500 & 3,530 & 58,154 \\
Max & 1,280 & 4,980 & 598 & 22,606 \\
Min & 5 & 5 & 1 & 1 \\
Dist. & Rsmpl. & Rsmpl. & Rsmpl. & Natural \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Comparison between our dataset and others. The other three datasets are from [1]. _Rsmpl._ is short for _Resampled_, _LT_ is short for _Long-tailed_, and _ImgNet_ for _ImageNet_.

\begin{table}
\begin{tabular}{c}
\hline \hline
Name Instances for Yale University \\
\hline
Yale Univ, New Haven, CT \\
Yale#N# University \\
Yale Medical School, New Haven, Connecticut4 \\
Yale U., Phys Dept., 217 Prospect St., New Haven, CT 06511. \\
\hline \hline
\end{tabular}
\end{table}
Table 1: An example illustrating some different variations extracted from academic publications that correspond to a same normalized name (Yale University, in this example).

Name normalization for academic institutions is a text
classification task that classifies non-standard institution names into classes consisting of their standard forms. The non-standard institution names are usually extracted by OCR or PDF parsing algorithms from academic publications, and may differ in granularity (department/school/university), abbreviations (MIT/M.I.T.), use of accented characters, typographical errors, etc. (see Table 1). These lexical variations of institution names result in redundancy, incompleteness, and ambiguity [13], which poses serious problems for a bunch of downstream tasks, such as information retrieval or author profiling.

In this work, we construct a large dataset of academic institution names, **LoT-insts**, for the name normalization task from the Microsoft Academic Graph (MAG) [14] and present it as a text classification task, with the extracted affiliation names being examples and their corresponding deduplicated institution names being classes. We present **LoT-insts** in a form that emphasizes its long-tailed nature by isolating zero-shot and few-shot sets from the medium- and many-shot sets in the test data. Different from other similar computer vision datasets such as those sampled from the ImageNet, Places, or MS1M-ArcFace datasets [1], our dataset has far more classes and several orders of magnitude more instances (see Table 2). In addition, our data are naturally long-tail distributed rather than manually sampled after collection, thus reflecting a more authentic distributional nature of real-world data. To our best knowledge, this is the first natural language dataset that focuses on this problem.

Considering that de-duplication of institutions in new classes is of practical importance, we propose a new open-set evaluation task named _open-set class verification (OSV)_, which verifies if two given examples are of the same never-seen class or not. We further reproduce several baseline methods of important previous works on name normalization, including Naive Bayes [15], sCool [16], and CompanyDepot [17]. These reproductions not only provide a first comparative evaluation of the methods on the same dataset but also provide an analysis of their performances on the frequent and non-frequent classes. In addition to reproducing existing models, we also propose a BERT-based model pretrained on this large dataset, which shows better performance across many- and few-shot test sets. We believe these contributions combined could pave the first stone of the way towards long-tailed text classification.

## 2 Related Work

There are generally three categories of methods that aim at tackling open long-tailed classification since it was first formulated in [1] as an image recognition task.
The first is resampling, which reweights the sampling frequency of different categories inversely to their number of samples. For example, [18, 19, 20] are methods built around oversampling the few-shot classes, while [19, 21] propose methods around undersampling the abundant data from dominant classes. Repeating tail samples may lead to overfitting of the few-shot classes [22, 8], while discarding valuable data will inevitably affect the generalization ability of the learned model [7]. The second category is reweighting, which focuses on increasing the loss function contribution of the tail categories. Usually, a large weight is allocated to the training samples of the tail classes in the loss function [23]. However, reweighting cannot deal with real large-scale scenes with long-tailed data, and often leads to optimization difficulties [24]. The third one is transfer learning, which learns general knowledge from the head classes and then transfers the learned knowledge to the tail classes [11, 12]. Note that all of the methods above were proposed in the image recognition setting, while there is a lack of related research in NLP due to the absence of a natural language dataset reflecting this long-tailed distribution phenomenon.

As for the task of name normalization, there are generally three kinds of approaches. The first is rule-based methods, which use a set of expert-defined rules to categorize text into different classes, such as NEMO [25] and name normalization for product items [26], genes [27], diseases [28], persons [29], etc. Second, methods that rely on hand-crafted features with classical machine learning models, such as Naive Bayes [15]. And lastly, methods that use an external knowledge base. This type of method retrieves candidates from a knowledge base and ranks the outcomes with heuristics. Typical systems of this type include sCool [16] and CompanyDepot [17].

More broadly, text classification is a well-studied topic that our task belongs to, albeit mostly with small numbers of classes, such as sentiment classification [30], topic classification [31], and natural language inference [32]. Almost all of these datasets are well-balanced and contain small numbers of classes, which makes them ill-suited to studying the few-shot and zero-shot learning problems. Hierarchical text classification (HTC), on the other hand, usually involves a large number of classes with a tree-structured taxonomy. Datasets in this category include LSHTC [33], BIOASQ [34], and WikiSeeAlsoTitles-350K [35], etc. Various methods have been proposed around these works, focusing on leveraging the structural information between classes, but research on this line is less concerned with the few-shot scenario and does not have a zero-shot setting for out-of-distribution (OOD) test samples. A great variety of deep neural network-based models have been proposed for these tasks, and most recently, the state of the art comes with a pretrained model, such as BERT [36] or ELMo [37].

## 3 Dataset Construction

We collect our dataset from the Microsoft Academic Graph (MAG) [14], which is a heterogeneous graph containing scientific publication records, citation relationships between those publications, as well as authors, institutions, journals, conferences, fields of study, etc. There are mainly three stages in collecting our dataset from MAG. The first stage is data cleaning, where we remove most of the labeling noise from MAG. The second is data filtering, which removes redundant examples.
Finally, we partition the dataset for training and evaluation.

### Data Cleaning

In the data cleaning stage, we extract a large set of mappings from non-standard, original institution names to their normalized ones. The overall procedure in this step is outlined in Figure 1. Two files are utilized in extracting our dataset from MAG. The first is _Affiliations.txt_, which records basic information about each institution, such as its ID, standard institution name, display name, etc. The second file is _PaperAuthorAffiliations.txt_, which records each author of a paper and their institution ID, original institution name (i.e., non-standard institution name), author name, and so on. The non-standard institution names are obtained by Internet crawlers, parsed from paper documents, etc. To extract our data, we first use _PaperAuthorAffiliations.txt_ to create mappings from the original institution names to their institution IDs; bridging on the institution IDs, we then use _Affiliations.txt_ to find their standard institution names, which are treated as normalized institution names. We first deduplicate affiliation IDs by extracting all standard institution names from _Affiliations.txt_ and redirecting IDs with the same standard institution name to a new ID. As for the non-standard institution names in _PaperAuthorAffiliations.txt_, we further preprocess these mappings to remove a bunch of noises such as unnecessary HTML tags, over-length names, etc.

However, not all of the mappings retained are correct. We found that there are a lot of conflicts within these mappings: the same original institution name could appear multiple times, and each time it is mapped to an ID, which is not necessarily always the same. Therefore, we collect all these conflicts and calculate a _confidence score_ for each of the IDs that shows up for the original institution name. The confidence score is calculated as the portion of mapping occurrences this ID receives out of all mapping occurrences that the original institution name has, i.e.,

\[confidence(b,i)=\frac{N_{bi}}{\sum_{j=1}^{K}N_{bj}}\in[0,1]\]

where \(N_{bi}\) stands for the mapping occurrences that point to the \(i\)-th ID from the original institution name \(b\), and the summation is over all the \(K\) IDs that \(b\) pointed to in all of its occurrences. We only retain those \(b\)s that receive a majority vote and discard all ambiguous ones. For more details on the preprocessing steps, please refer to Appendix A.
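As a concrete illustration of this majority-vote rule, the following is a minimal sketch (our own, not the paper's released code) of computing confidence scores over (original name, ID) occurrences and keeping only unambiguous names; the 0.5 threshold stands in for the "majority vote" criterion:

```python
# Illustrative sketch of the confidence-score cleaning step described above.
# `mappings` is assumed to be a list of (original_name, institution_id) pairs,
# one per occurrence in PaperAuthorAffiliations.txt.
from collections import Counter, defaultdict

def clean_mappings(mappings):
    counts = defaultdict(Counter)  # original name b -> Counter over IDs
    for name, inst_id in mappings:
        counts[name][inst_id] += 1

    resolved = {}
    for name, id_counts in counts.items():
        total = sum(id_counts.values())          # sum_j N_bj
        best_id, best_n = id_counts.most_common(1)[0]
        # confidence(b, i) = N_bi / sum_j N_bj; keep only majority votes
        if best_n / total > 0.5:
            resolved[name] = best_id
    return resolved
```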
### Data Filtering

In the second, data filtering stage, we focus on filtering out the redundant examples in the dataset, which effectively removes bulks of easy examples dominating the dataset. An illustration of these redundant examples is shown in Figure 2. An institute called "Rowan college at Burlington County" has 4 very close variations, all of which are the same name with minor appended abbreviations ("NJ", "NJ, USA" or "New Jersey"), or some defects related to the OCR process. (The "1" is from the footnote numbers in the original PDF file.)

Redundant examples usually share a major part of their string body with others, with minor differences at the beginning or end of the string. These are easy examples that can be classified correctly with simple string matching methods. However, they constitute a big portion (over 90%) of the dataset, which can push the classification accuracy spuriously high, submerging the more interesting, harder examples that need more sophisticated classifiers. Figure 2 shows an example of our method of filtering out the redundancy. First, we treat every original name as a node. Second, we connect two nodes if one node is a substring of the other, thus forming an undirected graph. We then find all the connected components in the graph. For each connected component, only one node is randomly chosen to be preserved, while all other nodes within the same connected component are discarded.

Figure 1: Process of data cleaning during data collection. We first extract original institution names and institution IDs from the PaperAuthorAffiliations.txt file (leftmost). With preprocessing, we clean the data by removing noise from the original files and then deduplicate original names. Thus we get a record noting the occurrence of each mapping (middle-left). We then detect and remove wrong mappings and remove ambiguous mappings by using a voting method (middle-right). Finally, we clean the institution IDs by removing duplicated institutions. Thus we acquire high-quality mappings from clean non-normalized names to the clean IDs of their normalized names.

Figure 2: Example of data filtering to eliminate the redundant examples. (a) shows some original institution names that correspond to Rowan University. In (b), for each example, we detect if it is a substring of another within the examples of the same class. If so, we connect the two examples with an edge. This process forms an undirected graph with those examples being its nodes. For each connected component in the undirected graph, we randomly retain only one example (darker colored), and all others are discarded (shallower colored).
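The substring-graph filtering in Figure 2 can be sketched as follows (our own illustration; the all-pairs substring test is quadratic, which is tolerable because it runs within one class's examples rather than over the whole corpus):

```python
# Illustrative sketch of the redundancy filtering: build a graph over one
# class's original names, link substring pairs, and keep one name per
# connected component.
import random
import networkx as nx

def filter_redundant(names):
    graph = nx.Graph()
    graph.add_nodes_from(names)
    for a in names:
        for b in names:
            if a != b and a in b:  # a is a substring of b
                graph.add_edge(a, b)
    return [random.choice(sorted(component))
            for component in nx.connected_components(graph)]

names = ["Rowan college at Burlington County",
         "Rowan college at Burlington County NJ",
         "Rowan college at Burlington County NJ, USA",
         "Rowan University"]
print(filter_redundant(names))  # one representative per component
```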
### Partitioning the Dataset

We partitioned the dataset into different subsets for training and evaluation. The _open test set_ was collected by randomly sampling 2% of the _categories_; thus the model will not see any examples from these categories during training. For the _close test set_ and _valid set_, we randomly sample 2% of the examples from the remaining data for each of the sets. To better handle few-shot categories, we conduct extra steps to ensure that there is at least one example in the training set for each category in the test set, and that the test set covers as many categories as possible. Please refer to Appendix B for detailed steps. We further split the valid set and the two test sets into many-, medium-, and few-shot subsets by setting the thresholds at 5 and 20 occurrences in the entire dataset. Since in the open test set all examples are zero-shot examples for the training data, we name its subsets Frequent, Medium, and Rare instead. An overall view of the whole partitioning results is shown in Table 3.

\begin{table}
\begin{tabular}{l l|r r}
\hline \hline
 & & class & instance \\
\hline
\multirow{4}{*}{Train set} & Many-shot & 6,693 & 2,179,840 \\
 & Medium-shot & 5,446 & 44,215 \\
 & Few-shot & 12,990 & 14,093 \\
 & Overall & 25,129 & 2,238,148 \\
\hline
\multirow{4}{*}{Valid set} & Many-shot & 6,693 & 46,212 \\
 & Medium-shot & 5,446 & 5,553 \\
 & Few-shot & 2,673 & 2,674 \\
 & Overall & 14,812 & 54,439 \\
\hline
\multirow{4}{*}{Close test set} & Many-shot & 6,693 & 47,153 \\
 & Medium-shot & 5,446 & 5,551 \\
 & Few-shot & 5,445 & 5,450 \\
 & Overall & 17,584 & 58,154 \\
\hline
\multirow{4}{*}{Open test set (zero-shot)} & Frequent & 132 & 42,985 \\
 & Medium & 102 & 1,043 \\
 & Rare & 278 & 492 \\
 & Overall & 512 & 44,520 \\
\hline \hline
\end{tabular}
\end{table}
Table 3: Statistics of our dataset. Many-, medium-, and few-shot subsets are split according to global frequencies of 5 and 20. The open set subsets are also split in this way, except that the names are different since they are all zero-shot examples.

Figure 3 shows the distributional characteristics of our data. For comparison, the same figures are plotted for the ImageNet-LT data [1], which is sampled from a Pareto distribution. We found that our dataset is distributionally different from the Pareto distribution used in the manual sampling process of ImageNet-LT. Since our data is not collected through sampling, we believe it reflects a better real-world scenario.

Figure 3: Comparison of the long-tailed distribution between our dataset (c and d) and ImageNet-LT (a and b). The left figures are scatter plots of the frequency of a class, i.e., counts of examples within the same class (horizontal axis), w.r.t. the number of classes with this frequency (vertical axis). To reduce scatter, we merge the data along the horizontal axis by splitting it into 500 linearly spanned bins and then summing up the class counts within each bin. For the ImageNet-LT data, 50 bins are used since it has much fewer classes. The right figures are class-frequency plots, with classes on the horizontal axis sorted by their frequency. All axes are on a logarithmic scale.

## 4 Tasks and Evaluation Metrics

In this section, we introduce three tasks: closed-set classification, open-set classification, and open-set verification. The corresponding evaluation metrics for these three tasks are also provided.

### Closed-Set Classification (CSC)

This task is mostly the same as the canonical classification task. As the test set only contains classes available in the training set, the model assumes that all test samples are from one of the known classes. We use 2 standard evaluation metrics, i.e., the accuracy (a.k.a. micro F\({}_{1}\)) and the macro F\({}_{1}\) measure.

### Open-Set Classification (OSC)

In this task, the model is asked to tell if a given sample belongs to an unseen class or not. To comply with all the previous methods in the literature, we conduct this task on top of the model trained in the CSC task setting. We set a threshold for the largest probability over all known classes and classify the example as an unseen class if the largest probability over all known classes is smaller than the threshold. Since there is a sensitivity-specificity trade-off in determining the threshold, we set the evaluation metric as comparing the ROC (receiver operating characteristic) curves between different methods.
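A sketch of this rejection rule (our own illustration): given the closed-set model's softmax output, an example is flagged as unseen when its maximum class probability falls below a threshold, and sweeping the threshold traces the ROC curve.

```python
# Illustrative sketch of max-probability open-set rejection and ROC evaluation.
# `probs` holds softmax outputs of the closed-set model; `is_unseen` marks
# whether each example truly comes from an unseen (open-set) class.
import numpy as np
from sklearn.metrics import roc_curve, auc

probs = np.array([[0.90, 0.05, 0.05],   # confident -> likely a known class
                  [0.40, 0.35, 0.25]])  # flat -> likely an unseen class
is_unseen = np.array([0, 1])

unseen_score = 1.0 - probs.max(axis=1)       # low max probability => unseen
fpr, tpr, thresholds = roc_curve(is_unseen, unseen_score)
print(auc(fpr, tpr))

# A single operating point at threshold t on the max probability:
t = 0.5
pred_unseen = probs.max(axis=1) < t
```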
### Open-Set Verification (OSV)

In addition to the open-set classification task, we propose a new open-set verification task, which provides a more fine-grained evaluation of the model's performance on unseen data. In this task, we first sample a pair of examples in the open set, and then the model is asked to tell if the two provided samples belong to the same unseen class or not. It reflects whether the model can transfer its learned knowledge to telling apart examples from unseen classes. To achieve this, we manually sample a fixed test set of name pairs from the open test set, controlling the number of positive pairs (belonging to the same class) and negative pairs (belonging to different classes) to be balanced. We can compare different models on the same testbed by applying the same fixed test set to all models. The prediction accuracy is used as the evaluation metric.

## 5 Methods

In this section, we reproduce five baseline methods from the literature and then describe our proposed method.

### Baselines

The baseline methods used in our work include two basic machine learning models, which are the Naive Bayes [15] and FastText [38] classifiers; two search-based methods, which are sCool [16] and CompanyDepot V1 (CDv1) [17]; and one deep learning model, which is BERT [36]. All of these methods are used in the CSC task, while only Naive Bayes and BERT are used in the OSC and OSV tasks due to the nature of these methods. C.f. Appendix C for more descriptions of these models.

### Our Method

In this paper, we introduce a special character-level BERT-based model, which is first pretrained from scratch on the institution name corpus and then fine-tuned as a sequence-level classifier. An overview of the model architecture is shown in Figure 5. Since institution names are usually shorter than natural sentences and contain many out-of-vocabulary (OOV) words, we modify the original BERT model to the character level. The input embedding is modified to consist of 3 parts, i.e., character embedding, character position embedding, and word position embedding. We also remove the segment embedding of the original BERT because we do not input a pair of sentences. To still give indications of word boundaries, we add the word position embedding to the input embedding. An overview of the composition of the input embedding is shown in Figure 4.

In the pretraining stage, we only use the masked-language modeling (MLM) task, except that masks are all at the character level. In order to use as much data as possible during pretraining, the model is trained on all institution name records before the data cleaning step. In the fine-tuning stage, we choose sequence-level classification as the downstream task. A softmax layer is added after the output of the [CLS] token to predict the corresponding class.

We also use a resampling strategy in our method, which adjusts the sampling probability for each class according to its frequency. The probability \(p_{j}\) of sampling a data point from class \(j\) is given by:

\[p_{j}=\frac{n_{j}^{q}}{\sum_{i=1}^{C}n_{i}^{q}} \tag{1}\]

where \(n_{j}\) denotes the number of training examples for class \(j\). We choose \(q<1\) in order to avoid medium- and few-shot categories being submerged by many-shot ones.
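A small sketch of the class-balanced sampler in Eq. (1), assuming class counts are known (our own illustration; the counts below are made up to show the flattening effect of \(q<1\)):

```python
# Illustrative sketch of the resampling probabilities in Eq. (1):
# p_j = n_j^q / sum_i n_i^q, with q < 1 flattening the class distribution.
import numpy as np

def class_sampling_probs(class_counts, q=0.5):
    weights = np.asarray(class_counts, dtype=float) ** q
    return weights / weights.sum()

counts = [22606, 100, 1]  # a many-, medium-, and few-shot class
print(class_sampling_probs(counts, q=1.0))  # plain frequency sampling
print(class_sampling_probs(counts, q=0.5))  # tail classes sampled far more often
```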
Specifically for the OSV task, we introduce an additional loss term during fine-tuning based on contrastive learning [39], which pushes two feature vectors close together if they belong to the same class and pushes them away from each other otherwise. We follow the convention of using the output vector of [CLS] as the feature vector of the whole sequence, i.e., the institution name. Formally, the loss is defined as

\[l(i,j)=y_{ij}d_{ij}^{2}+(1-y_{ij})[\alpha-d_{ij}]_{+}^{2}\, \tag{2}\]

where \(d_{ij}\) denotes the Euclidean distance between the feature vectors of samples \(i\) and \(j\), and \(y_{ij}\) indicates whether samples \(i\) and \(j\) belong to the same class or not. \(\alpha\) is a tunable hyper-parameter. Fine-tuning with the contrastive loss yields a feature space with a meaningful distance metric, where the distance indicates relatedness. Since the scale of the training set is very large, we sample a part of the training data as anchors. For the OSV task, the distance can be used to determine if two given names belong to the same unseen class or not.
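Eq. (2) is the classic margin-based contrastive loss; a minimal PyTorch sketch of it over batched [CLS] feature pairs (our own illustration, not the paper's code):

```python
# Illustrative sketch of the pairwise contrastive loss in Eq. (2):
# l(i,j) = y * d^2 + (1 - y) * max(alpha - d, 0)^2
import torch

def contrastive_loss(feat_i, feat_j, same_class, alpha=1.0):
    """feat_i, feat_j: (batch, dim) [CLS] features; same_class: (batch,) 0/1."""
    d = torch.norm(feat_i - feat_j, dim=-1)  # Euclidean distance d_ij
    pos = same_class * d.pow(2)              # pull same-class pairs together
    neg = (1 - same_class) * torch.clamp(alpha - d, min=0).pow(2)  # push apart
    return (pos + neg).mean()

f1, f2 = torch.randn(8, 768), torch.randn(8, 768)
y = torch.randint(0, 2, (8,)).float()
print(contrastive_loss(f1, f2, y))
```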
## 6 Experiments

The experiment section consists of four parts: evaluations of the baseline models and our proposed model on the three tasks that come with our dataset, as well as some ablation studies of our proposed model on the dataset.

### Closed-Set Classification

We report the performance of all aforementioned methods on our close test set (Table 4). Apart from the observation that almost all baseline methods perform far worse on the few-shot test set than on the many-shot test set, our BERT-based method significantly outperforms all other baseline methods of various types, which indicates the effectiveness of pretraining on this task. It is also worth noting that our character-level pretrained BERT model further outperforms the original fine-tuned BERT on the medium- and few-shot subsets. However, our method's performance is inferior to BERT on the many-shot subset, indicating conflicts and trade-offs between the performances on many-shot and few-shot subsets, thus leaving space for future research.

\begin{table}
\begin{tabular}{c c||c c c c|c c c|c}
\hline \hline
\multicolumn{2}{c||}{Metrics \(\backslash\) Methods} & Naive Bayes & sCool & CD-V1 & FastText & BERT & BERT+RS & BERT+RS+FP & Ours \\
\hline
\multirow{2}{*}{Overall} & Accuracy & 72.20 & 76.72 & 79.97 & 74.93 & 83.30 & 83.40 & **84.00** & 83.73 \\
 & Macro-\(F_{1}\) & 50.20 & 52.41 & 59.64 & 44.38 & 62.79 & 63.98 & 65.14 & **65.90** \\
\hline
\multirow{2}{*}{Many-} & Accuracy & 78.74 & 84.41 & 86.32 & 85.08 & **90.01** & 89.67 & **90.01** & 89.12 \\
 & Macro-\(F_{1}\) & 69.80 & 74.09 & 76.88 & 69.84 & 82.09 & 82.28 & **82.43** & 81.66 \\
\hline
\multirow{2}{*}{Medium-} & Accuracy & 56.58 & 56.53 & 62.98 & 49.75 & 67.80 & 69.17 & 69.93 & **71.68** \\
 & Macro-\(F_{1}\) & 55.82 & 55.97 & 62.56 & 38.56 & 67.55 & 68.82 & 69.94 & **71.05** \\
\hline
\multirow{2}{*}{Few-} & Accuracy & 31.52 & 30.78 & 42.29 & 12.99 & 40.99 & 43.66 & 46.26 & **49.36** \\
 & Macro-\(F_{1}\) & 31.16 & 30.52 & 42.03 & 8.69 & 40.75 & 43.33 & 45.99 & **49.06** \\
\hline \hline
\end{tabular}
\end{table}
Table 4: The performance of different methods on the CSC task. "RS" is short for "resampling". "FP" is short for "further pretraining". "Many-" is short for "Many-shot", as are "Medium-" and "Few-".

Figure 4: Overview of the input embedding layer. All three embedding components are learnable parameters and are element-wise added together. Word position embeddings are aligned according to the word boundaries. Apart from the character vocabulary, there are 4 extra special marks, [CLS], [EOS], [PAD], and [MASK], which are used in the input embedding layer. When constructing the character embedding, [CLS] and [EOS] are used at the beginning and end of the sequence, respectively. [PAD] is used for padding in mini-batch training, and [MASK] for pretraining.

Figure 5: Overview of our model architecture.

### Open-Set Classification

We report all methods' performance except for the two retrieval methods that are not suited for this task. Using a mixed dataset which consists of the whole open set and an equal number of samples drawn from the close set, we plot ROC curves for these methods on each of the subsets (Figure 6). From Figure 6, we can tell that our proposed model has a very similar performance to the original BERT method on the overall test set and the many-shot subset. However, our proposed model achieves significantly better performance on the medium-shot and few-shot subsets than other approaches.

Figure 6: The performance on the open set for three methods. (a) shows the overall performance, while (b)(c)(d) show the performance on the frequent, medium, and rare subsets.

### Open-Set Verification

Except for the two retrieval methods that are not suited for the verification task, we report the other two baselines and our method for this task. For the model fine-tuned with the contrastive loss, we simply use the L2 distance between the feature vectors of the samples in a pair as the distance measure. For other models, we first calculate the output distribution over all known classes for each input in the same pair and then calculate the Jensen-Shannon divergence (JSD) between them as their distance measure. Table 5 shows the accuracy of the different methods. One can observe that training with the contrastive loss significantly increases the model's ability to tell apart different samples, even if they are from unseen classes.

\begin{table}
\begin{tabular}{c c}
\hline \hline
Methods & Accuracy \\
\hline
Naive Bayes & 69.12 \\
BERT & 78.88 \\
Our Method & **79.79** \\
\hline \hline
\end{tabular}
\end{table}
Table 5: Test accuracy on the OSV task.

### Ablation Study

In this section, we describe two more experiments we did as an ablation study to reflect the importance of several components in our pretrained character-level model.

**Character-level vs. Word-level Pretraining.** Although our model significantly surpasses the baseline BERT model in the aforementioned experiments, one may argue that it may be the resampling strategy in our model that made this difference. We therefore train another BERT model with resampling during fine-tuning (marked as BERT+RS in Table 4). All other settings are set to be the same as our model, making the only difference that our model is a character-level model, while BERT is at the subword level. It can be seen that with resampling, the performance on the few-shot subset improved, suggesting the resampling strategy's effectiveness. However, the BERT+RS model is still inferior to our model, showing the necessity of using a character-level model.

**Pretraining from Scratch vs. Further Pretraining.** Considering that BERT is pretrained on a general corpus, while our model is pretrained solely on the proposed dataset, we add another stage of training to the BERT baseline, which is further pretraining the original BERT model on our dataset before fine-tuning (marked as BERT+RS+FP in Table 4). We can see that BERT+RS+FP improves over BERT+RS by a large margin, suggesting that our dataset does differ substantially from the general corpus used for BERT pretraining.
Compared to our model, which is pretrained from scratch at the character level on our dataset, the further-pretrained model outperforms ours on the many-shot subset while still lagging by a significant gap on the medium- and few-shot subsets. The above two experiments show the necessity of using a character-level model, and also suggest the difference between our dataset and a general natural language corpus. We conjecture that this is due to the enormous number of place names and surnames in various languages across the world, resulting in a vocabulary that the subword vocabulary of a general corpus cannot meaningfully cover. The character-level model can also better handle other challenges in our dataset, such as the character-level typos introduced during PDF parsing or OCR procedures.

## 7 Conclusion

In this paper, we present a large-scale institution name normalization dataset, which exhibits a long-tailed distribution over more than 25k classes. We reproduce different types of publicly available methods for this task, providing a fair comparison between different methods. Also, a new BERT-based model, as well as a contrastive loss for this task, is proposed, which outperforms previous methods, setting a strong baseline for the dataset. Compared to other datasets focused on the long-tailed phenomenon, our dataset has one order of magnitude more training data than the largest existing long-tailed datasets and is naturally long-tailed rather than manually resampled. We believe it provides an important and different scenario to study this problem. We hope our study can pave the way toward long-tailed text classification.

## Acknowledgement

This work was sponsored by the National Natural Science Foundation of China (NSFC) grant (No. 62106143), and Shanghai Pujiang Program (No. 21PJ1405700).
2303.06513
Detection of DDoS Attacks in Software Defined Networking Using Machine Learning Models
The concept of Software Defined Networking (SDN) represents a modern approach to networking that separates the control plane from the data plane through network abstraction, resulting in a flexible, programmable and dynamic architecture compared to traditional networks. The separation of control and data planes has led to a high degree of network resilience, but has also given rise to new security risks, including the threat of distributed denial-of-service (DDoS) attacks, which pose a new challenge in the SDN environment. In this paper, the effectiveness of using machine learning algorithms to detect distributed denial-of-service (DDoS) attacks in software-defined networking (SDN) environments is investigated. Four algorithms, including Random Forest, Decision Tree, Support Vector Machine, and XGBoost, were tested on the CICDDoS2019 dataset, with the timestamp feature dropped among others. Performance was assessed by measures of accuracy, precision, recall, and F1 score, with the Random Forest algorithm having the highest accuracy, at 68.9%. The results indicate that ML-based detection is a more accurate and effective method for identifying DDoS attacks in SDN, despite the computational requirements of non-parametric algorithms.
Ahmad Hamarshe, Huthaifa I. Ashqar, Mohammad Hamarsheh
2023-03-11T22:56:36Z
http://arxiv.org/abs/2303.06513v1
# Detection of DDoS Attacks in Software Defined Networking Using Machine Learning Models

###### Abstract

The concept of Software Defined Networking (SDN) represents a modern approach to networking that separates the control plane from the data plane through network abstraction, resulting in a flexible, programmable and dynamic architecture compared to traditional networks. The separation of control and data planes has led to a high degree of network resilience, but has also given rise to new security risks, including the threat of distributed denial-of-service (DDoS) attacks, which pose a new challenge in the SDN environment. In this paper, the effectiveness of using machine learning algorithms to detect distributed denial-of-service (DDoS) attacks in software-defined networking (SDN) environments is investigated. Four algorithms, including Random Forest, Decision Tree, Support Vector Machine, and XGBoost, were tested on the CICDDoS2019 dataset, with the timestamp feature dropped among others. Performance was assessed by measures of accuracy, precision, recall, and F1 score, with the Random Forest algorithm having the highest accuracy, at 68.9%. The results indicate that ML-based detection is a more accurate and effective method for identifying DDoS attacks in SDN, despite the computational requirements of non-parametric algorithms.

DDoS attacks, Machine learning algorithms, Software Defined Networks, Flooding, Vulnerabilities

## I Introduction

Distributed Denial of Service (DDoS) attacks are a common and costly form of cyber-attack that aim to disrupt the availability of a network or service by overwhelming it with traffic from multiple sources [1]. DDoS attacks can be launched using various tactics and techniques, such as flooding the network with packets or requests, or exploiting vulnerabilities in the network protocols or devices [2]. DDoS attacks can cause significant damage and disruption to businesses, organizations, and individuals, and can affect the reputation and trust of the targeted entities [3]. Software-Defined Networking (SDN) is a promising paradigm that enables the centralized control and management of network resources and has the potential to enhance the security and resilience of networks against DDoS attacks [3]. In SDN, the control plane and data plane are separated, and the control plane is implemented in software, which enables the dynamic and flexible configuration and management of the network [4]. However, SDN itself may be vulnerable to DDoS attacks, and there is a need for effective and efficient methods to detect and mitigate DDoS attacks in SDN [5].

To mitigate DDoS attacks, various detection and defense mechanisms have been proposed, including traditional intrusion detection systems (IDS) and firewalls, as well as more advanced techniques such as machine learning (ML) [6]. Machine learning algorithms are a powerful tool for detecting and classifying network traffic and have been widely applied to DDoS attack detection in SDN [6]. The efficiency of machine learning algorithms for DDoS attack detection in SDN depends on the specific algorithms, datasets, and evaluation methods that are used [7]. Different machine learning algorithms have different strengths and limitations and may perform differently depending on the characteristics and quality of the data [8]. In this paper, four machine learning algorithms (Random Forest, SVM, Decision Tree, and XGBoost) will be applied due to their ability to learn from and classify large amounts of data.
However, the relative effectiveness of these algorithms has not yet been comprehensively compared in the context of DDoS attack detection in SDN. In addition, most of the previous studies did not rule out some features that are not feasible when applied to live network traffic. To address this gap in the literature, the current study aims to compare the performance of machine learning algorithms for detecting DDoS attacks before they occur in SDN, and to consider the selection of appropriate features so as to obtain a realistic accuracy estimate for each algorithm. This comparison will be made using a dataset of simulated DDoS attacks and normal network traffic and will take into account a range of performance metrics including accuracy, precision, recall and F1-score. ## II Background and Related Works ### A. DDoS Attacks and their Detection In this section, a brief overview of Distributed Denial of Service (DDoS) attacks is presented, along with a discussion of some of the existing and related work on DDoS attack detection. DDoS attacks are a form of cyber-attack that aims to reduce service availability by overwhelming a network's resources, making it difficult or impossible for authorized users to access their services. These attacks can be classified into two types: reflection-based and exploitation-based. Reflection-based attacks use an authorized third-party component to conceal the attacker's identity and overload the victim with response packets. Exploitation-based attacks, on the other hand, directly exploit weaknesses in the victim's network protocols or systems rather than relying on a third-party component [9]. In terms of detection methods, DDoS detection systems can be categorized into three types: Signature-Based (SB), Anomaly-Based (AB), and Entropy-Based (EB) approaches. SB frameworks rely on matching traffic attributes against a signature database of recognized malicious threats, making them easy to implement but less effective against evolving threats. AB systems use data mining and statistical approaches to track and compare network traffic against a defined baseline but may generate false positive alerts if the baseline is not properly set up. EB systems, on the other hand, rely on changes in entropy values to detect anomalous incidents, making them more sensitive than the other two types. However, in high-speed networks, an effective algorithm is required to reduce processing time and memory consumption. ### B. Related Work In recent years, the use of machine learning (ML) techniques to detect and mitigate distributed denial-of-service (DDoS) attacks has gained significant attention in the field of computer science. A number of studies have proposed various machine learning-based approaches to identifying DDoS attacks, utilizing different datasets and performance metrics. One of the most prominent studies in this area [10] presented a study on the effectiveness of using 25 time-based features to detect and classify 12 types of DDoS attacks using machine learning classifiers. The study found that the majority of models achieved an accuracy of around 99% in detecting DDoS attacks and an accuracy of around 70% in classifying specific DDoS attack types. Furthermore, the study found that using a smaller subset of time-based features, which reduces training time, still achieved high accuracy in detecting DDoS attacks. Another notable study [11] employed a machine-learning algorithm to detect DDoS attacks in an SDN environment. The authors reported similar results, with accuracy rates above 99%, and demonstrated that the Decision Tree (DT) algorithm had the highest accuracy rate of 99.90%. 
[12] proposed a new scheme for DDoS data collection in SDN by building a largely static data model using port statistics. The study found that the Support Vector Machine (SVM) with linear and radial basis function kernels was the most effective at identifying a DDoS attack, with an accuracy rate of close to 100%. [13] compared the performance of different ML algorithms for detecting DDoS attacks and found that the K-Nearest Neighbor (KNN) and Decision Tree (DT) algorithms had high accuracy in detecting DDoS attacks. However, a comparative analysis showed that DT had a higher accuracy of 99.91% compared to KNN's 98.94% and also had a better running time. [14] suggested using ML-based approaches to detect DDoS attacks in SDN and utilized the KDD99 dataset to train and test the model. The study found that the decision tree algorithm and support vector machine (SVM) provided better accuracy and detection rates of around 85%. [15] proposed using an ensemble algorithm to detect DDoS attacks in an SDN environment. The study used a dataset collected in an SDN environment and applied K-means++ and Random Forest to obtain high detection accuracy and efficiency. The proposed method achieved 100% accuracy, 100% precision, 100% recall, and 100% F1-score with a processing time of 1.5 s. [16] proposed the use of ML-based methods to detect DDoS attacks in wireless networks. The study used the decision tree and the KDDCup'99 dataset for classification, and results showed that the J48 algorithm had a high accuracy of 99.94% in detecting DDoS attacks. However, in [17] the authors developed an architecture for identifying and mitigating LR-DDoS attacks in SDN, which includes an IDS trained using machine learning models and achieves a high detection rate (95%). The architecture was tested in a simulated environment designed to closely mimic real-world production networks. The IDS was able to successfully mitigate all detected attacks. Also, in [18], the authors proposed a DNN-based IDS for real-time detection of DDoS attacks in SDN, which was efficient and achieved a high detection accuracy of 97.59%. The IDS was designed to counter the use of sophisticated techniques by attackers to launch DDoS attacks on vulnerable SDN networks. While these studies have made great strides in using ML algorithms for DDoS detection in SDN, there is still room for improvement. Many previous studies focused on a limited set of features, some of which are not usable when the model is applied to live network traffic, considered only a small number of machine learning algorithms, and did not thoroughly compare the performance of Random Forest, SVM, Decision Tree, and XGBoost against one another on the CIC-DDoS2019 dataset. To address these limitations, this paper aims to build DDoS detection classifiers for SDN using these ML algorithms, restricting attention to features that remain effective in real time, and to comprehensively evaluate the performance of these algorithms in terms of accuracy and efficiency. ## III Dataset and ML Model ### A. Dataset The CIC-DDoS2019 dataset is a comprehensive dataset that was created by the Canadian Institute for Cybersecurity (CIC) to support research on detecting and mitigating Distributed Denial of Service (DDoS) attacks. The dataset contains a wide variety of DDoS attack types, including TCP, UDP, and HTTP floods, as well as botnet-based attacks. 
The dataset also includes both benign and malicious traffic, making it ideal for training and evaluating machine learning-based DDoS detection systems. Additionally, the dataset is large and diverse, with over 2 million flow records and over 3 million packets, providing a significant amount of data for research. The dataset includes detailed information about the attack parameters, such as the type of attack, the source and destination IP addresses, and the packet size. This information can be used to gain a deeper understanding of the characteristics of different DDoS attack types and to develop more effective detection and mitigation strategies [9]. ### B. Machine Learning Model 1. Support Vector Machine (SVM) is a machine learning algorithm designed for use in both classification and regression problems. Its primary objective is to find a hyperplane in an n-dimensional feature space that accurately separates the data into distinct classes. The learning process of SVM involves two phases. Firstly, the input data is mapped into an n-dimensional space where each feature is treated as a support vector. Secondly, the best hyperplane for class separation is determined through optimization. To handle complex non-linear relationships, SVM makes use of the "Kernel trick," which simplifies the computational process [19]. 2. Random Forest (RF) is a type of ensemble learning algorithm in supervised machine learning. It uses a large number of decision trees to produce stable and precise predictions. RF creates a "forest" of decision trees, typically trained using the "bagging" method. The idea behind bagging is to combine multiple learning models to enhance the overall outcome [20]. 3. Decision Tree (DT) is a straightforward approach in machine learning that's used for supervised learning problems. The algorithm builds a tree-like structure to map out the relationships between different inputs and the outputs they lead to. The input data is separated into branches based on its values, and the leaf nodes of the tree indicate the final prediction for a class or numerical value. One of the great things about Decision Trees is that they're easy to understand and interpret, which makes them a valuable tool for figuring out which inputs are important and how they influence the final predictions [21]. 4. XGBoost is an advanced machine learning technique that's used in supervised learning. It's based on a concept called Gradient Boosting, which means it takes multiple decision trees and combines their predictions to produce even more accurate results. XGBoost is a powerful tool that can handle large amounts of data and complex relationships between inputs and outputs with ease. It's become very popular in data science competitions and is widely used in many real-world applications [22]. ## IV Proposed Approach Dealing with Distributed Denial of Service (DDoS) attacks is a top priority when it comes to software-defined networking (SDN) due to its centralized controller architecture. The constant evolution of these attacks requires updated and innovative systems to respond effectively, and this is where the evaluation of machine learning (ML) methods for DDoS detection comes into play. By training the detection system to learn traffic patterns from new information, ML offers a more accurate and efficient solution compared to other detection methods. 
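To make the four classifiers described above concrete, the following minimal sketch shows how they can be instantiated and trained on a common feature matrix with scikit-learn and the xgboost package. It is an illustration added for this text, not code from the original study; the hyperparameters shown are generic defaults rather than the settings used by the authors.

```python
# Minimal sketch: the four classifiers compared in this study.
# Hyperparameters are illustrative defaults, not the authors' settings.
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from xgboost import XGBClassifier

models = {
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "SVM": SVC(kernel="rbf", random_state=0),
    "XGBoost": XGBClassifier(n_estimators=100, random_state=0),
}


def fit_all(models, X_train, y_train):
    """Fit every classifier on the same training data and return them."""
    for name, model in models.items():
        model.fit(X_train, y_train)
    return models
```

Note that `XGBClassifier` expects numerically encoded class labels, so string labels such as those in CICDDoS2019 would first be passed through, for example, scikit-learn's `LabelEncoder`.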
There are three main categories for detecting DDoS attacks, based on the detection metric and mechanism used: Information-theory based detection, ML-based detection, and ANN-based detection. For this study, we opted for ML-based detection due to its ease of implementation and relatively high degree of precision compared to Information-theory based models. The standard CICDDoS2019 dataset was used to train the ML models, and the network data was fed to the trained model to predict whether the data was anomalous or benign. This concept is depicted in Figure 1. Figure 1: Proposed Methodology. ### A. Machine Learning Approach Multiple machine learning (ML) techniques are employed to detect DDoS attacks. Four classification algorithms, including the Random Forest (RF) classifier, Decision Tree (DT), Support Vector Machine (SVM), and XGBoost (GBDT), are selected based on the criterion that they are non-parametric algorithms. Non-parametric algorithms are a powerful tool in the world of machine learning, allowing us to capture the nuances of complex relationships between input and output variables [23, 24]. These algorithms are known for their flexibility and ability to handle a wide range of data types. However, this flexibility doesn't come without its challenges. When dealing with large datasets, non-parametric algorithms can be quite demanding computationally. To help manage this, we often employ techniques such as pruning or adding a penalty term to the objective function. Despite these efforts, non-parametric algorithms are still slower than their parametric counterparts, like linear regression, which can make them less suitable for real-time, large-scale applications. Despite these limitations, non-parametric algorithms remain a valuable tool in our machine learning toolbox. By offering the ability to model complex relationships, these algorithms have the potential to provide deep insights into a variety of problems. With ongoing efforts to optimize computational efficiency, we can only expect non-parametric algorithms to continue making a big impact in the world of machine learning. ### B. CIC-DDoS2019 Dataset for Training Model The CICDDoS2019 dataset [20] was used in this research to train and evaluate the proposed model. This dataset contains both benign and up-to-date common DDoS attacks, providing a true-to-life representation of network traffic. The data is analyzed using pandas (a Python library), because it provides data structures for efficiently storing large datasets and tools for working with them. In this research, pandas was used to analyze the CICDDoS2019 dataset, which contains network traffic data with multiple variables. The library's powerful data manipulation capabilities, such as indexing, grouping, and merging, allowed for effective analysis and processing of the complex dataset; a short preprocessing sketch is given below. After pre-processing, the dataset was labeled based on 20 features (destination IP, flow ID, minimum forward segment size, flow duration, forward header length, source port, minimum packet length, minimum forward packet length, mean packet length, maximum forward packet length, average packet size, average forward segment size, mean forward packet length, maximum packet length, total forward inter-arrival time, total length of forward packets, sub flow forward bytes, destination port, and source IP) [9]. However, we chose to exclude certain features, such as the timestamp, which was frequently used in prior studies. 
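As a rough sketch of the pandas-based preparation just described (added for illustration; the file name and exact column spellings are hypothetical, since the CICDDoS2019 CSV headers vary between files), the preprocessing could look as follows.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical file name; CICDDoS2019 ships as several per-attack CSV files.
df = pd.read_csv("cicddos2019_flows.csv")
df.columns = df.columns.str.strip()  # headers often carry stray whitespace

# Exclude features that are not usable on live traffic, in particular the
# timestamp (see the discussion continuing below).
df = df.drop(columns=["Timestamp"], errors="ignore")

X = df.drop(columns=["Label"])
y = df["Label"]

# 80/20 train/test split, stratified to preserve the class balance.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)
```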
Although the timestamp is one of the most important features identified during feature extraction, we aimed to demonstrate the realism of our algorithms by not relying on it. Previous research has shown that algorithms relying on the timestamp achieve a high accuracy in detecting the type of attack, near 99%; however, this approach is not practical in real-world scenarios, as timestamps in actual network traffic will differ, so such a model does not fit live data and does not provide high performance in practice. Various attack types were considered, with the exception of WebDDoS due to insufficient data (less than 400 records). The considered attacks included DrDoS_DNS, DrDoS_LDAP, DrDoS_MSSQL, DrDoS_NetBIOS, DrDoS_NTP, Portmap, DrDoS_SNMP, DrDoS_SSDP, Syn, TFTP, DrDoS_UDP and UDP-lag, in addition to BENIGN records. A balanced set of records, 113,828 per class, was utilized to achieve the highest accuracy in modeling, resulting in a total of 1,365,936 records. The data was split into 80% for training the model and 20% for testing [9]. ### C. Performance Measurements The accuracy of classification algorithms was assessed using four metrics, namely accuracy, precision, recall, and F1 score [25]. These evaluation scales are based on four key elements: True Positives (TP), which represents the number of DDoS traffic instances correctly classified; True Negatives (TN), referring to the number of normal traffic instances accurately classified; False Positives (FP), indicating the number of normal traffic instances misclassified as DDoS; and False Negatives (FN), denoting the number of DDoS traffic instances misclassified as normal. In Table 1 these four elements are denoted \(P\), \(Q\), \(R\), and \(S\), respectively. The efficiency of the algorithms was calculated using a confusion matrix, as depicted in Table 1. 1.1 The Confusion Matrix reflects the accuracy of classifying generated data into pre-determined categories throughout the learning, validation, and testing process. 1.2 Accuracy represents the proportion of DDoS attacks accurately identified in the dataset and is calculated according to Eq. (1). \[\text{Accuracy}=\frac{P+Q}{P+Q+R+S} \tag{1}\] 1.3 Recall quantifies the ability to correctly classify DDoS traffic and is calculated according to Eq. (2). \[\text{Recall}=\frac{P}{P+S} \tag{2}\] 1.4 Precision reflects the proportion of correctly categorized DDoS attacks out of all instances classified as DDoS and is determined by Eq. (3). \[\text{Precision}=\frac{P}{P+R} \tag{3}\] 1.5 F1-score is a composite measure of performance that balances Precision (A) and Recall (B) through a harmonic mean, as specified by Eq. (4). \[\text{F1-score}=2\cdot\frac{A\cdot B}{A+B} \tag{4}\] \begin{table} \begin{tabular}{l l|l l} \hline & & \multicolumn{2}{c}{_Predicted_} \\ & & Positive & Negative \\ \hline _Actual_ & Positive & P & S \\ & Negative & R & Q \\ \hline \end{tabular} \end{table} Table 1: Confusion Matrix ## V Result and Analysis The performance of the machine learning (ML) model applied to the CICDDoS2019 dataset was evaluated utilizing a suite of metrics, including F1 score, accuracy, recall, and precision. These metrics offer a comprehensive evaluation of the model's capability to detect various types of DDoS attacks with accuracy. The accuracy results, ranked in descending order, revealed that the Random Forest algorithm had the highest accuracy of 68.9%, followed by XGBoost with 63.5%, Support Vector Machine with 59.2%, and Decision Tree with 47.8%. As such, Random Forest emerged as the most accurate among the evaluated algorithms. 
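Continuing the illustrative sketches above (again not the authors' code; `models`, `X_test`, and `y_test` are the hypothetical names introduced earlier), the metrics of Eqs. (1)-(4) can be computed per label with scikit-learn.

```python
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix

for name, model in models.items():
    y_pred = model.predict(X_test)
    # Overall accuracy, Eq. (1): (P + Q) / (P + Q + R + S).
    print(name, "accuracy:", accuracy_score(y_test, y_pred))
    # Per-class precision (Eq. 3), recall (Eq. 2) and F1-score (Eq. 4),
    # in the format of the classification reports in Tables 2 and 3.
    print(classification_report(y_test, y_pred, digits=2))
    # Multi-class generalization of the confusion matrix in Table 1.
    print(confusion_matrix(y_test, y_pred))
```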
The results of the Random Forest algorithm are displayed in Table (2), revealing its low precision for some attack types, such as DrDoS_SSDP with a precision of 0.63 and DrDoS_UDP with a precision of 0.49. On the other hand, the model achieved high scores for certain attack types, such as Portmap, with a precision of 0.99. Table (3) presents the results obtained by the Decision Tree algorithm, which yielded lower scores compared to the Random Forest algorithm. Despite this, the Decision Tree model displayed a high precision in detecting benign records, reaching 0.96. Similarly, the model exhibited a high precision of 0.98 for the Portmap attack, but largely failed to detect certain attacks, such as DrDoS_SNMP and DrDoS_SSDP, whose recall scores fell below 0.10. \begin{table} \begin{tabular}{l|l l l} _Label_ & _Precision_ & _Recall_ & _F1-score_ \\ \hline _BENIGN_ & 0.98 & 1.00 & 0.99 \\ _DrDoS_DNS_ & 0.55 & 0.62 & 0.58 \\ _DrDoS_LDAP_ & 0.56 & 0.78 & 0.65 \\ _DrDoS_MSSQL_ & 0.70 & 0.77 & 0.73 \\ _DrDoS_NTP_ & 0.90 & 0.90 & 0.90 \\ _DrDoS_NetBIOS_ & 0.82 & 0.21 & 0.33 \\ _DrDoS_SNMP_ & 0.66 & 0.94 & 0.78 \\ _DrDoS_SSDP_ & 0.63 & 0.13 & 0.22 \\ _DrDoS_UDP_ & 0.49 & 0.92 & 0.64 \\ _Portmap_ & 0.99 & 1.00 & 1.00 \\ _Syn_ & 0.56 & 0.80 & 0.66 \\ _TFTP_ & 0.70 & 0.54 & 0.61 \\ _UDP-lag_ & 0.94 & 0.35 & 0.51 \\ \end{tabular} \end{table} Table 2: Classification Report for Random Forest \begin{table} \begin{tabular}{l|l l l} _Label_ & _Precision_ & _Recall_ & _F1-score_ \\ \hline _BENIGN_ & 0.96 & 0.99 & 0.98 \\ _DrDoS_DNS_ & 0.11 & 0.33 & 0.16 \\ _DrDoS_LDAP_ & 0.44 & 0.56 & 0.49 \\ _DrDoS_MSSQL_ & 0.42 & 0.46 & 0.44 \\ _DrDoS_NTP_ & 0.82 & 0.34 & 0.48 \\ _DrDoS_NetBIOS_ & 0.39 & 0.21 & 0.28 \\ _DrDoS_SNMP_ & 0.31 & 0.02 & 0.03 \\ _DrDoS_SSDP_ & 0.27 & 0.08 & 0.12 \\ _DrDoS_UDP_ & 0.40 & 0.76 & 0.53 \\ _Portmap_ & 0.98 & 0.98 & 0.98 \\ _Syn_ & 0.55 & 0.70 & 0.62 \\ _TFTP_ & 0.63 & 0.44 & 0.52 \\ _UDP-lag_ & 0.79 & 0.37 & 0.51 \\ \end{tabular} \end{table} Table 3: Classification Report for Decision Tree Moreover, the Receiver Operating Characteristic (ROC) curve was employed to visually analyze the balance between the true positive rate and false positive rate of the model, thereby illuminating its capability in differentiating between benign and malicious traffic. Furthermore, the accuracy, recall, precision, and F1 score were calculated and reported for each label in the dataset, offering an in-depth analysis of the model's performance for each type of attack. Such information is imperative in evaluating the overall performance of the model and recognizing areas for enhancement. Figure 1 presents the ROC curve of the highest-performing Random Forest model, while Figure 2 displays the ROC curve of the lowest-performing Decision Tree model. Fig. 4: ROC Curve of SVM. Fig. 5: ROC Curve of Decision Tree. ## VI Conclusion and Future Work This paper sought to assess the efficacy of machine learning techniques in identifying Distributed Denial of Service attacks in software-defined networking. Four algorithms, including Random Forest, Decision Tree, Support Vector Machine, and XGBoost, were employed and evaluated utilizing the CICDDoS2019 dataset, in addition to dropping some commonly used features such as the timestamp. The results, evaluated through accuracy, recall, precision, and F1 score metrics, revealed that the Random Forest algorithm achieved the highest accuracy at 68.9%. 
The study determined that ML-based detection is a more accurate and efficient approach than other methods, making it a valuable tool for real-world DDoS attack identification. Despite the computational requirements of non-parametric algorithms, the results indicate their potential for providing insightful analysis of complex relationships between inputs and outputs in DDoS attack detection in SDN. Future research could focus on optimizing the computational efficiency of non-parametric algorithms, enabling their application in real-time, large-scale scenarios.
2303.09620
On the existence of global solutions for the 3D chemorepulsion system
In this paper, we give sufficient conditions for global-in-time existence of classical solutions for the fully parabolic chemorepulsion system posed on a convex, bounded three-dimensional domain. Our main result establishes global-in-time existence of regular nonnegative solutions provided that $\nabla\sqrt{u} \in L^4(0, T; L^2(\Omega))$. Our method is related to the Bakry--\'Emery calculation and appears to be new in this context.
Tomasz Cieślak, Mario Fuest, Karol Hajduk, Mikołaj Sierżęga
2023-03-16T19:50:19Z
http://arxiv.org/abs/2303.09620v2
# On the existence of global solutions for the 3D chemorepulsion system ###### Abstract. In this paper, we give sufficient conditions for global-in-time existence of classical solutions for the fully parabolic chemorepulsion system posed on a convex, bounded three-dimensional domain. Our main result establishes global-in-time existence of regular nonnegative solutions provided that \(\nabla\sqrt{u}\in L^{4}(0,T;L^{2}(\Omega))\). Our method is related to the Bakry-Émery calculation and appears to be new in this context. Key words and phrases: boundedness of solutions, chemotaxis, chemorepulsion, Lyapunov-like functional, Bernis-type inequality 2010 Mathematics Subject Classification: Primary 35B45, 35K45; Secondary 92C17 ## 1. Introduction In this paper, we study the problem of global existence of solutions of the fully parabolic chemorepulsion system. The two-dimensional case was solved in [4]. Unlike in the more widely known chemoattraction case, 2D chemorepulsion leads to the global-in-time existence of classical solutions regardless of the size of the initial data. The question of global existence in three and higher dimensions remains open. In the present paper, we look into the 3D case and establish a conditional global regularity result for this model. First, we introduce the model. Let \(\Omega\subset\mathbb{R}^{n}\) be an open, bounded domain with a sufficiently smooth boundary. We consider the following fully parabolic chemorepulsion system \[\begin{cases}\partial_{t}u=\nabla\cdot(\nabla u+u\nabla v)\\ \partial_{t}v=\Delta v-v+u\end{cases}\qquad\text{in}\qquad(0,\infty)\times\Omega, \tag{1.1}\] with homogeneous Neumann boundary conditions (no-flux through the boundary) \[\frac{\partial u}{\partial n}\bigg{|}_{\partial\Omega}=0\qquad\&\qquad\frac{\partial v}{\partial n}\bigg{|}_{\partial\Omega}=0, \tag{1.2}\] where \(n\) is the unit outward normal to the boundary, and with nonnegative initial conditions \[u(0,x)=u_{0}(x)\geq 0\qquad\&\qquad v(0,x)=v_{0}(x)\geq 0. \tag{1.3}\] The functions \(u\) and \(v\) describe densities of some living organisms and of a chemical substance which repels them, respectively. The '\(+\)' sign on the right-hand-side of the first equation in (1.1) corresponds to the repulsion mechanism. The opposite phenomenon appears in the widely studied chemoattraction case described by the Keller-Segel system. For an overview of results for such systems, we refer to the surveys [2, 11]. The latter survey contains a chapter concerning the construction of solutions, including the irregular ones, to the fully parabolic chemorepulsion system. Our main result is the following conditional global existence theorem. **Theorem 1.1**.: _Let \(\Omega\subset\mathbb{R}^{3}\) be a convex, bounded domain with smooth boundary and let \(u_{0},v_{0}\in W^{1,p}(\Omega)\) for some \(p>3\) with \(0\not\equiv u_{0}\geq 0\) and \(v_{0}\geq 0\). Let \((u,v)\) denote the maximal classical solution of (1.1)-(1.3) with maximal existence time \(T_{\max}\) (see Lemma 3.1 below). If \(\nabla\sqrt{u}\in L^{4}(0,T;L^{2}(\Omega))\) for every finite \(T\in(0,T_{\max}]\), then \(T_{\max}=\infty\), i.e., the solution is global in time._ ## 2. Preliminaries Throughout, \(D^{2}\varphi\) denotes the Hessian of a function \(\varphi\). We first recall the Bochner formula, which lies at the heart of our computations. **Lemma 2.1** (Bochner formula).: _For every \(u\in C^{3}(\bar{\Omega})\) we have_ \[\frac{1}{2}\Delta\left|\nabla u\right|^{2}=\nabla\left(\Delta u\right)\cdot\nabla u+\left|D^{2}u\right|^{2}\quad\text{in}\quad\bar{\Omega}. \tag{2.1}\] 
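As a quick symbolic sanity check of the Bochner formula (2.1) in two space dimensions (an illustration added to this text, not part of the paper), one can verify the identity for a generic smooth function with sympy:

```python
import sympy as sp

x, y = sp.symbols("x y")
u = sp.Function("u")(x, y)

ux, uy = sp.diff(u, x), sp.diff(u, y)
lap = sp.diff(u, x, 2) + sp.diff(u, y, 2)

# 1/2 * Laplacian of |grad u|^2
grad_sq = ux**2 + uy**2
lhs = sp.Rational(1, 2) * (sp.diff(grad_sq, x, 2) + sp.diff(grad_sq, y, 2))

# grad(Laplacian u) . grad u  +  |Hessian u|^2
rhs = sp.diff(lap, x) * ux + sp.diff(lap, y) * uy \
    + sum(sp.diff(u, a, b)**2 for a in (x, y) for b in (x, y))

assert sp.simplify(lhs - rhs) == 0  # identity (2.1) holds
print("Bochner formula verified")
```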
**Lemma 2.2**.: _Let \(\Omega\subset\mathbb{R}^{n}\), \(n\in\mathbb{N}\), be a convex bounded domain with smooth boundary. Suppose that a function \(u\in C^{2}(\bar{\Omega})\) satisfies \(\frac{\partial u}{\partial n}=0\) on \(\partial\Omega\). Then_ \[\frac{\partial\left|\nabla u\right|^{2}}{\partial n}\Bigg{|}_{\partial\Omega}\leq 0.\] Proof.: See [5, p. 95]. We will use a higher-dimensional version of the Bernis-type inequality given by Winkler [18, Lemma 3.3] (with \(h(\varphi)=\varphi\)). **Lemma 2.3**.: _Let \(\Omega\subset\mathbb{R}^{n}\), \(n\in\mathbb{N}\), be a smooth, bounded domain. For all positive \(\varphi\in C^{2}\left(\bar{\Omega}\right)\) with \(\frac{\partial\varphi}{\partial n}=0\) on \(\partial\Omega\), we have the following inequality_ \[\int_{\Omega}\frac{\left|\nabla\varphi\right|^{4}}{\varphi^{3}}\leq\left(2+\sqrt{n}\right)^{2}\int_{\Omega}\varphi\left|D^{2}\log\varphi\right|^{2}. \tag{2.2}\] Next, we prove two estimates holding in three-dimensional domains. The first one relates the Hessian of a function \(\varphi\) with \(\nabla\Delta\varphi\) (in contrast to the full third-order derivative). **Lemma 2.4**.: _Let \(\Omega\subset\mathbb{R}^{3}\) be a smooth, bounded domain. Then there is a positive constant \(C\) such that for every \(\varphi\in C^{3}(\bar{\Omega})\) with \(\frac{\partial\varphi}{\partial n}=0\) on \(\partial\Omega\) we have_ \[\left\|D^{2}\varphi\right\|_{L^{6}(\Omega)}\leq C\left\|\nabla\Delta\varphi\right\|_{L^{2}(\Omega)}.\] Proof.: According to [7, Theorem 19.1], there is \(c_{1}>0\) such that \[\left\|D^{2}\varphi\right\|_{L^{6}}\leq c_{1}\left\|\Delta\varphi\right\|_{L^{6}}+c_{1}\left\|\varphi-\bar{\varphi}\right\|_{L^{6}}\] for every \(\varphi\in C^{2}(\bar{\Omega})\) with \(\frac{\partial\varphi}{\partial n}=0\) on \(\partial\Omega\), where \(\bar{\varphi}=\frac{1}{\left|\Omega\right|}\int_{\Omega}\varphi\). As \(W^{1,2}(\Omega)\hookrightarrow L^{6}(\Omega)\), we can further estimate \[\left\|D^{2}\varphi\right\|_{L^{6}}\leq c_{2}\left(\left\|\nabla\Delta\varphi\right\|_{L^{2}}^{2}+\left\|\Delta\varphi\right\|_{L^{2}}^{2}\right)^{1/2}+c_{2}\left(\left\|\nabla\varphi\right\|_{L^{2}}^{2}+\left\|\varphi-\bar{\varphi}\right\|_{L^{2}}^{2}\right)^{1/2},\] for every \(\varphi\in C^{3}(\bar{\Omega})\) with \(\frac{\partial\varphi}{\partial n}=0\) on \(\partial\Omega\), for some \(c_{2}>0\). The statement then follows by the Poincaré inequality (cf. [8, Lemma A.1], for instance). Finally, we combine several of the lemmata above to obtain an estimate required in the proof of our main result. **Lemma 2.5**.: _Let \(\Omega\subset\mathbb{R}^{3}\) be a convex, smooth bounded domain and let \(\varepsilon>0\) and \(M>0\). Then there exists \(C>0\) such that for every \(0<\varphi\in C^{2}(\bar{\Omega})\) with \(\int_{\Omega}\varphi=M>0\) and \(\psi\in C^{3}(\bar{\Omega})\) that satisfy \(\partial_{\nu}\varphi=\partial_{\nu}\psi=0\) on \(\partial\Omega\) we have_ \[\int_{\Omega}\left|(\nabla\sqrt{\varphi})^{T}D^{2}\psi(\nabla\sqrt{\varphi})\right|\leq C\left(\int_{\Omega}\left|\nabla\sqrt{\varphi}\right|^{2}\right)^{3}+C+\varepsilon\int_{\Omega}\varphi\left|D^{2}\log\varphi\right|^{2}+\varepsilon\int_{\Omega}\left|\nabla\Delta\psi\right|^{2}. 
\tag{2.3}\] Proof.: By Hölder's inequality, we have \[\int_{\Omega}|(\nabla\sqrt{\varphi})^{T}D^{2}\psi(\nabla\sqrt{\varphi})|\leq\frac{1}{2}\int_{\Omega}|\nabla\sqrt{\varphi}|\left|D^{2}\psi\right|\frac{|\nabla\varphi|}{\varphi^{3/4}}\varphi^{1/4}\leq\frac{1}{2}\left\|\nabla\sqrt{\varphi}\right\|_{L^{2}(\Omega)}\left\|D^{2}\psi\right\|_{L^{6}(\Omega)}\left\|\frac{\nabla\varphi}{\varphi^{\frac{3}{4}}}\right\|_{L^{4}(\Omega)}\left\|\varphi^{1/4}\right\|_{L^{12}(\Omega)} \tag{2.4}\] for all \(0<\varphi\in C^{2}(\bar{\Omega})\) and \(\psi\in C^{3}(\bar{\Omega})\). Since \(W^{1,2}(\Omega)\hookrightarrow L^{6}(\Omega)\) and \(\int_{\Omega}\varphi=M\) there is \(c_{1}>0\) such that \[\left\|\varphi^{1/4}\right\|_{L^{12}(\Omega)}=\left\|\varphi^{1/2}\right\|_{L^{6}(\Omega)}^{1/2}\leq c_{1}\left\|\varphi^{1/2}\right\|_{W^{1,2}(\Omega)}^{1/2}\leq c_{1}\left\|\nabla\varphi^{1/2}\right\|_{L^{2}(\Omega)}^{1/2}+c_{1}\left\|\varphi^{1/2}\right\|_{L^{2}(\Omega)}^{1/2}=c_{1}\left(\left\|\nabla\sqrt{\varphi}\right\|_{L^{2}(\Omega)}^{1/2}+M^{1/4}\right),\] for all \(0<\varphi\in C^{2}(\bar{\Omega})\) with \(\int_{\Omega}\varphi=M\). In combination with (2.4), Lemma 2.4, Winkler's inequality (2.2), the elementary estimate \[a(\sqrt{a}+\sqrt{b})\leq 2(a+b)\sqrt{a+b}\quad\text{for}\quad a,b\geq 0\] and Young's inequality, we see that by taking \(a=\left\|\nabla\sqrt{\varphi}\right\|_{L^{2}(\Omega)}\), \(b=M^{1/2}\) and with some \(c_{2}>0\) and \(C>0\), we have \[\int_{\Omega}|(\nabla\sqrt{\varphi})^{T}D^{2}\psi(\nabla\sqrt{\varphi})|\leq c_{2}\left\|\nabla\sqrt{\varphi}\right\|_{L^{2}(\Omega)}\left\|\nabla\Delta\psi\right\|_{L^{2}(\Omega)}\left(\int_{\Omega}\varphi\left|D^{2}\log\varphi\right|^{2}\right)^{1/4}\left(\left\|\nabla\sqrt{\varphi}\right\|_{L^{2}(\Omega)}^{1/2}+M^{1/4}\right)\leq 2c_{2}\left(\left\|\nabla\sqrt{\varphi}\right\|_{L^{2}}+M^{1/2}\right)^{\frac{3}{2}}\left(\int_{\Omega}\varphi\left|D^{2}\log\varphi\right|^{2}\right)^{1/4}\left\|\nabla\Delta\psi\right\|_{L^{2}}\leq C\left\|\nabla\sqrt{\varphi}\right\|_{L^{2}}^{6}+C+\varepsilon\int_{\Omega}\varphi\left|D^{2}\log\varphi\right|^{2}+\varepsilon\left\|\nabla\Delta\psi\right\|_{L^{2}}^{2}\] for all \(0<\varphi\in C^{2}(\bar{\Omega})\) and \(\psi\in C^{3}(\bar{\Omega})\) with \(\int_{\Omega}\varphi=M\) and \(\partial_{\nu}\varphi=\partial_{\nu}\psi=0\) on \(\partial\Omega\). ## 3. Known properties of the solution Next, we list some known properties of solutions to (1.1)-(1.3) constructed in [4]. **Lemma 3.1**.: _Let_ \[\Omega\subset\mathbb{R}^{n},n\in\mathbb{N},\text{ be a smooth, bounded domain} \tag{3.1}\] _and let_ \[u_{0},v_{0}\in W^{1,p}(\Omega)\text{ for some }p>n\quad\text{with}\quad 0\not\equiv u_{0}\geq 0\text{ and }v_{0}\geq 0\text{ in }\Omega. \tag{3.2}\] _Then the system (1.1)-(1.3) has a maximal unique classical solution_ \[(u,v)\in C^{0}([0,T_{\max});W^{1,p}(\Omega))\cap C^{\infty}(\bar{\Omega}\times(0,T_{\max})) \tag{3.3}\] _and if \(T_{\max}<\infty\), then \(\limsup_{t\nearrow T_{\max}}(\|u(\cdot,t)\|_{L^{\infty}(\Omega)}+\|v(\cdot,t)\|_{L^{\infty}(\Omega)})=\infty\). Moreover, \(u\) and \(v\) are positive in \(\bar{\Omega}\times(0,T_{\max})\)._ Proof.: The existence of such a solution has been proved in [4, Theorem 2.1] and positivity follows from the strict maximum principle. As noted in [4, (3)], integrating both equations in (1.1) immediately ensures that both solution components are uniformly in time bounded in \(L^{1}(\Omega)\). **Lemma 3.2**.: _Suppose the assumptions of Lemma 3.1 hold. 
Then the solution \((u,v)\) of (1.1)-(1.3) given by Lemma 3.1 fulfills_ \[\|u(\cdot,t)\|_{L^{1}(\Omega)}=\|u_{0}\|_{L^{1}(\Omega)}\quad\text{and}\quad\|v(\cdot,t)\|_{L^{1}(\Omega)}=\mathrm{e}^{-t}\left(\|v_{0}\|_{L^{1}(\Omega)}-\|u_{0}\|_{L^{1}(\Omega)}\right)+\|u_{0}\|_{L^{1}(\Omega)}\] _for all \(t\in(0,T_{\max})\)._ Moreover, [4] has identified a Lyapunov functional, which served as the main ingredient for solving the question of global existence in the two-dimensional case. **Lemma 3.3**.: _Under the assumptions of Lemma 3.1 the solution \((u,v)\) satisfies_ \[\frac{\mathrm{d}}{\mathrm{d}t}\left(\int_{\Omega}u\log u+\frac{1}{2}\int_{\Omega}\left|\nabla v\right|^{2}\right)=-\left(\int_{\Omega}\left|\Delta v\right|^{2}+\int_{\Omega}\left|\nabla v\right|^{2}+\int_{\Omega}\frac{\left|\nabla u\right|^{2}}{u}\right) \tag{3.4}\] _for all \(t\in(0,T_{\max})\). In particular,_ \[\int_{0}^{T_{\max}}\left(\int_{\Omega}\left|\Delta v\right|^{2}+\int_{\Omega}\left|\nabla v\right|^{2}+\int_{\Omega}\frac{\left|\nabla u\right|^{2}}{u}\right)<\infty. \tag{3.5}\] Proof.: The identity (3.4) is entailed in [4, Lemma 2.2], upon which (3.5) results by an integration in time, as the Lyapunov functional is bounded from below. ## 4. The main estimate This section contains our main contribution, a calculation of the evolution of the Fisher information along the trajectories of (1.1)-(1.3). It is related to the Bakry-Émery calculation, see [1], applied however to a system of equations. Throughout this section, we fix a domain and initial data fulfilling (3.1) and (3.2) as well as the solution \((u,v)\) of (1.1)-(1.3), with maximal existence time \(T_{\max}\) given by Lemma 3.1. Moreover, we denote \[\frac{\mathrm{d}}{\mathrm{d}t}\left(\int_{\Omega}\left|\Delta v\right|^{2}+\int_{\Omega}\left|\nabla v\right|^{2}+\int_{\Omega}\frac{\left|\nabla u\right|^{2}}{u}\right)\eqqcolon\frac{\mathrm{d}}{\mathrm{d}t}I(t). \tag{4.1}\] Our aim is to obtain an estimate of \(I\). Notice that \(I\) is an extended version of the Fisher information. Indeed, in the case of a single heat equation, the quantity \(\int_{\Omega}\frac{\left|\nabla u\right|^{2}}{u}\) is called Fisher's information. The following remark explains our strategy. _Remark 4.1_.: We note that the inequality \(\dot{I}\leq cI^{2}\) would imply boundedness of \(I\) due to (3.5). Indeed, using Ladyzhenskaya's trick (see [10]) we would have \[\frac{\mathrm{d}}{\mathrm{d}t}\left(I(t)\mathrm{e}^{-\int_{0}^{t}cI(s)\,\mathrm{d}s}\right)\leq 0,\] and consequently \(I(t)\leq I(0)\exp\left(c\int_{0}^{t}I(s)\,\mathrm{d}s\right)\), where the exponent remains finite by (3.5). Below we formulate and prove our main contribution. It extends the calculation controlling the evolution of the Fisher information to the case of a system of equations. 
**Lemma 4.2**.: _For all \(t\in(0,T_{\max})\), the estimate_ \[\dot{I}(t)\leq-2\int_{\Omega}u\left|D^{2}\log u\right|^{2}+8\int_{\Omega}\left(\nabla\sqrt{u}\right)^{T}D^{2}v\left(\nabla\sqrt{u}\right)-2\int_{\Omega}|\nabla\Delta v|^{2}-4\int_{\Omega}|\Delta v|^{2}-2\int_{\Omega}|\nabla v|^{2}+2\int_{\Omega}\nabla u\cdot\nabla v\] _holds._ Proof.: We notice \[\frac{\mathrm{d}}{\mathrm{d}t}\int_{\Omega}|\Delta v|^{2}=2\int_{\Omega}\Delta v_{t}\Delta v\quad\text{in}\quad(0,T_{\max}).\] From the second equation in (1.1) we can substitute \(\Delta v=v_{t}+v-u\) (equivalently, we can take the inner product of the second equation in (1.1) with \(\Delta v_{t}\)), to get \[\frac{\mathrm{d}}{\mathrm{d}t}\int_{\Omega}|\Delta v|^{2}=2\!\int_{\Omega}\Delta v_{t}\left(v_{t}+v-u\right)=-2\!\int_{\Omega}\nabla v_{t}\cdot(\nabla v_{t}+\nabla v-\nabla u)=-2\!\int_{\Omega}|\nabla v_{t}|^{2}-\frac{\mathrm{d}}{\mathrm{d}t}\!\int_{\Omega}|\nabla v|^{2}+2\!\int_{\Omega}\nabla v_{t}\cdot\nabla u\quad\text{in}\quad(0,T_{\max}). \tag{4.2}\] From (4.2) we get \[\frac{\mathrm{d}}{\mathrm{d}t}\left(\int_{\Omega}|\Delta v|^{2}+\int_{\Omega}|\nabla v|^{2}\right)=-2\!\int_{\Omega}|\nabla v_{t}|^{2}+2\!\int_{\Omega}\nabla v_{t}\cdot\nabla u\quad\text{in}\quad(0,T_{\max}) \tag{4.3}\] and from (4.1) and (4.3) we obtain \[\dot{I}(t)=-2\!\int_{\Omega}|\nabla v_{t}|^{2}+2\!\int_{\Omega}\nabla v_{t}\cdot\nabla u+\frac{\mathrm{d}}{\mathrm{d}t}\int_{\Omega}\frac{\left|\nabla u\right|^{2}}{u}\quad\text{for all}\quad t\in(0,T_{\max}). \tag{4.4}\] Next, we compute the last term on the right-hand-side, \[\frac{\mathrm{d}}{\mathrm{d}t}\int_{\Omega}\frac{\left|\nabla u\right|^{2}}{u}=4\frac{\mathrm{d}}{\mathrm{d}t}\int_{\Omega}\left|\nabla\sqrt{u}\right|^{2}=8\int_{\Omega}\nabla\varrho_{t}\cdot\nabla\varrho\quad\text{in}\quad(0,T_{\max}), \tag{4.5}\] where we applied the substitution \(\varrho\coloneqq\sqrt{u}\). From the first equation in (1.1) we have \[\varrho_{t}=\frac{u_{t}}{2\sqrt{u}}=\frac{\Delta u+\nabla u\cdot\nabla v+u\Delta v}{2\sqrt{u}}=\frac{\Delta u}{2\sqrt{u}}+\nabla\varrho\cdot\nabla v+\frac{1}{2}\varrho\Delta v, \tag{4.6}\] where \[\Delta\varrho=\mathrm{div}\left(\nabla\varrho\right)=\mathrm{div}\left(\frac{\nabla u}{2\sqrt{u}}\right)=\frac{\Delta u}{2\sqrt{u}}-\frac{\left|\nabla u\right|^{2}}{4u^{3/2}}=\frac{\Delta u}{2\sqrt{u}}-\frac{\left|\nabla\varrho\right|^{2}}{\varrho} \tag{4.7}\] in \(\Omega\times(0,T_{\max})\). 
So, by plugging (4.7) into (4.6), we find \[\varrho_{t}=\Delta\varrho+\frac{\left|\nabla\varrho\right|^{2}}{\varrho}+ \nabla\varrho\cdot\nabla v+\frac{1}{2}\varrho\Delta v\quad\text{in}\quad\Omega \times(0,T_{\max}).\] Hence, (4.5) becomes \[\frac{\mathrm{d}}{\mathrm{d}t}\int_{\Omega}\frac{\left|\nabla u \right|^{2}}{u} =8\int_{\Omega}\nabla\varrho\cdot\nabla\left(\Delta\varrho+\frac{ \left|\nabla\varrho\right|^{2}}{\varrho}+\nabla\varrho\cdot\nabla v+\frac{1}{2 }\varrho\Delta v\right)\] \[=8\left[\int_{\Omega}\nabla\varrho\cdot\nabla\left(\Delta \varrho\right)+\int_{\Omega}2\frac{\left(\nabla\varrho\right)^{T}D^{2} \varrho\left(\nabla\varrho\right)}{\varrho}-\int_{\Omega}\frac{\left|\nabla \varrho\right|^{4}}{\varrho^{2}}\right]\] \[\quad+8\left[\int_{\Omega}\left(\nabla\varrho\right)^{T}D^{2} \varrho\left(\nabla v\right)+\int_{\Omega}\left(\nabla\varrho\right)^{T}D^{2} v\left(\nabla\varrho\right)\right] \tag{4.8}\] \[\quad+4\int_{\Omega}\nabla\left(\varrho\Delta v\right)\cdot \nabla\varrho\quad\text{in}\quad(0,T_{\max}).\] Due to the Bochner formula (2.1), \[\nabla\varrho\cdot\nabla\left(\Delta\varrho\right)=-\left|D^{2} \varrho\right|^{2}+\frac{1}{2}\Delta\left(\left|\nabla\varrho\right|^{2} \right)\quad\text{in}\quad\Omega\times(0,T_{\max}),\] we get \[\int_{\Omega}\nabla\varrho\cdot\nabla\left(\Delta\varrho\right)+ \int_{\Omega}2\frac{\left(\nabla\varrho\right)^{T}D^{2}\varrho\left(\nabla \varrho\right)}{\varrho}-\int_{\Omega}\frac{\left|\nabla\varrho\right|^{4}}{ \varrho^{2}}\] \[=-\int_{\Omega}\left|D^{2}\varrho-\frac{\nabla\varrho\otimes \nabla\varrho}{\varrho}\right|^{2}+\frac{1}{2}\int_{\Omega}\Delta\left(\left| \nabla\varrho\right|^{2}\right)\quad\text{in}\quad(0,T_{\max}).\] We note that the boundary condition \(\left.\frac{\partial u}{\partial n}\right|_{\partial\Omega}=0\) implies that \[\left.\frac{\partial\varrho}{\partial n}\right|_{\partial\Omega}=\left.\frac{ \frac{\partial u}{\partial n}}{2\sqrt{u}}\right|_{\partial\Omega}=0,\] so that an integration by parts and an application of Lemma 2.2, which is possible thanks to the convexity of the domain \(\Omega\), yield \[\int_{\Omega}\Delta\left(\left|\nabla\varrho\right|^{2}\right)=\int_{\partial \Omega}\frac{\partial\left(\left|\nabla\varrho\right|^{2}\right)}{\partial n }\leq 0\quad\text{in}\quad(0,T_{\max}).\] Plugging the above into (4.8), we obtain \[\frac{\mathrm{d}}{\mathrm{d}t}\int_{\Omega}\frac{\left|\nabla u \right|^{2}}{u} \leq-2\int_{\Omega}u\left|D^{2}\log u\right|^{2} \tag{4.9}\] \[+8\left[\int_{\Omega}\left(\nabla\varrho\right)^{T}D^{2}\varrho \left(\nabla v\right)+\int_{\Omega}\left(\nabla\varrho\right)^{T}D^{2}v\left( \nabla\varrho\right)\right]\] \[+4\int_{\Omega}\nabla\left(\varrho\Delta v\right)\cdot\nabla \varrho\quad\text{in}\quad(0,T_{\max}),\] where we also used the relations \[\int_{\Omega}\left|D^{2}\varrho-\frac{\nabla\varrho\otimes\nabla\varrho}{ \varrho}\right|^{2}=\int_{\Omega}\varrho^{2}\left|D^{2}\log\varrho\right|^{2 }=\frac{1}{4}\int_{\Omega}u\left|D^{2}\log u\right|^{2}\] in the first term on the right-hand side. We now focus on the last term in (4.9), \[4\int_{\Omega}\nabla\left(\varrho\Delta v\right)\cdot\nabla\varrho=4\int_{ \Omega}\left|\nabla\varrho\right|^{2}\Delta v+4\int_{\Omega}\varrho\nabla \varrho\cdot\nabla\left(\Delta v\right). 
\tag{4.10}\] Integration by parts yields \[4\int_{\Omega}\left|\nabla\varrho\right|^{2}\Delta v=-4\int_{\Omega}\nabla\left(\left|\nabla\varrho\right|^{2}\right)\cdot\nabla v=-8\int_{\Omega}\left(\nabla\varrho\right)^{T}D^{2}\varrho\left(\nabla v\right) \tag{4.11}\] in \((0,T_{\max})\). For the second term in (4.10) we substitute \(\Delta v=v_{t}+v-u\) from the second equation in (1.1) to obtain \[4\int_{\Omega}\varrho\nabla\varrho\cdot\nabla\left(\Delta v\right)=2\int_{\Omega}\nabla\varrho^{2}\cdot\nabla\left(v_{t}+v-u\right)=2\int_{\Omega}\nabla u\cdot\nabla v_{t}+2\int_{\Omega}\nabla u\cdot\nabla v-2\int_{\Omega}\left|\nabla u\right|^{2}\quad\text{in}\quad(0,T_{\max}). \tag{4.12}\] Inserting (4.11) and (4.12) in (4.9) gives \[\frac{\mathrm{d}}{\mathrm{d}t}\int_{\Omega}\frac{\left|\nabla u\right|^{2}}{u}\leq-2\int_{\Omega}u\left|D^{2}\log u\right|^{2}+8\int_{\Omega}\left(\nabla\varrho\right)^{T}D^{2}v\left(\nabla\varrho\right)+2\int_{\Omega}\nabla u\cdot\nabla v_{t}+2\int_{\Omega}\nabla u\cdot\nabla v-2\int_{\Omega}\left|\nabla u\right|^{2}\quad\text{in}\quad(0,T_{\max}).\] Therefore, going back to (4.4), we have \[\dot{I}(t)\leq-2\int_{\Omega}u\left|D^{2}\log u\right|^{2}+8\int_{\Omega}\left(\nabla\sqrt{u}\right)^{T}D^{2}v\left(\nabla\sqrt{u}\right)-2\int_{\Omega}\left|\nabla v_{t}\right|^{2}+4\int_{\Omega}\nabla v_{t}\cdot\nabla u+2\int_{\Omega}\nabla u\cdot\nabla v-2\int_{\Omega}\left|\nabla u\right|^{2}\] for all \(t\in(0,T_{\max})\). Since \[-2\int_{\Omega}\left|\nabla v_{t}\right|^{2}+4\int_{\Omega}\nabla v_{t}\cdot\nabla u-2\int_{\Omega}\left|\nabla u\right|^{2}=-2\int_{\Omega}\left|\nabla(v_{t}-u)\right|^{2}=-2\int_{\Omega}\left|\nabla(\Delta v-v)\right|^{2}=-2\int_{\Omega}\left|\nabla\Delta v\right|^{2}-4\int_{\Omega}\left|\Delta v\right|^{2}-2\int_{\Omega}\left|\nabla v\right|^{2}\quad\text{in}\quad(0,T_{\max}),\] we obtain the desired estimate. Next, we simplify the previous differential inequality, which will allow us to argue in a more straightforward manner in the sequel. **Lemma 4.3**.: _Throughout \((0,T_{\max})\), it holds that_ \[\frac{\mathrm{d}}{\mathrm{d}t}\left(4\int_{\Omega}\left|\nabla\sqrt{u}\right|^{2}+\int_{\Omega}\left|\Delta v\right|^{2}\right)\leq-2\int_{\Omega}u|D^{2}\log u|^{2}-2\int_{\Omega}\left|\nabla\Delta v\right|^{2}-2\int_{\Omega}\left|\Delta v\right|^{2}+8\int_{\Omega}(\nabla\sqrt{u})^{T}D^{2}v\nabla\sqrt{u}.\] Proof.: This follows immediately from Lemma 4.2 and the fact that \[\frac{\mathrm{d}}{\mathrm{d}t}\int_{\Omega}\left|\nabla v\right|^{2}=-2\int_{\Omega}\Delta v\,v_{t}=-2\int_{\Omega}\left|\Delta v\right|^{2}-2\int_{\Omega}\left|\nabla v\right|^{2}+2\int_{\Omega}\nabla u\cdot\nabla v\] in \((0,T_{\max})\). ## 5. Proof of the main theorem We are now in a position to utilize our calculation from Lemma 4.2 and complete the proof of the announced result. As in the previous section, we fix a domain \(\Omega\) and initial data \(u_{0},v_{0}\) satisfying (3.1) and (3.2) as well as the solution \((u,v)\) of (1.1)-(1.3) given by Lemma 3.1. Moreover, as the solution is unique by Lemma 3.1, \(T_{\max}\) is infinite if and only if the solution with initial data \((u(\cdot,t_{0}),v(\cdot,t_{0}))\) for some \(t_{0}\in(0,T_{\max})\) exists globally. Thus, by switching to the solution with these initial data and recalling (3.3), we may assume \(u,v\in C^{\infty}(\bar{\Omega}\times[0,T_{\max}))\). 
The following lemma is the first step in a bootstrapping procedure yielding the required regularity of the solution. **Lemma 5.1**.: _Suppose \(n=3\) and that \(\Omega\) is convex. Let \(T\in(0,T_{\max}]\cap(0,\infty)\) and suppose that \(\nabla\sqrt{u}\in L^{4}(0,T;L^{2}(\Omega))\). Then there is \(C>0\) such that_ \[\int_{\Omega}u^{3}(\cdot,t)\leq C\qquad\text{for all}\quad t\in(0,T).\] Proof.: Set \(J(t)\coloneqq 4\int_{\Omega}\left|\nabla\sqrt{u}(\cdot,t)\right|^{2}+\int_{\Omega}\left|\Delta v(\cdot,t)\right|^{2}\) for \(t\in[0,T)\). Taking \(\varphi=u\), \(\psi=v\) and \(\varepsilon=\frac{1}{4}\) in Lemma 2.5 and making use of Lemma 4.3, we arrive at \[\dot{J}(t)\leq c_{1}\left(\int_{\Omega}\big{|}\nabla\sqrt{u}\big{|}^{2}\right)^{3}+c_{1}\leq c_{1}\left(\int_{\Omega}\big{|}\nabla\sqrt{u}\big{|}^{2}\right)^{2}J+c_{1}\] in \((0,T)\) for some \(c_{1}>0\), where the second inequality uses \(\int_{\Omega}|\nabla\sqrt{u}|^{2}\leq J\). Thus, with \(K(t)\coloneqq c_{1}\int_{0}^{t}\left(\int_{\Omega}\big{|}\nabla\sqrt{u}\big{|}^{2}\right)^{2}\), \(t\in(0,T)\), we have \[J(t)\leq\mathrm{e}^{K(t)}J(0)+c_{1}\int_{0}^{t}\mathrm{e}^{K(t)-K(s)}\,\mathrm{d}s\leq\mathrm{e}^{K(T)}J(0)+c_{1}T\mathrm{e}^{K(T)}\] for all \(t\in(0,T)\). Since \(K(T)<\infty\) by assumption, we obtain boundedness of \(\sup_{t\in(0,T)}\|\nabla\sqrt{u}(\cdot,t)\|_{L^{2}(\Omega)}\), which, in conjunction with Lemma 3.2, implies the desired estimate as \(W^{1,2}(\Omega)\) embeds continuously into \(L^{6}(\Omega)\). Next, we show the higher regularity of the obtained solution. **Lemma 5.2**.: _Under the assumptions of Lemma 5.1 there is \(C>0\) such that_ \[\|u(\cdot,t)\|_{L^{\infty}(\Omega)}+\|v(\cdot,t)\|_{L^{\infty}(\Omega)}\leq C\qquad\text{for all}\quad t\in(0,T).\] Proof.: We fix \(3<r<q<\infty\). Making use of well-known semigroup estimates (cf. [17, Lemma 1.3 (ii) and (iii)]), we obtain \[\|\nabla v(\cdot,t)\|_{L^{q}(\Omega)}\leq\|\nabla\mathrm{e}^{t(\Delta-1)}v_{0}\|_{L^{q}(\Omega)}+\int_{0}^{t}\|\nabla\mathrm{e}^{(t-s)(\Delta-1)}u(\cdot,s)\|_{L^{q}(\Omega)}\,\mathrm{d}s\leq c_{1}\mathrm{e}^{-t}\|\nabla v_{0}\|_{L^{q}(\Omega)}+c_{2}\int_{0}^{t}\left(1+(t-s)^{-\frac{1}{2}-\frac{3}{2}(\frac{1}{3}-\frac{1}{q})}\right)\mathrm{e}^{-(t-s)}\|u(\cdot,s)\|_{L^{3}(\Omega)}\,\mathrm{d}s\leq c_{1}\|\nabla v_{0}\|_{L^{q}(\Omega)}+c_{2}\sup_{s\in(0,T)}\|u(\cdot,s)\|_{L^{3}(\Omega)}\int_{0}^{T}\left(1+s^{-\frac{1}{2}-\frac{3}{2}(\frac{1}{3}-\frac{1}{q})}\right)\,\mathrm{d}s\] for all \(t\in(0,T)\) and some \(c_{1},c_{2}>0\). Since \(-\frac{1}{2}-\frac{3}{2}(\frac{1}{3}-\frac{1}{q})>-1\) and recalling Lemma 5.1, we conclude that there is \(c_{3}>0\) such that \(\|\nabla v(\cdot,t)\|_{L^{q}(\Omega)}\leq c_{3}\) for all \(t\in(0,T)\). Since \(q>3\), \(W^{1,q}(\Omega)\) embeds continuously into \(L^{\infty}(\Omega)\) and so the above estimate in conjunction with Lemma 3.2 implies that \(\sup_{t\in(0,T)}\|v(\cdot,t)\|_{L^{\infty}(\Omega)}\) is finite as well. Relying on the maximum principle and again on well-known semigroup estimates (cf. [17, Lemma 1.3 (iv)]), we further estimate \[\|u(\cdot,t)\|_{L^{\infty}(\Omega)}\leq\|\mathrm{e}^{t\Delta}u_{0}\|_{L^{\infty}(\Omega)}+\int_{0}^{t}\left\|\mathrm{e}^{(t-s)\Delta}\nabla\cdot(u\nabla v)(\cdot,s)\right\|_{L^{\infty}(\Omega)}\,\mathrm{d}s\leq\|u_{0}\|_{L^{\infty}(\Omega)}+c_{4}\int_{0}^{t}\left(1+(t-s)^{-\frac{1}{2}-\frac{3}{2r}}\right)\|(u\nabla v)(\cdot,s)\|_{L^{r}(\Omega)}\,\mathrm{d}s\leq\|u_{0}\|_{L^{\infty}(\Omega)}+c_{4}\sup_{s\in(0,T)}\|(u\nabla v)(\cdot,s)\|_{L^{r}(\Omega)}\int_{0}^{T}\left(1+s^{-\frac{1}{2}-\frac{3}{2r}}\right)\,\mathrm{d}s\] for all \(t\in(0,T)\) and some \(c_{4}>0\). 
Since with \(\lambda\coloneqq\frac{rq}{q-r}\) and \(\theta\coloneqq\frac{\lambda-1}{\lambda}\in(0,1)\) we have \[\|(u\nabla v)(\cdot,s)\|_{L^{r}(\Omega)} \leq\|u(\cdot,s)\|_{L^{\lambda}(\Omega)}\|\nabla v(\cdot,s)\|_{L ^{q}(\Omega)}\] \[\leq\|u(\cdot,s)\|_{L^{\infty}(\Omega)}^{\theta}\|u(\cdot,s)\|_{L ^{1}(\Omega)}^{1-\theta}\|\nabla v(\cdot,s)\|_{L^{q}(\Omega)}\] for all \(s\in(0,T)\), we conclude that there is \(c_{5}>0\) such that \[\|u(\cdot,t)\|_{L^{\infty}(\Omega)}\leq c_{5}\left(1+\|u(\cdot,t)\|_{L^{ \infty}(\Omega)}\right)^{\theta}\] for all \(t\in(0,T)\). Thus, \(A\coloneqq 1+\sup_{t\in(0,T)}\|u(\cdot,t)\|_{L^{\infty}(\Omega)}\) fulfills \(A\leq(c_{5}+1)A^{\theta}\) and hence also \(A\leq(c_{5}+1)^{\frac{1}{1-\theta}}\). We are now in position to prove Theorem 1.1. Proof of Theorem 1.1.: Suppose that \(T_{\max}<\infty\), then Lemma 5.2 asserts boundedness of \(u\) and \(v\) in \(\Omega\times(0,T_{\max})\). However, this contradicts the extensibility criterion in Lemma 3.1. ## 6. Conclusion On the one hand, we obtained a condition which guarantees global existence of solutions for the chemorepulsion system in three-dimensional space. We hope that this condition may prove to be useful in future research on this problem. On the other hand, we notice from our computations that the concavity of the function \(v\) would greatly simplify our argument. It would lead to boundedness of the function \(I(t)\), and hence to the global existence of solutions, as shown in this paper. Indeed, we see from (2.3) that if the function \(v\) is concave, i.e., its Hessian is negative-semidefinite, \[x^{T}\,D^{2}v\,x\leq 0\qquad\text{for every}\qquad x\in\mathbb{R}^{n},\] we have the following differential inequality for \(I(t)\) \[\dot{I}(t)\leq 0.\] From the equation (1.1) we see that \(v\) is not far from being concave. Taking a simplified version of the equation for \(v\) [assuming \(v_{t}=0\) and neglecting \(v\) on the right-hand side of the second equation in (1.1)], we have \[\Delta v=-u\leq 0,\] which would also hold if the Hessian of \(v\) was negative-semidefinite. Verifying concavity of a solution of a parabolic boundary value problem posed on a convex domain has been studied before. In the context of one parabolic equation of certain type, some positive results can be found in [9], for example. However, we are not aware of any result in this direction for systems of equations. ## Appendix A A new inequality As a byproduct of our arguments, we discovered a differential inequality relating the second norm of the Hessian of the square-root of a positive function with the dissipation of the Fisher information along the heat flow. Due to the fact that both of these quantities appear in the calculation of the evolution of the Fisher information along the heat flow, the following inequality is interesting in its own right and may have further applications. In particular, it will be used by the first author in a forthcoming paper concerning a thermoelasticity problem. **Lemma A.1**.: _Let \(\Omega\subset\mathbb{R}^{n}\) be a smooth bounded domain. 
For every positive function \(u\in C^{2}(\bar{\Omega})\) with the boundary condition \(\left.\frac{\partial u}{\partial n}\right|_{\partial\Omega}=0\), we have_ (A.1) \[\int_{\Omega}\left|D^{2}\sqrt{u}\right|^{2}\leq C\int_{\Omega}u\left|D^{2}\log u\right|^{2},\] _where \(C=1+\frac{\sqrt{n}}{2}+\frac{n}{8}\)._ _Remark A.2_.: We note that the inequality (A.1) does not hold pointwise, i.e., there is no constant \(C>0\) such that, for every positive \(u\in C^{2}(\bar{\Omega})\), \[\left|D^{2}\sqrt{u}\right|^{2}\leq Cu\left|D^{2}\log u\right|^{2}\quad\text{in}\quad\bar{\Omega}.\] Indeed, for \(u(x)=\mathrm{e}^{x_{1}}\) we have \(D^{2}\log u\equiv 0\), while \(\partial_{x_{1}x_{1}}\sqrt{u}=\frac{1}{4}\mathrm{e}^{x_{1}/2}>0\) in \(\bar{\Omega}\). Proof of Lemma A.1.: We first note that \[\left[D^{2}\sqrt{u}\right]_{ij}^{2}=\left(\frac{\partial_{x_{i}x_{j}}u}{2u^{1/2}}-\frac{\partial_{x_{i}}u\partial_{x_{j}}u}{4u^{3/2}}\right)^{2}=\frac{1}{4}\left(\frac{\partial_{x_{i}x_{j}}u}{u^{1/2}}-\frac{1}{2}\frac{\partial_{x_{i}}u\partial_{x_{j}}u}{u^{3/2}}\right)^{2}\] and \[u\left[D^{2}\log u\right]_{ij}^{2}=u\left(\frac{\partial_{x_{i}x_{j}}u}{u}-\frac{\partial_{x_{i}}u\partial_{x_{j}}u}{u^{2}}\right)^{2}=\left(\frac{\partial_{x_{i}x_{j}}u}{u^{1/2}}-\frac{\partial_{x_{i}}u\partial_{x_{j}}u}{u^{3/2}}\right)^{2}\] in \(\bar{\Omega}\). Using the simple fact that \(\left(a+b\right)^{2}\leq 2\left(a^{2}+b^{2}\right)\), we get \[\left[D^{2}\sqrt{u}\right]_{ij}^{2}=\frac{1}{4}\left(\frac{\partial_{x_{i}x_{j}}u}{u^{1/2}}-\frac{\partial_{x_{i}}u\partial_{x_{j}}u}{u^{3/2}}+\frac{1}{2}\frac{\partial_{x_{i}}u\partial_{x_{j}}u}{u^{3/2}}\right)^{2}\leq\frac{1}{4}\left[2\left(\frac{\partial_{x_{i}x_{j}}u}{u^{1/2}}-\frac{\partial_{x_{i}}u\partial_{x_{j}}u}{u^{3/2}}\right)^{2}+\frac{1}{2}\left(\frac{\partial_{x_{i}}u\partial_{x_{j}}u}{u^{3/2}}\right)^{2}\right]=\frac{1}{2}u\left[D^{2}\log u\right]_{ij}^{2}+\frac{1}{8}\left(\frac{\partial_{x_{i}}u\partial_{x_{j}}u}{u^{3/2}}\right)^{2}.\] Therefore, we obtain \[\left|D^{2}\sqrt{u}\right|^{2}=\sum_{i,j=1}^{n}\left[D^{2}\sqrt{u}\right]_{ij}^{2}\leq\sum_{i,j=1}^{n}\left[\frac{1}{2}u\left[D^{2}\log u\right]_{ij}^{2}+\frac{1}{8}\left(\frac{\partial_{x_{i}}u\partial_{x_{j}}u}{u^{3/2}}\right)^{2}\right]=\frac{1}{2}u\left|D^{2}\log u\right|^{2}+\frac{1}{8}\frac{\left|\nabla u\right|^{4}}{u^{3}}\quad\text{in}\quad\bar{\Omega}. \tag{A.2}\] Applying Lemma 2.3 to (A.2), we get \[\int_{\Omega}\left|D^{2}\sqrt{u}\right|^{2}\leq\left(\frac{1}{2}+\frac{1}{8}\left(2+\sqrt{n}\right)^{2}\right)\int_{\Omega}u\left|D^{2}\log u\right|^{2},\] as required. ## Acknowledgments T.C. was supported by the National Science Center of Poland grant SONATA BIS 7 number UMO-2017/26/E/ST1/00989. K.H. was partially supported by the National Science Center of Poland grant SONATA BIS 10 number UMO-2020/38/E/ST1/00469. M.S. was supported by the National Science Center of Poland grant SONATA BIS 10 number UMO-2020/38/E/ST1/00596.
2305.15187
Using Models Based on Cognitive Theory to Predict Human Behavior in Traffic: A Case Study
The development of automated vehicles has the potential to revolutionize transportation, but they are currently unable to ensure a safe and time-efficient driving style. Reliable models predicting human behavior are essential for overcoming this issue. While data-driven models are commonly used to this end, they can be vulnerable in safety-critical edge cases. This has led to an interest in models incorporating cognitive theory, but as such models are commonly developed for explanatory purposes, this approach's effectiveness in behavior prediction has remained largely untested so far. In this article, we investigate the usefulness of the \emph{Commotions} model -- a novel cognitively plausible model incorporating the latest theories of human perception, decision-making, and motor control -- for predicting human behavior in gap acceptance scenarios, which entail many important traffic interactions such as lane changes and intersections. We show that this model can compete with or even outperform well-established data-driven prediction models across several naturalistic datasets. These results demonstrate the promise of incorporating cognitive theory in behavior prediction models for automated vehicles.
Julian F. Schumann, Aravinda Ramakrishnan Srinivasan, Jens Kober, Gustav Markkula, Arkady Zgonnikov
2023-05-24T14:27:00Z
http://arxiv.org/abs/2305.15187v2
# Using Models Based on Cognitive Theory to Predict Human Behavior in Traffic: A Case Study ###### Abstract The development of automated vehicles has the potential to revolutionize transportation, but they are currently unable to ensure a safe and time-efficient driving style. Reliable models predicting human behavior are essential for overcoming this issue. While data-driven models are commonly used to this end, they can be vulnerable in safety-critical edge cases. This has led to an interest in models incorporating cognitive theory, but as such models are commonly developed for explanatory purposes, this approach's effectiveness in behavior prediction has remained largely untested so far. In this article, we investigate the usefulness of the _Commotions_ model - a novel cognitively plausible model incorporating the latest theories of human perception, decision-making, and motor control - for predicting human behavior in gap acceptance scenarios, which entail many important traffic interactions such as lane changes and intersections. We show that this model can compete with or even outperform well-established data-driven prediction models across several naturalistic datasets. These results demonstrate the promise of incorporating cognitive theory in behavior prediction models for automated vehicles. autonomous vehicles, gap acceptance, behavior prediction, cognitive theory. ## I Introduction Automated vehicles have become a major focus of the car industry in recent years due to their potential to revolutionize transportation. The promised benefits of automated vehicles include fewer accidents caused by human errors, increased accessibility of mobility solutions, and more efficient use of time while traveling [1, 2, 3]. However, despite significant investments [4], there are still only prototypes of automated vehicles on the street, and they are not yet widely available to the public [5, 6]. One major challenge to the widespread adoption of automated vehicles is ensuring that they are both efficient and safe, traveling in a timely and efficient manner while also maintaining a level of safety that is at least equivalent to human driving [5, 7]. However, many automated vehicles currently focus on ensuring safety, avoiding any action that could potentially lead to an accident. While this approach may reduce the risk of traffic participants being harmed, it misses out on travel efficiency and acceptance, requiring further efforts to make automated vehicles truly useful [5, 6]. One potential solution is to incorporate prediction models to reduce uncertainty about future human behavior and allow for more actions to be classified as safe [8, 9]. Accurate predictions of human behavior are especially critical in scenarios involving gap acceptance [10], which form a significant subset of space-sharing conflicts in traffic, including situations such as crossing an intersection or changing lanes [11]. Many models for predicting human behavior in these scenarios have been developed, including trajectory prediction models [12, 13, 14] and models predicting the binary choice of either accepting or rejecting the gap [15, 16, 17]. However, most of these include few assumptions about human decision-making - using a mainly data-driven approach known for being unreliable in safety-critical edge cases [10, 18]. Meanwhile, there is a separate literature on cognitive theory developed to explain human decision-making in traffic [19, 20]. 
Inclusion of such theory into predictive models might help overcome the unreliability issues of purely data-driven approaches [18]. However, current cognitively plausible models have a number of limitations which hinder their use for behavior prediction. In particular, most such models are limited to a specific scenario [15, 20] and cannot handle complex inputs, which prevents their application to naturalistic datasets. As a result, it is currently unknown if incorporating cognitive theories in behavior prediction models could actually yield any benefits in terms of prediction accuracy and robustness. This study aims to explore the potential of one possible approach to incorporating cognitive theory into prediction models: the adaptation of a specific existing explanatory model [19] to function as a prediction model, using gap acceptance as the target scenario type. Adaptation of this model for prediction purposes is non-trivial, and does in itself represent a significant contribution to the field (Section III). Furthermore, we also conduct an ablation study to find the most promising configurations for the model (Section IV). Finally, we compare the performance of the resulting configurations of this model to state-of-the-art data-driven prediction models (Section V). ## II Background This section provides a description of the general type of gap acceptance scenarios addressed here, a brief overview of the tested cognitive model, and an introduction to a framework that facilitates unbiased comparisons of the model's predictive performance against existing benchmarks. We also discuss our changes to the tested model that enable its use as a predictive model. ### _Gap acceptance_ Gap acceptance problems are a type of traffic interaction that involves a space-sharing conflict between two agents with intersecting paths, such as intersections, pedestrian crossings, and lane changes on highways [10, 11]. There, these two agents can be differentiated by the possession of the right of way, with the vehicle with priority being referred to as the ego vehicle \(V_{E}\). In such a situation, the other agent, designated as the target vehicle \(V_{T}\), must then decide whether to cross \(V_{E}\)'s path in front of \(V_{E}\) (i.e., accepting the offered gap) or to wait until \(V_{E}\) has passed, thereby rejecting the gap. For example, if \(V_{T}\) approaches an intersection via a secondary road, it needs to decide whether the gap to the vehicle coming from the perpendicular direction is large enough to cross the intersection without waiting for that car to pass (Fig. 1). Accurately predicting \(V_{T}\)'s decision in such scenarios is crucial for \(V_{E}\), as \(V_{T}\)'s future behavior could limit \(V_{E}\)'s options, such as \(V_{E}\) being forced to slow down to prevent a collision when \(V_{T}\) accepts the gap. ### _The_ Commotions _model_ Markkula et al. [19] proposed a cognitive framework for modeling road user interactions in gap acceptance scenarios. Their framework includes a wide range of cognitive mechanisms, such as decision-making based on evidence accumulation [15], noisy perception [21] and applying a theory of mind [22] (Fig. 1). They implemented this framework in models for interactions between vehicles and/or pedestrians on straight crossing paths, i.e., including gap acceptance scenarios between two vehicles.
What we will refer to here as the _Commotions_ model (after the name of the project in which the model was developed) is the most successful model variant identified in [19], applied to such scenarios. As illustrated in Fig. 1, the proposed model postulates that at each time step, both ego vehicle \(V_{E}\) and target vehicle \(V_{T}\) concurrently determine their current control inputs. This decision-making process of each agent is subject to sensory noise and Bayesian filtering during the perception of the position of the other agent. Based on their own short-term control input \(u\in A\) (where \(A\) is a discrete set) and both vehicles' long-term behavior \(b_{E}\) and \(b_{T}\) (i.e., preference for going first or second through the contested space), corresponding pairs of future trajectories - represented by pairs of \(\widetilde{\mathbf{\chi}}_{E}\) and \(\widetilde{\mathbf{\chi}}_{T}\) - are generated, with the constraint that the resulting interactions are safe. Each pair of trajectories is then evaluated (punishing large control inputs, time delays, and traffic rule violations), resulting in the value \(\widetilde{\mathcal{V}}_{E}\) representing the agent's own opinion and the value \(\widetilde{\mathcal{V}}_{T}\), which is the value the agent assumes that the other agent assigns to each trajectory pair for each possible combination of behaviors and control inputs. Each agent then weighs the evaluation \(\widetilde{\mathcal{V}}_{E}\) of their own trajectory based on the probability of the other vehicle behaving accordingly, assuming per the theory of mind that this probability is correlated with the respective value \(\widetilde{\mathcal{V}}_{T}\). Evidence accumulation is used to ensure no abrupt and seemingly arbitrary changes in behavior, by combining the weighted values with previous evaluations of a potential action \(u\) and only changing the applied control input \(u^{*}\) if this entails a sufficiently substantial improvement in this accumulated value, i.e., control is intermittent. Based on the currently chosen control input \(u^{*}\) each agent's states are then projected forward to the next time step. By repeatedly using this process for both agents, the model can generate a pair of simulated trajectories on the perpendicular intersection. To represent the model's randomness, \(n_{p}\) different trajectory pairs are generated in repeated simulations.
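For illustration, a minimal Python sketch of this evidence-accumulation mechanism with intermittent control could look as follows. The function name, the leak rate, and the switch threshold are our own illustrative choices and do not reflect the implementation or parameter values in [19]:

```python
import numpy as np

def intermittent_control_step(accumulated, new_values, current_action,
                              leak=0.2, switch_threshold=0.5):
    # Leaky accumulation of the newly computed action values
    accumulated = (1 - leak) * accumulated + leak * new_values
    best = int(np.argmax(accumulated))
    # Intermittent control: only switch if the accumulated improvement
    # over the currently applied action is substantial enough
    if accumulated[best] - accumulated[current_action] > switch_threshold:
        current_action = best
    return accumulated, current_action

# Example: three candidate accelerations; the agent starts by coasting
actions = np.array([-2.0, 0.0, 2.0])   # illustrative discrete set A
evidence = np.zeros(len(actions))
u_star = 1
for _ in range(100):
    values = -np.abs(actions) + np.random.normal(0.0, 0.1, len(actions))
    evidence, u_star = intermittent_control_step(evidence, values, u_star)
```

The key property of such a scheme is that small, noisy fluctuations in the momentary action values do not immediately translate into changes of \(u^{*}\), yielding the non-abrupt behavior described above.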
### _The framework for benchmarking gap acceptance models_ To compare several prediction models in a fair and unbiased manner, we utilize a framework previously developed by Schumann et al. [10]. This framework facilitates the comparison of such models in any gap acceptance scenario according to a wide selection of metrics. Moreover, it grants precise control over the timing of the evaluated predictions and the allocation of individual samples to training and testing sets. The framework also permits the conversion of different types of predictions, including between binary and trajectory predictions, increasing the number of metrics that can be employed to compare models. For instance, the benchmark enables models that originally predict only gap acceptance probabilities to also generate predictions of full trajectories. Specifically, to transform a predicted probability \(a_{\text{pred}}\in[0,1]\) of accepting the gap into a set of predicted trajectories for a given sample, the framework uses two instances of a state-of-the-art trajectory prediction model [23]. One of these models is trained exclusively on samples with accepted gaps, while the other is trained on samples with rejected gaps. Both models are utilized to predict a set of trajectories based on the given sample's input, from which the final set is sampled with weights adjusted by \(a_{\text{pred}}\) [10]. ## III Commotions as a predictive model Although the _Commotions_ model's capability of expressing a number of empirically observed human interaction phenomena was demonstrated successfully in the original paper [19], it was not developed for use as a prediction model. As such, it has many limitations compared to existing models developed for this purpose. For one, the computational efficiency of its existing implementation makes training and testing on most datasets infeasible.

Fig. 1: A depiction of the _Commotions_ model [19] and its high-level parts, showing how the position of ego vehicle \(V_{E}\) is updated at one point in time. Simultaneously, the target vehicle \(V_{T}\) also updates its kinematic state by using the same mechanics – only with mirrored inputs.

In this paper, we address this problem by implementing parallel processing of multiple model predictions on a GPU and using analytical instead of numerical integration inside the model. This achieves a speed increase of roughly four orders of magnitude. Another problem with the _Commotions_ model is that it is constrained to the scenario of perpendicular intersections with straight trajectories seen in Fig. 1, which is incongruent with most real-world situations. We utilize an expansion of the benchmarking framework (Section II-C) allowing us to project real-world two-dimensional trajectories onto the quasi-one-dimensional scenario required as the input data. Namely, for each agent, we define a method for determining the most probable path from their current location towards the contested space, where the trajectories of the ego vehicle \(V_{E}\) and the target vehicle \(V_{T}\) intersect. The length of this path is then assumed to be equal to the distances of those agents to the contested space (the purple square in Fig. 1) along the respective perpendicular streets. While it might be possible to use the same approach to project predicted trajectories from the quasi-one-dimensional scenario to the original two-dimensional space, they would only be projected onto the aforementioned predefined most probable paths. As this would drastically limit the solution space, we instead use scenario-independent information from the predicted trajectories. For each pair of trajectories from simulation \(p\) we can determine if the contested space was reached first by \(V_{T}\) (\(a_{\text{pred},p}=1\) represents an accepted gap) or \(V_{E}\) (\(a_{\text{pred},p}=0\)). Simultaneously, the time \(t_{A,\text{pred},p}\) of \(V_{T}\) reaching the contested space can be extracted as well. Averaging over all predictions allows us to calculate the probability \(a_{\text{pred}}\in[0,1]\) of \(V_{T}\) accepting the gap. Combined with the predicted time of acceptance \(t_{A}\), which the framework accepts as another type of prediction [10], generating predicted trajectories in the original space then becomes possible (Section II-C). Finally, the _Commotions_ model can only process the current position and velocity of the two principal actors in a gap acceptance scenario, i.e., \(V_{E}\) and \(V_{T}\), and not any other agents in the scene. However, as this only hinders but does not prevent the model's predictive usage, this issue currently remains unaddressed.
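The aggregation of the \(n_{p}\) simulations into \(a_{\text{pred}}\) and \(t_{A}\) can be sketched as follows. Averaging the binary outcomes is stated above; summarising \(t_{A}\) by the mean over accepted simulations is our own illustrative choice, as are all names:

```python
import numpy as np

def aggregate_simulations(target_first, t_accept_sim):
    # target_first: (n_p,) bool array, True if V_T reached the contested
    # space before V_E in that simulation (an accepted gap)
    # t_accept_sim: (n_p,) times at which V_T reached the contested space
    a_pred = float(np.mean(target_first))
    accepted_times = t_accept_sim[target_first]
    # Summarise t_A over the accepted simulations (our choice: the mean)
    t_a = float(np.mean(accepted_times)) if accepted_times.size else np.nan
    return a_pred, t_a
```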
## IV Evaluating configurations of the Commotions model In this section, we investigate the predictive performance of several configurations of the _Commotions_ model stemming from a number of design decisions that have to be made when using the _Commotions_ model to predict human behavior. For example, the modeling of the interaction between \(V_{E}\) and \(V_{T}\) can utilize either an _interactive_ approach, where both agents utilize all aspects of the _Commotions_ model (Fig. 1) to determine their current control inputs \(u^{*}\) (_IM_), or a _non-interactive_ approach (_NM_), where only the behavior of \(V_{T}\) is predicted by the model, with \(V_{E}\) set to maintaining its original velocity. Meanwhile, another decision pertains to selecting the form of short-term control inputs \(u\), with the options being the application of either a _constant acceleration_ (_AC_) or _constant jerk_ (_JC_). As important parts of the model such as the creation of the trajectories \(\mathbf{\chi}_{E}\) and \(\mathbf{\chi}_{T}\) (Fig. 1 and Section II-B) are non-differentiable, we use Bayesian optimization [24] to fit the _Commotions_ model's parameters. However, regarding the optimization procedure, some open questions still remain. First, the user must decide whether to train the model in a _single optimization_ round (_1O_) or use a _two-stage optimization_ (_2O_), wherein the second stage of optimization is carried out over a reduced parameter search space surrounding the optimized parameters obtained in the first stage. Second, a choice between the two available _loss functions_ \(\mathcal{L}_{1}\) and \(\mathcal{L}_{2}\) used to fit the _Commotions_ model's parameters must be made. \(\mathcal{L}_{1}\) is adapted directly from the work of Zgonnikov et al. [20] (with \(t_{C}\) being the time when \(V_{E}\) reaches the purple intersection in Fig. 1) and evaluates every prediction \(p\) for each sample \(i\), while \(\mathcal{L}_{2}\) expands upon this by enforcing more varied predictions: \[\begin{split}\mathcal{L}_{1}&=\sum_{i}\frac{1}{n_{p}}\sum_{p=1}^{n_{p}}4\left|a_{i}-a_{\text{pred},i,p}\right|+\left(t_{i}-t_{A,\text{pred},i,p}\right)^{2}\\ \mathcal{L}_{2}&=\mathcal{L}_{1}+\sum_{i}100V_{i}-20\sqrt{V_{i}}+1\\ t_{i}&=\min\left\{t_{A,i},\max\left\{t_{C,i},t_{A,\text{pred},i,p}\right\}\right\}\\ V_{i}&=\min\left\{\mathbb{V}_{p}\left(t_{A,\text{pred},i,p}\right),\frac{1}{100}\right\}\end{split}\tag{1}\]
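Note that \(100V-20\sqrt{V}+1=(10\sqrt{V}-1)^{2}\) vanishes exactly at \(V=1/100\), so the extra term in \(\mathcal{L}_{2}\) rewards a minimum amount of prediction variance. A direct NumPy transcription of Eq. (1) could look like the following sketch, where the array names are our own:

```python
import numpy as np

def commotions_losses(a_true, t_accept, t_close, a_pred, t_pred):
    # a_true: (n,) observed gap decisions; t_accept/t_close: (n,)
    # observed acceptance times t_A and closing times t_C;
    # a_pred/t_pred: (n, n_p) per-simulation predictions
    # Clip each predicted acceptance time into [t_C, t_A] (definition of t_i)
    t_i = np.minimum(t_accept[:, None], np.maximum(t_close[:, None], t_pred))
    per_pred = 4 * np.abs(a_true[:, None] - a_pred) + (t_i - t_pred) ** 2
    loss1 = per_pred.mean(axis=1).sum()
    # Variance regularisation term of L2, with V_i capped at 1/100
    v = np.minimum(t_pred.var(axis=1), 0.01)
    loss2 = loss1 + np.sum(100 * v - 20 * np.sqrt(v) + 1)
    return loss1, loss2
```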
### _Setup_ #### IV-A1 Datasets The predictive performance of the different model configurations is compared using three datasets, each focusing on a different scenario. * _L-GAP_ [20], a driving simulator dataset, contains scenarios in which \(V_{T}\) must decide whether to turn left in front of or behind \(V_{E}\) approaching on the opposite lane. * _rounD_ [25], a real-world dataset captured by a drone, covers roundabouts where \(V_{T}\) must decide whether to enter the roundabout in front of or behind \(V_{E}\) which is already inside the roundabout. * The _UDISS_ dataset [26], created in a driving simulator, focuses on a perpendicular intersection where \(V_{T}\) must cross either in front of or behind \(V_{E}\), which is driving along the other road with the right of way. While the latter two datasets include other agents besides \(V_{E}\) and \(V_{T}\), in this paper we ignore those due to the aforementioned limitations of the _Commotions_ model, with the resulting datasets being referred to respectively as _rounD\({}_{\text{2V}}\)_ and _UDISS\({}_{\text{2V}}\)_. We also restrict the provided input trajectories to two input time steps (\(n_{I}=2\)), as this provides sufficient information to extract the two agents' current positions and velocities, which are the only inputs the _Commotions_ model is able to process. #### IV-A2 Train/test splits On each dataset, we perform eleven training-and-testing cycles for each configuration. In ten of these, the split between training and testing set is random. In the last split, however, we place the samples that exhibit the most unintuitive human behavior - smallest accepted gaps and largest rejected gaps - into the _critical_ testing set. This latter approach allows us to evaluate the robustness of the model's predictive capabilities against the most challenging and safety-critical cases. #### IV-A3 Metrics To evaluate the models' predictions made on the testing set, we employ three metrics which have previously been used to assess different aspects of gap acceptance predictions [10, 12, 16]. First, the area under the receiver operating characteristic curve (_AUC_) assesses binary predictions (accept/reject gap) at two different time points: the initial opening of the gaps and the time corresponding to a fixed (dataset-specific) characteristic gap size [10]. Second, the average displacement error (_ADE_) metric evaluates full predicted trajectories at the characteristic gap size. Third, we use the true negative rate under perfect recall [10] (_TNR-PR_), a metric that rates the usefulness of binary predictions made on the smallest possible gaps at the last point in time when they can aid in adjusting \(V_{E}\)'s planned path accordingly. However, due to a lack of gaps accepted after this point in time on the _UDISS_ dataset, the _TNR-PR_ cannot be calculated on that scenario, resulting in eleven viable combinations of metrics and datasets we can use to compare model configurations. Furthermore, when we transform binary predictions into trajectory predictions, so that for example the _ADE_ metric can be applied to the _Commotions_ model, we use _Trajectron++_ [12], a state-of-the-art trajectory prediction model, in accordance with the method laid out in Section II-C. ### _Results_ Following the setup described above, we test 16 configurations of the _Commotions_ model (resulting from four independent design choices) on the eleven combinations of datasets and metrics, resulting in 88 comparisons for each design choice on both random and critical split test sets. For example, on the _L-GAP_ dataset, the _AUC_ averaged over the ten random test sets for predictions made at the fixed-size gap (a size of \(3.36\,\mathrm{s}\)) ranges from \(0.936\) to the value \(0.970\) produced by _CM\({}_{NA12}\)_, which utilizes the non-interactive modeling approach (_NM_) and acceleration control (_AC_) and was trained in one round of optimizing (_1O_) \(\mathcal{L}_{2}\). Comparison between the configurations of the _Commotions_ model (Tab. I) indicates that there was no consistently better alternative for any of the four design choices. Still, we are able to make some recommendations.
For example, the non-interactive modeling approach (_NM_) appears to be more likely to outperform its opposite on the critical test set, while having the added advantage of faster evaluations by obviating half of the _Commotions_ model's calculations updating \(\mathbf{\chi}_{E}\) (Fig. 1). Similarly, using acceleration control (_AC_) produces better predictions slightly more often, possibly by enabling the model to predict faster human reactions. Although the number of optimization rounds appears to be largely irrelevant, using only one round of optimization (_1O_) may make the model even more robust on the critical test sets, with faster training being another benefit. Comparatively, the most significant factor seems to be the choice of the loss function - as long as one differentiates by metric. Specifically, \(\mathcal{L}_{1}\) is a better choice when minimizing _ADE_, while \(\mathcal{L}_{2}\) is superior on the other three metrics. This is expected, as the regularization achieved by \(\mathcal{L}_{2}\) enforcing some variance in the predictions also leads to a larger spread of predicted trajectories, resulting in a larger average displacement error. When seeking the best configuration of the _Commotions_ model, rather than comparing the binary choices, we can compare the 16 configurations among themselves as well, either by the average result over the ten random test sets or the result on the critical test sets. As model performance mainly depends on the chosen metric, here we discuss _ADE_ separately from other metrics. Specifically, we found that the _CM\({}_{NA11}\)_ configuration is best, having a lower _ADE_ in \(79\%\) of all the 90 possible comparisons - i.e., on two types of results, three datasets, and against 15 other configurations. Using the same approach on the remaining metrics, we find the most promising configuration to be _CM\({}_{NA12}\)_ with better metric values in \(70\%\) of all cases. These results further support _CM\({}_{NA11}\)_ and _CM\({}_{NA12}\)_ (non-interactive modeling, acceleration vehicle input, single-round optimization) as the optimal configurations of the _Commotions_ model. ## V Comparing Commotions to Established Models In this section, we assess the potential of the _Commotions_ model by comparing the predictive performance of two of its configurations (_CM\({}_{NA11}\)_ for the _ADE_ and _CM\({}_{NA12}\)_ for other metrics) against established prediction models. Besides the _Trajectron++_ model (_T++_) introduced in Section IV-A3, we also used a logistic regression model (_LR_) as a baseline, with both methods having previously demonstrated good performance on similar gap acceptance problems [10]. While these models have far fewer restrictions on the type of input data they can process, we artificially constrain the used input data to the _Commotions_ model's limitations to allow for an equitable comparison. The only exception is the dimensionality of the input for _T++_, as this model can only process the original two-dimensional trajectories, but not the projected quasi-one-dimensional inputs of the _Commotions_ model. ### _Setup_ Regarding the chosen datasets, testing and training splits as well as the chosen metric, this experiment follows the setup of the previous ablation study (Section IV-A). Within the same setup, we evaluated the two configurations of the _Commotions_ model (_CM\({}_{NA11}\)_ and _CM\({}_{NA12}\)_) against the state-of-the-art models (_T++_ and two versions of _LR_).
### _Results_ Comparison of the models (Fig. 2 and Tab. II) demonstrates that the _Commotions_ model can compete with established models, although variations were observed depending on the metric and dataset. Notably, the _Commotions_ model routinely outperforms the other models in terms of the _ADE_, consistently on the random test sets and, when compared to _LR_, even on the critical test sets. For instance, the average _ADE_ achieved by the _Commotions_ model on the ten random test sets of the _rounD_ dataset is \(1.08\,\mathrm{m}\), compared to \(1.43\,\mathrm{m}\) for _T++_ and \(1.33\,\mathrm{m}\) for _LR_. This may be attributed to the model's capacity to forecast both the probability of accepting a gap and the time at which it may be accepted, with the additional information being used to filter out the most aberrant trajectories suggested by the transformation function (Section II-C). However, on the other metrics, the _Commotions_ model's performance is mostly similar to the other two models (no significant difference on 10/16 random and 8/16 critical splits). Nonetheless, it appears to be more robust than _LR_ when predicting unintuitive human behavior, with consistently better outcomes on the critical test sets. This suggests that constraining a model's predictions using cognitive theory to make it less susceptible to out-of-domain edge cases is a viable way to improve the model's reliability. The _Commotions_ model's worst performance can be observed on the _L-GAP_ and _rounD_ datasets when compared to _T++_ using metrics other than the _ADE_. While this might indicate a superiority of the _T++_ model, this deviation in performance may be at least partly explained by the aforementioned differences in the inputs provided to the models. To investigate the extent of potential impact of this difference on our results, we compared the second _LR_ model taking two-dimensional inputs to the original _LR_ model processing the one-dimensional inputs. The results of the comparison (Tab. III) show that, at least for the _LR_ model, processing the two-dimensional original inputs (as _T++_ does) appears to simplify the prediction task compared to using the quasi-one-dimensional inputs that the _Commotions_ model relies on. This seems plausible, as the projection employed to transform the input data from two-dimensional to one-dimensional likely leads to information loss, leaving fewer cues for the models to make accurate predictions. However, more research is required to accurately assess the impact of input dimensionality on predictions. Thus, a final verdict on the comparative advantage of the _Commotions_ model or _T++_ is still pending.

Fig. 2: Behavior prediction performance of the two _Commotions_ model (_CM_) configurations compared to _Trajectron++_ (_T++_) and logistic regression (_LR_) across three datasets (L-GAP, rounD, and Leeds) according to considered metrics (AUC, ADE, TNR-PR). For the random splits, the small markers indicate the results per individual split, while the large markers depict their average.

## VI Conclusion This study evaluates the predictive performance of the different configurations of the _Commotions_ model, which integrates state-of-the-art theories of human perception, decision-making, and motor control, in gap acceptance scenarios, comparing the best configurations with other established models. The results demonstrate that the _Commotions_ model can compete with or even outperform state-of-the-art behavior prediction models, as long as the same input information is provided.
Notably, the average displacement error of predicted trajectories is most often significantly lower than the one achieved by other tested models. We also seek to assess the potential impact of the _Commotions_ model's restriction to the quasi-one-dimensional scenario of a perpendicular intersection on its predictive performance. Unable to overcome this restriction, we instead compare two versions of the logistic regression model for this investigation. Our findings suggest that allowing the _Commotions_ model to instead process two-dimensional trajectories as inputs would be beneficial. As an added benefit, this expansion could also enable the model to function as a dedicated trajectory prediction model. Consequently, such an expansion of the _Commotions_ model is likely worthwhile, even if it comes at the cost of more expensive computations. In addition, investigating the impact of other limitations, such as the number of processable input time steps, should be addressed in future research, as it would provide benefits for model design even beyond the _Commotions_ model. However, due to its theoretical basis, the _Commotions_ model will always be restricted to scenarios such as gap acceptance, where a small number of potential behaviors, like accepting or rejecting a gap, make it feasible to create and evaluate all distinct future trajectories \(\mathbf{\widehat{\chi}}\). This limits the model's general applicability compared to models like _Trajectron++_. Additionally, as the model itself is non-differentiable, the resulting need for gradient-free optimization makes the model's training process relatively cumbersome, further hampering its practicality. Nevertheless, our findings provide encouraging evidence supporting the usefulness of the _Commotions_ model, at least for predicting human behavior in gap acceptance scenarios, justifying further research into both this specific model and the general approach of integrating cognitive theory into prediction models. For example, it would be worthwhile to investigate how the cognitive assumptions in the _Commotions_ model (or other cognitive models) might be leveraged in model architectures that are specifically designed for use in the prediction context.
2310.11157
PyOcto: A high-throughput seismic phase associator
Seismic phase association is an essential task for characterising seismicity: given a collection of phase picks, identify all seismic events in the data. In recent years, machine learning pickers have led to a rapid growth in the number of seismic phase picks. Even though new associators have been suggested, these suffer from long runtimes and sensitivity issues when faced with dense seismic sequences. Here we introduce PyOcto, a novel phase associator tackling these issues. PyOcto uses 4D space-time partitioning and can employ homogeneous and 1D velocity models. We benchmark PyOcto against popular state of the art associators on two synthetic scenarios and a real, dense aftershock sequence. PyOcto consistently achieves detection sensitivities on par or above current algorithms. Furthermore, its runtime is consistently at least 10 times lower, with many scenarios reaching speedup factors above 50. On the challenging 2014 Iquique earthquake sequence, PyOcto achieves excellent detection capability while maintaining a speedup factor of at least 70 against the other models. PyOcto is available as an open source tool for Python on Github and through PyPI.
Jannes Münchmeyer
2023-10-17T11:23:24Z
http://arxiv.org/abs/2310.11157v1
# PyOcto: A high-throughput seismic phase associator ###### Abstract Seismic phase association is an essential task for characterising seismicity: given a collection of phase picks, identify all seismic events in the data. In recent years, machine learning pickers have led to a rapid growth in the number of seismic phase picks. Even though new associators have been suggested, these suffer from long runtimes and sensitivity issues when faced with dense seismic sequences. Here we introduce PyOcto, a novel phase associator tackling these issues. PyOcto uses 4D space-time partitioning and can employ homogeneous and 1D velocity models. We benchmark PyOcto against popular state of the art associators on two synthetic scenarios and a real, dense aftershock sequence. PyOcto consistently achieves detection sensitivities on par or above current algorithms. Furthermore, its runtime is consistently at least 10 times lower, with many scenarios reaching speedup factors above 50. On the challenging 2014 Iquique earthquake sequence, PyOcto achieves excellent detection capability while maintaining a speedup factor of at least 70 against the other models. PyOcto is available as an open source tool for Python on Github and through PyPI. ## 1 Introduction One of the fundamental tasks in seismology is creating detailed seismicity catalogs. Highly complete catalogs can reveal, for example, spatial migrations, locking patterns, or changes in seismicity rate (Gonzalez-Vidal et al., 2023; Moutote et al., 2023; Tan et al., 2021). The standard workflow for event detection consists of two steps: phase picking and phase association. The phase picking step identifies the times of seismic phase arrivals in continuous waveforms. The phase association step aims to find consistent sets of picks that can be associated to a seismic source, called an event. This grouping enables downstream analysis steps requiring multi-station data, for example, localisation or magnitude estimation. In addition, phase association makes it possible to discard spurious picks. Traditional phase association algorithms often rely on greedy, combinatorial strategies (Johnson et al., 1995). However, these approaches scale poorly with an increasing number of picks. While this has already become a challenge due to the growing number of seismic stations in large-scale deployments, the problem has been supercharged with the advent of highly sensitive, deep-learning-based seismic phase pickers. Deep-learning-based pickers employ neural network models and are trained on millions of manually labeled seismic phase picks. They outperform traditional picking models substantially in terms of sensitivity and pick precision (Zhu and Beroza, 2019; Mousavi et al., 2020; Munchmeyer et al., 2022). To deal with this flood of phase picks, in recent years, a wave of new phase association algorithms has been published. These approaches range from improved grid-search strategies to complex deep learning architectures. We review the main contributions in the subsequent background chapter. Before that, however, we discuss the main challenges and performance indicators for seismic phase associators. The key metric for seismic phase associators is the quality at which they recover seismic events. This includes two aspects: the fraction of events being recovered, i.e. true positive rate or recall, and the fraction of identified events being incorrect, i.e. false positive rate.
Usually a tuning parameter can be used to trade off between these metrics: either a higher recall with a higher rate of false positives or a lower recall with lower false positive rate. The second metric concerns the same questions on pick level: how many picks have been correctly associated and how many picks have incorrectly been associated. Similar trade-offs to the event metrics exist. As ground-truth catalogs for seismicity are not available, seismic phase associators are usually evaluated on synthetic data, i.e., phase picks predicted using travel time calculation and random noise picks. In addition, models are tested qualitatively on real-world example scenarios without ground-truth. A metric often disregarded is the run time of the algorithms. However, given the ever-growing number of picks, we consider this metric essential to understand the scalability of current algorithms and their applicability to large scale deployments. Run time issues make some of the current associators non-applicable to such deployments, as we show in our examples where some associators did not complete associating a single day of phase picks within 48 hours. While the recently published associators improve on all of these metrics when faced with large collections of seismic picks, our experiments show that associators are still a limiting factor when building seismicity catalogs. This refers to both the precision and recall of events and picks, and the run times, with several associators requiring much more time for association than the phase pickers for picking. For these reasons, we propose PyOcto, a novel Python-based associator inspired by the octotree data structure. PyOcto is based on the idea of dividing space-time into potential origins. It achieves fast run times by only exploring promising origin regions, making it a high-throughput phase associator. PyOcto is available as open source code with a range of different input and output interfaces for easy use. ## 2 Background Before describing the PyOcto architecture, we introduce the most popular novel seismic phase association methods published within the last years. All described algorithms rely on first arriving P and S phase picks without taking into account later phases. REAL (Zhang et al., 2019) is an optimized grid-search algorithm. Instead of searching a full space-time grid, REAL is based on the assumption that a station close to the event will record the first P pick. Starting with one P pick, a grid search is performed in a volume around the picking stations. This reduces the search space from the whole study area to a smaller volume. In addition, it removes the time dimension from the search, as the approximate origin time for each origin can be inferred from the starting pick. REAL can use homogeneous and 1D velocity models. HEX (Woollam et al., 2020) is a hyperbolic phase associator. Assuming a homogeneous velocity model, it postulates that the picks of one event need to occur on a hyperbola. HEX uses the probabilistic RANSAC algorithm to fit such hyperbolas to the picks. In this algorithm, random candidate sets of picks are drawn and a hyperbola is fit. If the hyperbola contains sufficiently many picks, an event is declared. GaMMA (Zhu et al., 2022) is based on a similar assumption of a hyperbolic moveout but uses a different optimisation scheme. The method interprets the picks as a Gaussian mixture with each event a different mixture component. GaMMA uses an expectation-maximization (EM) algorithm for optimizing the clusters.
As run times for the EM algorithm grow substantially superlinearly with the number of picks, GaMMA uses DBSCAN (Ester et al., 1996) to group picks before applying the EM algorithm to each cluster. Ross et al. (2023) proposed Neuma, a generalisation of GaMMA using an Eikonet (Smith et al., 2020) to enable arbitrary 3D velocity models instead of the homogeneous velocity model. In addition to these optimization based algorithms, several deep learning models have been proposed for phase association. PhaseLink (Ross et al., 2019) uses a recurrent neural network applied to pick times, phase type and station locations to identify pairwise associations between picks. It then employs an aggregation step to infer consensus sets of matching phases that correspond to event detections. GENIE (McBrearty and Beroza, 2023) uses a Graph Neural Network. Similar to PhaseLink, GENIE uses the arrival time, phase type and station location as inputs. In contrast to PhaseLink, GENIE treats all picks jointly and outputs the full association result from the neural network. Both GENIE and PhaseLink are trained on synthetic data generated using 1D velocity models. The training step needs to be conducted once for each target region, afterwards the models can be applied to arbitrary amounts of data. ## 3 Methods In the following sections we present the PyOcto associator. We start with the core algorithms and then discuss details, optimisations and implementation details. A schematic overview of the full associator is provided in Figure 1. Throughout the description we add the parameter names used in the implementation in italics in brackets to allow easier cross-referencing.

Figure 1: Schematic view of the full PyOcto pipeline. The picks are split by time into base nodes. For each base node, the grey box is executed. Several of these boxes can be executed in parallel. Within each box, the space partitioning algorithm (see Figure 2) and the localisation/pick matching steps are conducted. Events are output and finally deduplicated.

### Core algorithm PyOcto is based on partitioning space-time into cells. The key idea is to mimic a grid-search associator while only looking at "useful" grid cells. We achieve this by using a data structure inspired by an octotree with an additional time axis. The data structure consists of a collection of 4D volumes (3D in space, 1D in time), that we will call nodes in the following to highlight the resemblance of a tree data structure. We show a simplified version of this with only one space axis in Figure 2. Each volume/node \(V\) is associated to the list of picks \(picks(V)\) that could have originated from the node. More formally, let \(V\) be a node and \((s,t)\) a pick at station \(s\) at time \(t\).1 Footnote 1: For simplicity we omit the phase of the pick here. The inclusion of phase type is natural and only involves taking different travel time models for P and S waves. We write \[(s,t)\in picks(V)\Leftrightarrow\exists(x_{0},t_{0})\in V:t_{0}+tt(x_{0},s)=t+\epsilon \tag{1}\] with \(tt(x_{0},s)\) the travel time from the origin \(x_{0}\) to the station \(s\). We include an \(\epsilon\) to indicate that the equation only needs to hold up to a given uncertainty (_tolerance_). This uncertainty takes into account inaccuracies in the velocity model and the pick times.
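To make the volume query of Eq. (1) concrete, the following is a minimal sketch for a homogeneous velocity model, following the earliest/latest-arrival argument described in the velocity model section below. The function and variable names are our own; PyOcto's actual implementation is in C++:

```python
import numpy as np

def pick_in_node(pick_time, station, node, v, tolerance):
    # node = (x_min, x_max, t_min, t_max): a 4D volume with 3D corner
    # vectors x_min/x_max and an origin time interval [t_min, t_max]
    x_min, x_max, t_min, t_max = node
    # Closest and furthest points of the spatial box to the station
    closest = np.clip(station, x_min, x_max)
    furthest = np.where(np.abs(station - x_min) > np.abs(station - x_max),
                        x_min, x_max)
    tt_min = np.linalg.norm(station - closest) / v
    tt_max = np.linalg.norm(station - furthest) / v
    # Earliest/latest arrival achievable from any origin in the node
    earliest = t_min + tt_min
    latest = t_max + tt_max
    return earliest - tolerance <= pick_time <= latest + tolerance
```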
There are two crucial insights about the definition of picks belonging to a node. First, while for each pick there exists a location/time in the node where it could have originated, this location/time might be different for each pick. Therefore, a set of picks originating from a node is not a sufficient condition for associating these picks into an event. This becomes obvious when looking at very large nodes. On the other hand, it is a necessary condition, i.e., if there is an event with sufficiently many picks in the dataset, there must be a node that contains all these picks. Second, the assignment of picks to nodes is not unique. A pick might be contained in multiple nodes, even if these nodes are disjoint. However, only few nodes will contain enough picks to produce an event. The key idea of PyOcto is to cleverly identify these nodes. PyOcto starts with a large node spanning the whole study area and a long time. All picks recorded during this time (with adjustments for boundary effects) can be assigned to the node. We initialize a list of active nodes with this node. The association then repeatedly takes the active node with the highest number of picks and performs one of the following actions: * if the node can not create an event anymore: discard node * if the node is small enough: try creating an event * otherwise: split the node and add children to the list of active nodes We use a priority queue for the list of active nodes to efficiently retrieve the node with the highest number of picks. In the following, we describe the different actions. **Splitting a node:** The most common action is splitting a node. For this action, we split the node \(V\) into two disjoint children \(V_{1}\) and \(V_{2}\), such that \(V=V_{1}\cup V_{2}\). We split \(V\) in half along the coordinate axis in which \(V\) has the largest extent. To compare the time axis, we multiply it with a constant velocity, by default 5 km/s. We then build the sets \(picks(V_{1})\) and \(picks(V_{2})\) by iterating over all candidates in \(picks(V)\). This check can easily be performed using equation (1). As noted before, a pick can be assigned to both of these sets at the same time. **Discarding a node:** Essential for the high performance of PyOcto is to discard nodes early if they can not produce an event anymore. For this, we use the following criteria: * minimum number of total picks (_n_picks_) * minimum number of P picks (_n_p_picks_) * minimum number of S picks (_n_s_picks_) * minimum number of stations with both P and S picks (_n_p_and_s_picks_) All thresholds are configurable and should be adjusted to the dataset. As a subvolume can never contain more picks than the parent node, once a node violates any of these criteria it can not create an event anymore and can be discarded.

Figure 2: Schematic view of the gridding scheme with only one spatial dimension and the time dimension. Picks are indicated by crosses, the station locations are marked on the left by black triangles. Two events are contained, marked by red stars with P (solid) and S (dashed) moveout shown in black. The background shows the gridding with each cell's shading corresponding to the number of picks per area. Only cells with at least 6 matching P picks and 6 matching S picks were explored. For each area, only the smallest cell explored is shown, i.e., all larger cells explored before in the same region are not visualised.
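A minimal Python sketch of this node-processing loop is given below. The `Node` interface (methods `num_picks`, `enough_picks`, `size`, `split`, `try_create_event`) and all names are our own illustration; the real implementation is in C++:

```python
import heapq
import itertools

def associate(root_node, min_node_size):
    events = []
    tie = itertools.count()  # tie-breaker so nodes are never compared
    queue = [(-root_node.num_picks(), next(tie), root_node)]
    while queue:
        _, _, node = heapq.heappop(queue)  # node with most picks first
        if not node.enough_picks():        # minimum pick criteria violated
            continue                       # discard node
        if node.size() <= min_node_size:
            event = node.try_create_event()  # locate + match picks
            if event is not None:
                events.append(event)
            continue
        for child in node.split():         # halve along the longest axis
            heapq.heappush(queue, (-child.num_picks(), next(tie), child))
    return events
```

The negated pick counts turn Python's min-heap into the required max-priority queue over the number of picks per node.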
**Creating an event:** If a cell is smaller than a predefined threshold along all axes (_min_node_size_), PyOcto tries creating an event. For this, we locate an event based on all picks in a cell. The full localisation procedure is described in Section 3.2. We then identify whether all picks fit the determined location and remove potential outliers. These outliers might occur as not all picks in the node need to necessarily stem from the same source location/time. In addition, we scan all other picks to identify if further picks are consistent with the list of picks. This operation can be performed efficiently using a binary search in time. We add these picks to the list of picks. This procedure is repeated multiple times (_refinement_iterations_), by default 3, to stabilise the event. If at any point in this iteration the picks do not fulfill the conditions for nodes outlined above, the event creation is stopped as unsuccessful. Even though the node already gives a preliminary location and station set, the location procedure is required for multiple reasons. First, while the node groups a candidate set of picks, there is no guarantee that all of these can be associated to a common origin. Second, the optimal location for a set of picks does not necessarily need to fall within the node, in particular, because the same set of picks can be contained in multiple nodes. This is also the reason why it might be possible to associate additional picks to the location. While traversing the nodes by number of picks makes it likely to select nodes already containing the majority of picks for an event, this can not be guaranteed in face of spurious picks. In contrast to some other associators (e.g., GaMMA), PyOcto can not use amplitude information for association. However, obtaining accurate amplitudes for events at low signal-to-noise levels, as for the majority of events detected with deep learning, is challenging. From our anecdotal experiments on real data, we did not see a major advantage from the use of amplitude information. ### Localisation procedure To identify the most likely origin for a set of picks, we use the equal differential-time (EDT) loss (Lomax et al., 2000). Compared to an L2 loss on the travel time residual, the EDT loss has two advantages. First, it is independent of the origin time, thereby reducing the search space. Second, it is more stable against outlier picks. As we expect outliers to be contained in our pick set, this is a useful property for our application. To find the minimum of the EDT loss, we use a greedy algorithm. Starting with the whole study volume, we split the volume in half \(k\) times (_location_split_depth_) into \(2^{k}\) subvolumes. For each subvolume, we calculate the EDT loss at the volume center. From the volume with the lowest EDT loss, we go up \(l\) splits (_location_split_return_). This volume, with a size of \(2^{k-l}\), is used as the new start for the search and we repeat the splitting and search procedure. We iterate this step until the volume reaches a predefined size (_min_node_size_location_). This greedy algorithm has a trade-off between accuracy and runtime. When splitting the volume into only a few pieces and using a low \(l\), this leads to low runtime but potentially suboptimal minima. On the other hand, too fine splitting in each step will increase runtimes at virtually no gains in location accuracy. We set the default to \(k=6\) and \(l=4\), but make the parameters individually configurable. We note that insufficient values for \(k\) and \(l\) can lead to striping artifacts, i.e., locations at the edges of larger volumes caused by insufficient sampling.
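The key property of the EDT loss is that for picks \(i,j\) the differential residual \(r_{ij}=(t_{i}-t_{j})-(tt_{i}-tt_{j})\) vanishes for a perfect origin regardless of origin time. The following is a simplified Gaussian-sum illustration of such a misfit, loosely following Lomax et al. (2000); the exact functional form and all names are our simplification, not PyOcto's code:

```python
import itertools
import numpy as np

def edt_misfit(origin, pick_times, stations, travel_time, sigma=0.4):
    # travel_time(origin, station) is an assumed user-supplied function
    tt = np.array([travel_time(origin, s) for s in stations])
    score = 0.0
    for i, j in itertools.combinations(range(len(pick_times)), 2):
        # Differential residual: zero for a perfect origin, independent
        # of origin time, and bounded influence of any single outlier
        r = (pick_times[i] - pick_times[j]) - (tt[i] - tt[j])
        score += np.exp(-r**2 / (2 * sigma**2))
    return -score  # lower is better; to be minimised over candidate origins
```

Because each pair contributes at most one unit to the score, an outlier pick degrades only the pairs it participates in, which is the robustness property exploited above.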
### Velocity models At its core, PyOcto relies on travel times. These travel times need to be obtained from seismic velocity models. Two types of queries occur in the PyOcto algorithm. First and most commonly, volume queries of type \((s,t)\in\mathit{picks}(V)\), i.e., identifying if a pick can originate from a volume. Second, for the localisation algorithm, traditional travel times between the proposed origin and the station are required. Both of these queries will be executed in very high numbers and therefore need to be implemented efficiently. PyOcto implements two velocity models, a homogeneous model and a 1D layered model. For the homogeneous model, we assume constant P and S velocities. To solve the volume query, we identify the earliest and latest times a pick from the volume could arrive at the station. The earliest time is achieved by the earliest origin time in the volume plus the travel time to the closest point in the volume. Similarly, the latest time can be derived using the point with the highest distance to the station. The derivation of the travel times from a fixed origin is trivial using the Pythagorean theorem. Both queries run in \(\mathcal{O}(1)\) time. For the layered velocity model, we use a precalculation step to substantially improve performance. First, we calculate P and S arrival times on a dense grid using an eikonal solver. This step takes a few seconds but only needs to be run once. For extracting travel times we run 2D bilinear interpolation between the 4 closest grid nodes. For the area queries, i.e., if a pick can result from a volume, we use the observation that for a 1D velocity model, the shortest travel time must be at the closest epicentral distance and the longest travel time at the furthest. However, it is not a priori clear at which depth these times occur. Potential candidates are the shallowest and deepest points of the queried depth interval, plus all local extrema within the depth interval. To efficiently query the local extrema, we cache all local extrema at each distance. As for typical velocity models each distance has at most a handful of local extrema, they can simply all be checked when necessary. In addition, to correct for station elevation, we add an elevation correction based on a constant velocity and vertical incidence. While this is an approximation, errors are negligible for association purposes. To determine travel times for localization, we use bilinear interpolation between the 4 closest precalculated travel times. PyOcto does not support 3D velocity models as performing efficient, i.e., constant run time, volume queries as required for the splitting algorithm is non-trivial. This is identical to most common algorithms, which are limited to homogeneous or 1D models. In contrast, deep learning models are able to use arbitrarily complex models. PyOcto supports different velocity models for the splitting and the localisation step. In principle, it would be easy to extend the localisation step to 3D models. However, we have not tested this and only expect substantial improvements for regions with velocity structures strongly deviating from a layered model. PyOcto supports station terms, i.e., constant time offsets for phase arrivals at a station, which can occur due to local structure. We implement additive station terms, i.e., the station term is added to the predicted travel time from the velocity model. This is the same sign convention as used by NonLinLoc (Lomax et al., 2000). Station terms are not determined dynamically but have to be defined before running the association. However, they can be obtained by iteratively running PyOcto and inferring station terms from the residuals of the previous run.
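As an illustration of the precalculation idea, a lookup into a precomputed travel-time grid via bilinear interpolation might look like the following sketch. The class and its assumed grid layout (distances by depths) are our own construction, not PyOcto's C++ code:

```python
import numpy as np

class TravelTimeTable:
    """Bilinear interpolation into a precomputed travel-time grid
    tt[i, j] for epicentral distances dists[i] and depths depths[j]."""
    def __init__(self, dists, depths, tt):
        self.dists, self.depths, self.tt = dists, depths, tt

    def __call__(self, dist, depth):
        # Indices of the lower-left grid node, clipped to interior cells
        i = int(np.clip(np.searchsorted(self.dists, dist, side="right") - 1,
                        0, len(self.dists) - 2))
        j = int(np.clip(np.searchsorted(self.depths, depth, side="right") - 1,
                        0, len(self.depths) - 2))
        # Fractional position inside the grid cell
        u = (dist - self.dists[i]) / (self.dists[i+1] - self.dists[i])
        v = (depth - self.depths[j]) / (self.depths[j+1] - self.depths[j])
        return ((1-u)*(1-v)*self.tt[i, j]   + u*(1-v)*self.tt[i+1, j]
              + (1-u)*v    *self.tt[i, j+1] + u*v    *self.tt[i+1, j+1])
```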
For efficient calculation of distances, PyOcto relies on local coordinate transforms. By default, we suggest transverse Mercator projections. The transformation from latitude and longitude values to local coordinates needs to be performed only once before the association step. While distance measures will become inaccurate for very large study areas, we did not observe any issues in our case studies with diameters up to \(\sim 1500\) km. ### Initialisation As described in the introduction of the algorithm, the association starts with a node spanning the whole study area. In principle, this node could also span the whole study time. However, in practice this is suboptimal because it will require several costly splits along the time axis that are mostly trivial. Instead, we do not start with a single node but with a list of base nodes. Each base node spans the whole study area but only a part of the time. For this, we split the time into regular, non-intersecting segments, by default 20 minutes long (_time_slicing_). Each segment is then filled with all picks that originate during the segment plus the ones occurring in a buffer time before the start of the segment (_time_before_). This buffer time should be roughly the maximum travel time through the study area. As two subsequent base nodes might both contain all picks for one event, the early splitting might lead to duplicate events. For this reason, we deduplicate the events after all base nodes have been processed. ### Optimisations While the splitting algorithm with early stopping is a solid basis for an efficient algorithm, several points need to be taken into account that might affect runtime. Before going into details, we review the general runtime principles. While a formal analysis of algorithm complexity is difficult, we can make several observations. First, run time crucially depends on the number of nodes processed. It is therefore essential to stop the processing of each branch of the search tree as early as possible. Second, location procedures are expensive as they require many travel time queries. They should therefore not be triggered too often. Based on these observations, we define multiple optimisations. As a first optimisation, PyOcto keeps track of all picks that have already been assigned to events. Once a pick has been assigned to an event, it is not considered anymore and removed from all nodes. Without these picks, the adjacent nodes most likely will not fulfill the necessary minimum number of picks. This step substantially improves runtime, as events will usually produce many adjacent nodes with high numbers of picks which do not need to be processed multiple times. The second observation treats the case of a group of picks that can not be associated to a common origin. The same group of picks often appears at many neighboring nodes. As trying to create an event from these picks does not yield a consistent origin, these picks are not marked as used. As often many neighboring cells contain the same set of picks, this leads to repeated but useless tries of locating the same set of picks. To mitigate this situation, we cache all sets of picks that have been processed as candidate sets for localisation. If a set has been processed before, it will be skipped in the next try. Note that this optimisation only works because the location search depends only on the pick set but not on the location of a node.
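This caching can be sketched with a set of hashable pick collections (an illustrative Python fragment; the real code uses C++ data structures, and the names here are ours):

```python
# Illustrative cache of pick sets already tried for localisation.
# Each pick is assumed hashable, e.g. a (station, phase, time) tuple.
tried_sets = set()

def try_localise_once(picks, localise):
    # Skip localisation if exactly this pick set was processed before;
    # valid because the location search depends only on the pick set,
    # not on the node it came from
    key = frozenset(picks)
    if key in tried_sets:
        return None
    tried_sets.add(key)
    return localise(picks)
```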
The last optimisation is relevant in the case of a large number of stations with spurious picks. With a growing number of stations, it becomes likely that a set of distant stations by chance produces picks that can be associated. This does not only lead to false detections but also substantially increases run time. At the same time, these false events are easy to identify manually because of the inconsistent pick pattern, i.e., the existence of many non-picking stations between the picking stations. To remove this issue, we introduce two distance conditions, a relative and an absolute condition. The absolute condition is a simple cutoff on the maximum distance between stations and sources for the space partitioning (_association_cutoff_distance_). This condition excludes picks too far from a given cell when checking if the pick could have originated there. However, in the localisation and pick matching step, all picks are taken into account, ensuring that the output contains all associated picks even at larger distance. This condition is most helpful in large, homogeneous networks and in networks without large amounts of out-of-network events. For the case of inhomogeneous networks or networks with substantial out-of-network events, we introduce a relative distance condition, based on the assumption that it is unlikely for a station to detect an event if many closer stations did not detect it. For every distance from a volume, we can calculate the fraction of stations within this distance that have at least one pick compared to the total number of stations. We then identify the maximum distance where this fraction is still above a predefined threshold (_min_pick_fraction_). All picks at stations beyond this distance are removed. As the nodes have a spatial extent, for each station we choose the distance maximizing the number of retained picks. This means that for stations with picks we use the minimum distance to the node while for all other stations we use the maximum distance. While this optimization yields substantial runtime improvements for datasets with high numbers of stations, it comes at a cost. To check the condition, at every node the distance to all existing stations needs to be calculated. For small deployments, associations by chance are anyhow unlikely, rendering the additional runtime mostly useless. The optimisation can therefore be deactivated. Lastly, PyOcto uses a memory protection strategy. As PyOcto processes nodes ordered by their number of picks, it needs to always hold a queue of active nodes. This can, in the worst case, degrade into a breadth-first search, which is very memory intensive. Therefore, once the total number of nodes exceeds a predefined threshold (_queue_memory_protection_dfs_size_), PyOcto processes the next nodes using depth-first search. This is highly memory efficient, as only the current call stack needs to be kept in memory. At the same time, this can lead to increased run times. In our experiments, this optimization was only required for very large sets of picks in short times (>100,000 picks per day). ### Implementation PyOcto is implemented in Python and C++. The interface of PyOcto is implemented in Python to provide an accessible interface in a common scripting language. Inputs and outputs are passed as Pandas data frames. PyOcto has a slim set of dependencies. The backend of PyOcto is implemented in C++.
The functions are natively embedded into Python using pybind11. The association function is parallelised using pthreads. Parallelisation is achieved by assigning base nodes to threads. This causes very low synchronisation overhead as only the base node queue and the event list are shared between threads. The list of used picks is not shared between threads, instead events are deduplicated at the end of the association step. By default, PyOcto uses all available threads. However, the thread count can be set manually (_n_threads_). To allow an easy experimentation with PyOcto, the software implements several compatibility interfaces: * a function to read the input format from GaMMA (Zhu et al., 2022) * a function to read the input format from REAL (Zhang et al., 2019) * a function to process SeisBench picks (Woollam et al., 2022) * a function to use obspy Inventory objects as input (Beyreuther et al., 2010) * an output interface for NonLinLoc (Lomax et al., 2000) * an automated selection strategy for local coordinate transforms PyOcto is available as open source code under the MIT license, a permissive open-source license. Pre-built wheels for Linux, Mac OS, and Windows are available on PyPI and can be installed using pip. ## 4 Benchmark on synthetic catalogs ### Setup To quantitatively assess the quality of PyOcto, we test it on synthetic catalogs. We use two complementary scenarios: (i) uniformly distributed seismicity in a shallow layer; (ii) realistic subduction zone seismicity. We compare the proposed PyOcto algorithm to two established associators: GaMMA and REAL. We choose these algorithms as they have well-documented, open-source implementations and have both been used in numerous application cases already (Wilding et al., 2023; Gonzalez-Vidal et al., 2023; Tan et al., 2021; Liu et al., 2020). We do not compare to any deep learning associators, as optimizing these associators requires substantially more parameter choices and a fair comparison is therefore harder to guarantee. Note that this study is not intended as a full-scale benchmark of seismic phase associators as this would be out of scope for the paper. Instead, we restrict ourselves to this smaller-scale case study. Both scenarios use the same procedure for data generation. Each test case consists of one day of seismicity with a predefined number of events and a predefined noise rate. For each event, we draw a source time uniformly within the day and draw a location and a magnitude from the distributions described below. Based on the magnitude and hypocentral distance, we estimate detection probabilities at each station. From these probabilities we randomly select whether a station has a P and an S arrival using correlated Bernoulli variables with correlation 0.5 between the two phases. We predict travel times using a 1D velocity model from Graeber and Asch (1999). To each individual travel time we add a Gaussian random normal variable with standard deviation 1 % of the total travel time but at least 0.4 s standard deviation. Finally, we add noise picks not associated to any event to the data set. The number of noise picks is defined as the product of the number of event picks times the user-defined noise rate. For each pick, the phase, time and station are drawn according to a uniform random distribution. We use event numbers of 100, 500 and 2000, and noise rates of 0.3, 1.0 and 3.0.
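A sketch of the pick-generation step for a single event might look as follows. We assume precomputed per-station travel times and detection probabilities; the shared-coin construction of the correlated Bernoulli draws and all names are our own illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

def synthesize_event_picks(origin_time, tt_p, tt_s, p_detect):
    # tt_p/tt_s: per-station P and S travel times;
    # p_detect: per-station detection probabilities
    picks = []
    for sta, (tp, ts, pd) in enumerate(zip(tt_p, tt_s, p_detect)):
        # Correlated Bernoulli detections: with probability 0.5 both
        # phases share one draw, otherwise they are independent,
        # giving a correlation of 0.5 between P and S detections
        if rng.random() < 0.5:
            has_p = has_s = rng.random() < pd
        else:
            has_p, has_s = rng.random() < pd, rng.random() < pd
        for phase, tt, has in (("P", tp, has_p), ("S", ts, has_s)):
            if has:
                sigma = max(0.01 * tt, 0.4)  # 1 % of travel time, >= 0.4 s
                picks.append((sta, phase,
                              origin_time + tt + rng.normal(0.0, sigma)))
    return picks
```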
For PyOcto and REAL, we report results for the versions with homogeneous velocity models and with 1D layered velocity models. We provide the associators with the same velocity model we used for data generation. We therefore expect slightly too optimistic performance results for the 1D models; however, the comparison between the models should still provide reasonable results. For all associators, we require at least 10 picks for an event detection. We furthermore require at least 4 stations with both a P and an S pick for REAL and PyOcto. We do not enforce the last condition for GaMMA, as the option is not implemented. We ensure that all events in our synthetic catalogs fulfill these conditions.

We evaluate the associators based on six metrics: precision, recall, F1 score, missing picks per event, incorrectly associated picks per event, and run time. Precision is the fraction of detections that correspond to cataloged events. Recall is the fraction of cataloged events that are detected. The F1 score is the harmonic mean of precision and recall. To calculate these metrics, we define matches between cataloged and detected events through their picks: a cataloged event \(A\) and a detected event \(B\) are considered a match if at least 60 % of the picks of \(A\) are also picks of \(B\) and vice versa (a minimal sketch of this criterion is given below). We use a pick-based matching instead of a location- and time-based matching, as it is more stable for high event densities.

We execute the tests on 16 virtual CPU cores (8 physical cores) with 64 GB main memory. We measure runtimes from the invocation to the output of the models. We do not measure data-independent preprocessing steps, such as velocity model building, as these steps only need to be executed once in an application scenario. Exact machine configurations can vary slightly between tests; the reported runtimes should therefore be interpreted as an indication rather than an exact measure. We limit the total aggregated runtime of all tests per associator to 48 h. All tests not finished at this point are reported as missing.

### Uniform shallow seismicity

As a first scenario, we study shallow seismicity. We use 100 stations arranged in a 10x10 grid with a station spacing of \(0.2^{\circ}\times 0.2^{\circ}\). Event locations are randomly distributed within the network with depths of up to 30 km. No out-of-network events are generated. Magnitudes are generated from a Gutenberg-Richter distribution with a minimum magnitude of 0.5 and \(b=1\). Dataset statistics are reported in Table 1.

Figure 3 shows the performance metrics for the shallow scenario. Full results in numerical form can be found in Table S4. PyOcto and REAL obtained results for all cases with both the homogeneous and the 1D velocity model. GaMMA did not provide solutions for the cases beyond 500 events and a noise factor of 1.0, as the computation did not finish within the 48 h time limit. In all cases, PyOcto achieves the highest F1 score or a result within 0.01 F1 score of the best model. The 1D model slightly outperforms the homogeneous model. REAL with a homogeneous model achieved a slightly worse performance, followed by REAL with a 1D model.
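For reference, the pick-based matching criterion described in the setup can be written down in a few lines. This is an illustrative re-implementation of the stated 60 % mutual-overlap rule, not the benchmark's actual evaluation code.

```python
def events_match(picks_a, picks_b, threshold=0.6):
    """Events match if at least `threshold` of the picks of A are picks
    of B and vice versa. Picks are hashable identifiers."""
    set_a, set_b = set(picks_a), set(picks_b)
    if not set_a or not set_b:
        return False
    shared = len(set_a & set_b)
    return (shared / len(set_a) >= threshold
            and shared / len(set_b) >= threshold)

def precision_recall_f1(cataloged, detected):
    """Both arguments are lists of pick-ID collections, one per event."""
    true_detections = sum(
        any(events_match(det, cat) for cat in cataloged) for det in detected)
    found_events = sum(
        any(events_match(cat, det) for det in detected) for cat in cataloged)
    precision = true_detections / len(detected) if detected else 0.0
    recall = found_events / len(cataloged) if cataloged else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```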
\begin{table}
\begin{tabular}{r r r r r r r} \hline Events & Noise & Event picks & Noise picks & Total picks & Picks per event & Picks per station \\ \hline 100 & 0.3 & 4,047 & 1,214 & 5,261 & 40.47 & 52.61 \\ 100 & 1.0 & 4,894 & 4,894 & 9,788 & 48.94 & 97.88 \\ 100 & 3.0 & 5,257 & 15,771 & 21,028 & 52.57 & 210.28 \\ 500 & 0.3 & 25,658 & 7,697 & 33,355 & 51.32 & 333.55 \\ 500 & 1.0 & 24,525 & 24,525 & 49,050 & 49.05 & 490.50 \\ 500 & 3.0 & 23,646 & 70,938 & 94,584 & 47.29 & 945.84 \\ 2,000 & 0.3 & 101,614 & 30,484 & 132,098 & 50.81 & 1320.98 \\ 2,000 & 1.0 & 98,680 & 98,680 & 197,360 & 49.34 & 1973.60 \\ 2,000 & 3.0 & 94,710 & 284,130 & 378,840 & 47.35 & 3788.40 \\ \hline \end{tabular}
\end{table}
Table 1: Dataset statistics for the shallow seismicity scenario. We do not differentiate between P and S picks, as both are generated in almost equal number. The picks per station include the noise picks.

\begin{table}
\begin{tabular}{r r r r r r r} \hline Events & Noise & Event picks & Noise picks & Total picks & Picks per event & Picks per station \\ \hline 100 & 0.3 & 2,241 & 672 & 2,913 & 22.41 & 145.65 \\ 100 & 1.0 & 2,331 & 2,331 & 4,662 & 23.31 & 233.10 \\ 100 & 3.0 & 2,142 & 6,426 & 8,568 & 21.42 & 428.40 \\ 500 & 0.3 & 11,414 & 3,424 & 14,838 & 22.83 & 741.90 \\ 500 & 1.0 & 11,194 & 11,194 & 22,388 & 22.39 & 1119.40 \\ 500 & 3.0 & 10,818 & 32,454 & 43,272 & 21.64 & 2163.60 \\ 2,000 & 0.3 & 45,544 & 13,663 & 59,207 & 22.77 & 2960.35 \\ 2,000 & 1.0 & 45,011 & 45,011 & 90,022 & 22.51 & 4501.10 \\ 2,000 & 3.0 & 45,213 & 135,639 & 180,852 & 22.61 & 9042.60 \\ \hline \end{tabular}
\end{table}
Table 2: Dataset statistics for the subduction scenario. We do not differentiate between P and S picks, as both are generated in almost equal number. The picks per station include the noise picks.

Figure 3: Synthetic evaluation of the different associators in the shallow seismicity scenario. Each associator is indicated by a color. For the missing/additional picks, missing picks are indicated with a bar below 0, additional picks with a bar above 0. Missing results due to exceeded runtimes are indicated by grey Xs. A result for REAL 1D with 100 events and 1.0 noise is not available as the model reproducibly crashed with a segmentation fault. All results in numerical form are reported in Table S4.

Figure 4: Synthetic evaluation of the different associators in the subduction scenario. For further details see the caption of Figure 3. All results in numerical form are reported in Table S4.

GaMMA shows a clear degradation in F1 score with growing numbers of event or noise picks but still achieves good performance (F1 \(\geq\) 0.89) for all cases where solutions were obtained. For the case with 2000 events and a noise factor of 3.0, REAL (homogeneous, 0.84) performs best, closely followed by PyOcto (1D, 0.83), REAL (1D, 0.74), and PyOcto (homogeneous, 0.67). We suspect that REAL shows slightly better performance here because the actual grid search is less affected by noise picks than the approximation using space partitioning used in PyOcto. We note that this case is extremely challenging, with each station reporting on average one pick every 23 s. Up to 500 events and a noise rate of 1.0, PyOcto (1D and homogeneous) and GaMMA are very exact in terms of picks, with few additional or missed picks. In contrast, REAL (homogeneous) misses roughly 3 picks per event, REAL (1D) between 5 and 10.
While we are not fully certain about the missed picks, we assume it is because REAL discards picks based on the ratio between station residuals and event residuals, i.e., a low average pick residual for an event will lead to discarding picks with higher residuals even if their absolute value is not excessively high. We note that the number of missed picks for REAL could likely be reduced through targeted parameter tuning. For configurations with high numbers of events, in particular in conjunction with high noise, REAL and PyOcto both include false picks with the events. PyOcto includes more false picks than REAL, again likely related to the selection criteria. The homogeneous version of PyOcto produces about 1.5 times as many false picks as the 1D variant, likely because of the overall higher tolerance value necessary to mitigate the less accurate velocity model.

In terms of run time, PyOcto substantially outperforms GaMMA and REAL in all cases. The run time factor between PyOcto and the next-fastest method exceeds 10 in almost all cases, often even reaching factors of 20 and above. Run times for the homogeneous and the 1D velocity model for PyOcto are almost identical in all cases. We suspect that while the travel time lookup for the 1D velocity model is slightly more expensive than for the homogeneous model, this effect is offset by more focused origins from the better travel times, leading to fewer nodes that need to be explored.

### Subduction zone

For the subduction zone scenario, we base our catalog on the IPOC network (GFZ German Research Centre For Geosciences and Institut Des Sciences De L'Univers-Centre National De La Recherche CNRS-INSU, 2006) and the catalog by Sippl et al. (2018). We chose the deployment and the catalog as a typical example of a well-instrumented, highly active subduction zone with diverse seismicity. We draw event locations and event magnitudes independently from the catalog. We use the IPOC stations, in total 20 stations. The study area covers approximately \(5^{\circ}\) North-South and \(3^{\circ}\) East-West up to a depth of 200 km. Out-of-network seismicity is located up to \(1^{\circ}\) from the network. This is a typical challenge for associators in subduction zones, where offshore events occur substantially outside the network. Dataset statistics are reported in Table 2.

The results in the subduction scenario largely mirror the ones from the shallow scenario, but with nuanced differences that we point out in the following (Figure 4, Table S4). First, the difference between 1D and homogeneous models is more pronounced, with 1D models clearly outperforming homogeneous models in terms of F1 score. Furthermore, the homogeneous models (GaMMA, REAL, and PyOcto) consistently miss around 2.5 picks per event. This highlights that the assumption of a homogeneous velocity model is insufficient for subduction zones. Nonetheless, PyOcto and REAL with a homogeneous velocity model still achieve F1 scores consistently above 0.93 for cases with up to 500 events. In contrast, GaMMA's performance is clearly below the other models already at 100 events per day and degrades substantially beyond that. Second, among the 1D models, PyOcto outperforms REAL more clearly than in the shallow case. It consistently exhibits a higher F1 score and lower numbers of missed and false picks. Even at 2000 events with a noise rate of 3.0 (with, on average, one pick per station every 9.5 s), PyOcto still achieves an F1 score of 0.57.
Third, run time differences are even more pronounced, with PyOcto outperforming REAL often by a factor of 1000. This is caused by the larger search grid required by REAL to handle the depth range and the out-of-network events. We note that we already reduce the impact of the larger grid size for REAL by using a larger grid spacing for the subduction scenario. In contrast, PyOcto can easily handle large search domains due to its splitting approach, which scales logarithmically with volume. For the subduction scenario, PyOcto with a 1D model in most cases only needs about half the time of PyOcto with a homogeneous velocity model. This suggests that the more accurate velocity model leads to fewer nodes needing to be explored. GaMMA shows competitive run times compared to PyOcto and REAL for cases with 100 events, but run times substantially exceed the ones of REAL (and thus even more so PyOcto) at 500 events and above. No solutions for 2000 events and noise rates of 1.0 and 3.0 could be obtained.

## 5 Application to the 2014 Iquique sequence

In addition to the synthetic tests, we evaluate the different associators on a real scenario. For this, we study the 2014 Iquique sequence. Preceded by an 8-month-long slow slip transient, the 2014 Iquique sequence included a magnitude 6.6 foreshock on 16th March and the mainshock on the evening of 1st April (Socquet et al., 2017; Soto et al., 2019). We look at the time between 15th March 2014 and 15th April 2014. This time span includes the largest foreshock, the mainshock, and the phase of most intense aftershock activity. For this study, we use data from the 20 stations of the CX network. We note that generally more stations from other networks are available in the area. However, as we do not aim to produce a comprehensive catalog but rather to test the associators, we restrict ourselves to the high-quality CX stations.

Using the CX data, we build a small earthquake detection workflow. First, we pick P and S arrivals in the continuous waveforms using PhaseNet (Zhu and Beroza, 2019) trained on INSTANCE (Michelini et al., 2021) using SeisBench (Woollam et al., 2022). We use a pick threshold of 0.05 for both P and S waves, i.e., every pick that has a confidence value above 0.05 assigned to it by the deep learning picker is treated as an arrival. This is intentionally a very low threshold to further stress test the associators. Second, we pass the picks to each associator to obtain catalogs. For each associator, we provide picks in daily chunks. As in our benchmark, we require at least 10 picks and 4 stations with both P and S picks. We note that this is an extremely simplistic catalog generation workflow that misses essential postprocessing steps, such as absolute and relative relocation or magnitude estimation. However, it is sufficient to investigate the differences between the associators.

Figure 5 shows the seismicity in the IPOC area, including Northern Chile, as determined with the different associators.

Figure 5: Catalogs generated for the Iquique sequence (15th March 2014 to 15th April 2014) using different phase associators. We visualize the output locations as provided by the associators. Please note that in a comprehensive workflow, absolute and relative relocation techniques should be used as a refinement step. Cross section plots are shown in Figure S1.
All catalogs clearly show the main features of the seismicity: an intense cluster of events around the Iquique mainshock in the North-West, moderate seismicity along the subducting slab, and a strong band of deeper seismicity. Table 3 shows statistics for the number of events per catalog, the number of associated picks, and the fraction of total picks associated. Overall, the PyOcto and REAL catalogs are largest, with the catalogs from REAL containing slightly more events. For both PyOcto and REAL, the catalogs with homogeneous velocity models are slightly larger. The catalog from GaMMA is about a quarter smaller. Overall, PyOcto and REAL associated between 43 % and 46 % of all picks, while GaMMA associated 34 %. We note that this does not imply that all remaining picks are incorrect, as many might stem from events that have not been recorded at sufficiently many stations to meet the quality control criteria or even be associated.

Figure 6 shows the daily number of events and the average number of P and S picks per event per day. Across all days, the number of events is very similar between all variants of REAL and PyOcto, with PyOcto always detecting slightly more events than REAL in the early parts of the aftershock sequence. GaMMA consistently finds fewer events, with the absolute and relative difference becoming particularly large on days with a high seismicity rate. This indicates that the model is less able to deal with high rates of seismicity. However, comparing the seismicity rate to the expected Omori decay in activity, it is apparent that all models miss events in the earliest days after the mainshock. Our results cannot distinguish whether this is a limitation of the picking model or the association models.

Looking at the average number of picks per event, the only noticeable difference between the associators is that REAL consistently finds about 0.6 more S picks per event. However, the temporal development of picks per event is interesting. Overall, the number of P picks per event seems to correlate slightly positively with the total number of events. For the S picks, the rate of association also follows systematic patterns across all associators, but a correlation with the number of events is not as apparent.

\begin{table}
\begin{tabular}{l r r r r r r r r r} \hline \hline Associator & Events & ppe & P ppe & S ppe & Associated & P associated & S associated & Total picks & Time [s] \\ \hline GaMMA & 12,718 & 16.92 & 9.90 & 7.02 & 0.34 & 0.31 & 0.39 & 634,647 & 1021 \\ PyOcto & 16,660 & 16.77 & 9.62 & 7.15 & 0.44 & 0.39 & 0.52 & 634,647 & 12 \\ PyOcto 1D & 16,362 & 16.56 & 9.49 & 7.06 & 0.43 & 0.39 & 0.50 & 634,647 & 15 \\ REAL & 16,747 & 17.35 & 9.66 & 7.69 & 0.46 & 0.40 & 0.56 & 634,647 & 1487 \\ REAL 1D & 16,489 & 17.51 & 9.78 & 7.73 & 0.46 & 0.40 & 0.55 & 634,647 & 1557 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Catalog statistics for the Iquique sequence catalog with different associators. The table shows the number of events, the picks per event, the fraction of associated picks among all picks, and the total number of picks. We abbreviate _picks per event_ as _ppe_. Times refer to average run times per day of data.

Figure 6: Daily earthquake rates, daily number of associated P picks per event, and daily number of associated S picks for the catalogs generated using the different associators. Vertical black lines indicate the times of the largest foreshock and the mainshock.
We suggest that the shifts in the number of associated picks are related to the picker performance over time, which is in turn affected by the event distribution. More large events will cause more impulsive, i.e., easier-to-detect, arrivals. At the same time, a higher seismicity rate will also cause higher noise levels, making phase detection and picking overall more challenging.

While this study does not focus on the location accuracy of the different associators, as we do not perceive locations as the main output of the models, we still provide a brief analysis of our findings in the Iquique sequence. Each method produces a distinct signature of location artifacts (Figures 5 and S1). GaMMA features a substantial number of shallow detections not present in the other catalogs. These are primarily mislocations, likely caused by the initialisation of the sources for the expectation-maximization algorithm at the surface. They occur primarily outside the network. REAL shows clear gridding artifacts caused by the discretisation of the search grid. Finer search grids would reduce this effect but come at a substantial compute cost, with halving the grid spacing leading to roughly 8 times longer run time. PyOcto shows line-shaped artifacts; these are particularly visible with regard to event depth. These stripes are caused by failures in the minimization of the EDT loss in the localization procedure. The artifact is more pronounced for the homogeneous velocity model than for the 1D velocity model, likely because the EDT loss is more focused for the 1D model. Stripes could be reduced or eliminated by increasing the sampling depth in the octotree search for localization; however, this would lead to increased runtime. In conclusion, while all associators give a good overview of the general spatial patterns of the seismicity, the locations should only be treated as preliminary estimates. For accurate locations, absolute or relative relocation tools, e.g., NonLinLoc (Lomax et al., 2000) or HypoDD (Waldhauser, 2001), should be employed.

We measured average runtimes per day for each associator. As in the synthetic benchmark, PyOcto was by far the fastest, taking 12 s (homogeneous) / 15 s (1D model). GaMMA took about 17 minutes per day; REAL took 25 minutes (homogeneous) / 26 minutes (1D model). This means a speed-up factor of 70 to 130 for PyOcto compared to the baselines. As a reference, loading the waveform data from disk and picking it took around 60 to 90 s per day. This means that in this scenario, run times for PyOcto association are one order of magnitude below the times for picking, while for the other associators the association largely dominates the total run time.

## 6 Conclusion

In this paper, we introduced PyOcto, a novel seismic phase associator based on space-time partitioning. We tested PyOcto in two distinct synthetic earthquake scenarios with different numbers of events and different noise levels. PyOcto consistently showed detection performance on par with or even superior to the state-of-the-art approaches GaMMA and REAL. At the same time, PyOcto achieves substantial speedups, often with factors above 50. We furthermore compared the algorithms on the challenging 2014 Iquique sequence. Here too, PyOcto produces a very complete seismicity catalog. Similar to the synthetic cases, PyOcto again achieves a speedup of above 70 compared to the other associators, with phase association taking substantially less time than phase picking.
This makes the algorithm future-proof in the face of ever-growing seismic networks and potentially more sensitive future phase pickers. PyOcto is available as an open-source tool.

## Acknowledgements

This work has been partially supported by MIAI@Grenoble Alpes (ANR-19-P3IA-0003). I thank Frederik Tilmann and Marius Isken for insightful discussions that helped improve the algorithm design. I thank Sophie Giffard-Roisin for her comments that helped improve the manuscript.

## Data and code availability

PyOcto is available at [https://github.com/yetinam/pyocto](https://github.com/yetinam/pyocto) and Zenodo (publication with DOI in progress). The code for the benchmark is available in the same repository. PyOcto can be installed from PyPI using pip. Waveform data for the CX network ([https://doi.org/10.14470/PK615318](https://doi.org/10.14470/PK615318)) was obtained through the GEOFON FDSN webservice.

## Competing interests

The author has no competing interests.
2307.15926
Exposing Hidden Attackers in Industrial Control Systems using Micro-distortions
For industrial control systems (ICS), many existing defense solutions focus on detecting attacks only when they make the system behave anomalously. Instead, in this work, we study how to detect attackers who are still in their hiding phase. Specifically, we consider an off-path false-data-injection attacker who makes the original sensor's readings unavailable and then impersonates that sensor by sending out legitimate-looking fake readings, so that she can stay hidden in the system for a prolonged period of time (e.g., to gain more information or to launch the actual devastating attack at a specific time). To expose such hidden attackers, our approach relies on continuous injection of ``micro distortion'' to the original sensor's readings, either through digital or physical means. We keep the distortions strictly within a small magnitude (e.g., $0.5\%$ of the possible operating value range) to ensure that they do not affect the normal functioning of the ICS. Micro-distortions are generated based on secret key(s) shared only between the targeted sensor and the defender. For digitally-inserted micro-distortions, we propose and discuss the pros and cons of a two-layer least-significant-bit-based detection algorithm. Alternatively, when the micro-distortions are added physically, a main design challenge is to ensure the introduced micro-distortions do not get overwhelmed by the fluctuation of actual readings and can still provide accurate detection capability. Towards that, we propose a simple yet effective Filtered-$\Delta$-Mean-Difference algorithm that can expose the hidden attackers in a highly accurate and fast manner. We demonstrate the effectiveness and versatility of our defense by using real-world sensor reading traces from different industrial control (including smart grid) systems.
Suman Sourav, Binbin Chen
2023-07-29T08:09:52Z
http://arxiv.org/abs/2307.15926v1
# Exposing Hidden Attackers in Industrial Control Systems using Micro-distortions

###### Abstract

For industrial control systems (ICS), many existing defense solutions focus on detecting attacks only when they make the system behave anomalously. Instead, in this work, we study how to detect attackers who are still in their hiding phase. Specifically, we consider an off-path false-data-injection attacker who makes the original sensor's readings unavailable and then impersonates that sensor by sending out legitimate-looking fake readings, so that she can stay hidden in the system for a prolonged period of time (e.g., to gain more information or to launch the actual devastating attack at a specific time). To expose such hidden attackers, our approach relies on continuous injection of "micro distortion" to the original sensor's readings, either through digital or physical means. We keep the distortions strictly within a small magnitude (e.g., \(0.5\%\) of the possible operating value range) to ensure that they do not affect the normal functioning of the ICS. Micro-distortions are generated based on secret key(s) shared only between the targeted sensor and the defender. For digitally-inserted micro-distortions, we propose and discuss the pros and cons of a two-layer least-significant-bit-based detection algorithm. Alternatively, when the micro-distortions are added physically, a main design challenge is to ensure the introduced micro-distortions do not get overwhelmed by the fluctuation of actual readings and can still provide accurate detection capability. Towards that, we propose a simple yet effective _Filtered-\(\Delta\)-Mean-Difference_ algorithm that can expose the hidden attackers in a highly accurate and fast manner. We demonstrate the effectiveness and versatility of our defense by using real-world sensor reading traces from different industrial control (including smart grid) systems.

## I Introduction

Given the central role of Industrial Control Systems (ICS) in different critical infrastructures like smart grids [2], water treatment systems [3], and nuclear power plants [4], the issue of ICS security has become increasingly important. High-profile attacks like Stuxnet [5] and the Ukraine power grid blackouts [6] have shown that adversaries can wreak havoc by compromising ICS devices and manipulating their readings or behavior. In many cases, attackers who have already gained some foothold in the system choose to stay hidden for a prolonged period of time, so as to launch the attack at a coordinated time (e.g., to maximize the attack's impact) or to launch the "frog boiling attack" [7, 8, 9] without triggering detection mechanisms.

Instead of reacting to the launch of an actual attack only when the attack causes the ICS to deviate from the expected system behavior, we aim to expose hidden attackers proactively before they do any damage. Specifically, we consider an attacker who wants to inject malicious sensor data into an ICS; however, she cannot put herself directly on-path (i.e., gain direct control of the original sender or launch a man-in-the-middle attack via ARP spoofing). We call such an attacker an off-path attacker, and her attack consists of two logical steps: (1) to make the original sensor unavailable (e.g., by crashing the sensor's firmware [10] or through other forms of denial-of-service attack against the sensor [11]), and (2) to impersonate the sensor and inject crafted data [12].
One such end-to-end attack was demonstrated by researchers from IOActive at the Black Hat USA 2017 conference against a nuclear plant, following a similar attack sequence [13, 14]. We provide further discussion and justification of our threat model in Section III. The attacker wants to remain hidden, so if there are any intrusion detection mechanisms in the ICS, the attacker will use her best knowledge about both the system's operational behavior and the intrusion detection rules to carefully craft the fake sensor readings, making them look normal and indistinguishable from the real sensor's readings.

While there have been significant advances in securing ICS against attacks on sensor readings (e.g., [15, 16, 17]), the implementation of many of these solutions requires major upgrading of the existing ICS, e.g., by introducing authentication schemes at both the sensors and their corresponding receivers, which can significantly amplify the cost of such upgrading. Other works [18, 19, 20] are based on assumptions that may not hold for advanced and persistent attackers, e.g., assuming the attackers do not know about the system's operational behavior or cannot observe some unique features of the sensors before launching the attack. In this work, we seek to design a practical solution for fast and accurate attacker detection in legacy ICS. The solution should be based on assumptions that even advanced attackers cannot easily bypass, while making minimal changes to a legacy ICS. Furthermore, the solution should have a minimal impact on the functioning of all legacy devices in the ICS.

**Injecting micro-distortion based on a secret sequence:** The cornerstone of our defense strategy is to continuously introduce a very small distortion (which we will call "micro-distortion" hereafter) to the readings of the sensor that we want to protect. This micro-distortion needs to be of a very small magnitude, such that the normal functioning of the ICS remains unaffected. In other words, these distortions should be tolerable by the devices that rely on the sensor reading, and the system should behave almost identically both with and without the distortions. To use the presence of such distortion to authenticate the sensor, we generate the distortion based on a secret that is shared only between the sensor and a defender. The secret contains a sequence of binary values 0 and 1 (i.e., a one-time pad), one value to be used for each time instance. There are two potential ways to introduce micro-distortion to the reading of a sensor in an ICS:

* **Digitally:** To upgrade the sensor's firmware or add a bump-in-the-wire device between the sensor and the ICS network, so as to introduce the micro-distortion into the original sensor reading. While this requires some change to the sensors, it does not affect the other parts of the ICS (e.g., controllers), which would require more overhead to upgrade.
* **Physically:** To deploy a micro-actuator that will physically add the micro changes to the underlying system, so that the original sensor will pick them up in its reading.

In either case, the secret sequence shared between a sensor and the defender forms the basis for the defender to distinguish between the real sensor and a fake one. Table I compares the digital and physical addition of distortions and summarizes their advantages over existing solutions.

**Digital introduction of micro-distortion:** By digital means, specific bits of the sensor readings can be directly manipulated to introduce the micro-distortion.
Therefore, a natural choice is to set the least significant bit (LSB) based on the shared secret key to keep the distortions minimal. Such digital tweaking of the LSB is extremely effective for attacker detection (i.e., the error rate decreases exponentially with the number of readings). However, because of the potential drawbacks described in Table I, it is important to study the physical addition of micro-distortion too.

**Physical introduction of micro-distortion:** In this case, one cannot directly set a bit of a sensor reading; instead, the distortion is added on top of the natural variation of the sensor readings. Hence, one key challenge for effective detection is that, by design, the magnitude of the micro-distortion is much lower than that of the sensor's actual readings. As a result, it can be easily overwhelmed by the latter. For example, if the actual readings are drawn uniformly randomly and independently from the whole possible range of values, and when the micro-distortion \(\epsilon\) equals \(0.5\%\) of the possible range, it will require more than 80,000 samples to reduce both the false positive (FP) and false negative (FN) rates below \(1\%\). If the sensor reading is sent every minute, such an approach requires an unacceptable detection delay of around \(2\) months. To overcome this, we leverage the observation that sensor readings in many ICS (and power grids in particular) often change gradually during a significant fraction of time (i.e., consecutive measurements differ only slightly). Based on this observation, we devise an effective _Filtered-\(\Delta\)-Mean-Difference_ algorithm that is based on statistical gauges calculated over the consecutive changes of the sensor readings, instead of the raw sensor reading sequence directly.

We demonstrate the effectiveness of our defense using real-world sensor reading traces from different types of ICS -- two smart grid systems, one Secure Water Treatment (SWaT) testbed, and a couple of synthetic datasets. For the smart grid systems, our experiments confirm that our detection algorithm under physical distortion can detect hidden attackers in a highly accurate (with false positive and false negative rates at \(1\%\) or even lower) and fast (i.e., using fewer than 100 samples) manner, achieving a more than 100-fold gain in detection delay compared to the baseline detection approach. For the SWaT system, similar false positive and false negative rates of \(1\%\) or lower can be achieved using only 50 samples.

**Summary of enhancements over the conference version and main contributions.** This paper extends a preliminary version of our work published in a conference [1], where we only considered the physical addition of micro-distortions to the sensor readings. The main additions in this paper include: (1) We propose new methods for the digital addition of micro-distortion. (2) We analyze the pros and cons of the different proposed techniques, especially between digital and physical insertion of micro-distortion. (3) We provide more thorough evaluations and analysis based on new datasets, including the SWaT dataset and a couple of synthetic datasets. These new results demonstrate the versatility and effectiveness of the proposed detection algorithms and provide theoretical insights regarding the limitations of and intuition behind the different detection algorithms. In particular, we establish the requirement of a filtration step in the detection algorithm for ICS where there can be large instantaneous changes.
Overall, this paper is significantly enhanced from our conference version, providing a new and more comprehensive proposal of different approaches, as well as more detailed analysis and experimental studies of the proposed approaches.

As summarized in Table I, our micro-distortion-based solutions offer two promising candidates for an ICS defender to choose from. Whether digital or physical distortion is more suitable depends on the exact ICS setting, but overall both offer a more effective, less costly, easier-to-deploy, and harder-to-bypass alternative to existing solutions, making them suitable for legacy ICS systems. In particular, the secret key needs to be shared only between the sensor and the defender. Other components (e.g., controllers) can use the sensor's readings directly, without needing to filter out the injected distortions. This not only reduces the chance of secret leakage but also eases deployability, as none of these receiving components require upgrading.

\begin{table}
\begin{tabular}{|p{56.9pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline Distortion & Advantages over other schemes (both variants) & Additional advantages & Potential drawbacks \\ \hline Digital & 1) Smaller attack surface: secret only shared between the protected sensor and the defender. 2) Easy to deploy: no change/upgrade required for legacy receivers (e.g., controllers). & Extremely fast and accurate detection: error rate reduced exponentially with the number of readings. & 1) The digital distorter needs to be connected to the network, hence is an easier cyber attack target compared to a physical distorter. 3) More costly to introduce the digital distorter. \\ \hline Physical & & Physical distorter can be better isolated from attackers. & \\ \hline \end{tabular}
\end{table}
TABLE I: The comparison of digital and physical distortions, as well as other schemes.

## II Related Work

Attack detection in ICS, as opposed to traditional fault detection, is more challenging, as the adversary here is usually persistent, intelligent, and stealthy. She can make use of knowledge of the system to remain undetected [21]. While attacks that manipulate the controller logic in ICS can be detected using software attestation [22] and deception technology [23], the existing solutions to counter sensor reading manipulations often face challenges when dealing with hidden attackers. For example, traditional bad-data detection techniques, such as the largest residue test [24], may fail to detect an intelligent attacker who can change the state estimation of the system by introducing errors that fall within the range space of the observation matrix. Similarly, approaches such as [25] cannot work when redundant sensors that measure the same physical phenomenon are all compromised.

Numerous detection techniques for false data injection attacks (FDIA) in power grids (e.g., [26, 27, 28, 29]) rely on power grid state estimation and model the system as a stochastic linear/non-linear system that follows strict mathematical modeling. In such cases, an attack is called stealthy if it is able to fool the system operator without being detected by a residue-based detector. To remain stealthy and avoid detection, the attacker needs to create false measurement data satisfying the constraints of state estimation.
If the attacker controls a sufficient number of sensors to create a self-consistent FDIA, or if she can mislead the control center state estimator into perceiving a wrong operating state, the attacker would be able to bypass the existing detection mechanisms and cause large damage [30, 31]. In comparison, the novelty of our proposed approach is that it is independent of the specific state of a system; rather, it relies on micro-distortions based on a shared secret key for fast and accurate detection of hidden attackers.

Other works like [32, 33, 34] use computationally heavy and resource-consuming data-driven algorithms and machine learning (ML) models to detect false data. These techniques are orthogonal to our work in two ways. Firstly, our work relies on a shared secret key to create the micro-distortions, and a direct comparison against ML techniques that do not consider any shared secret key would not be fair. Secondly, most of these techniques are computationally heavy and require costly resources and hardware; hence, these works go against our primary design goal of being low-cost.

Several watermarking-based authentication mechanisms, where an actuator superimposes a random signal (known as the watermark) on the control-policy-specified input while checking for an appropriate response from the sensors, were studied in [35, 16]. There, the physical watermarking is added to the control output and studied specifically in the context of replay attacks, where an attacker just replays previously observed measurements of a system. In [17], the watermarking scheme is extended to false data injection attacks, where adversaries have the power to substitute real measurements with generated stealthy signals; it was further improved in [36, 37]. Though similar in concept to a shared key, most of these solutions focus on designing watermarked control inputs to detect counterfeit sensor outputs. Moreover, these solutions are specific to linear dynamical systems described by time-invariant parameters and rely on accurate modeling of the system states. In contrast, our work adds micro-distortions directly at the sensors, which can then be leveraged to detect hidden attackers.

In [20, 38], the authors propose to authenticate sensors and detect data integrity attacks in CPSs by using a sensor's hardware characteristics along with the physics of a process to create unique fingerprints for each sensor. Essentially, they create a noise fingerprint based on a set of time-domain and frequency-domain features that are extracted from the sensor and process noise. The main drawback of using such in-situ fingerprints is that they can be learnt and potentially recreated by the attacker after she makes sufficient observations. In comparison, in our work, we propose to artificially introduce micro-distortions based on a shared secret key between the distortion source and the detector (note that the original senders and receivers of those sensor readings do not need to be changed). The use of a secret key removes the aforementioned drawback. Furthermore, as shown in [20], the false positive (FP) and false negative (FN) rates that approach can achieve are around \(5\%\), which may not be acceptable for settings where any false positive or negative incurs a high cost to deal with. In comparison, even for a small observation window, the FP and FN rates of our approach are close to \(0\%\).
Another relevant research area that we draw inspiration from is the line of work (see Chapter 6 of [39] and references therein) that deals with the technique of _probing_. In [39], probing is defined as the broad technique of perturbing a power system to enhance its monitoring capabilities; rather than just passively collecting measurements, perturbations are introduced into various grid components in order to actively create opportunities for gathering more knowledge about the power system. It is shown that introducing a perturbing (analog) signal can be useful for different tasks, including fault location identification in a power line, topology and phase identification of the underlying power grid, and state and parameter estimation of power systems. Similarly, we also use small perturbations (or micro-distortions, in our terms). However, our micro-distortion is based on a secret key, and the goal is to detect the presence of hidden attackers.

Different approaches based on cryptographic primitives have also been proposed to address this problem. For example, homomorphic-encryption-based solutions were proposed in [40, 41, 42] and public-key encryption systems in [43, 44]. They utilize computationally heavy encryption/decryption algorithms, which not only increase delay but also require high upgradation costs for legacy systems. Solutions like [45, 46] rely on the installation of additional specialized equipment, which can increase the upgradation costs significantly.

## III Threat Model

As shown in Figure 1, we consider a hidden attacker that aims to (eventually) inject false sensor data into the ICS in a stealthy manner, by first making the targeted sensor (or, more specifically, the field device that sends out the sensor readings) unavailable and then impersonating the sensor by injecting crafted sensor data into the ICS. An attacker can make a sensor unavailable in several different ways. Once the sensor is offline or crashes, any communication to and from the sensor becomes impossible. One common option is to crash a sensor by exploiting flaws in its software. These flaws are usually more easily discoverable by attackers than flaws allowing an attacker to take complete control of a sensor, which is a strictly stronger requirement than just crashing the device. In fact, a search of programmable logic controller (PLC) related vulnerabilities in the Common Vulnerabilities and Exposures (CVE) database shows that in the year 2021, around 80% of newly reported vulnerabilities could cause denial of service/crashing of the PLC, while only around 20% of vulnerabilities could potentially allow full control of the PLC. For example, see vulnerabilities like CVE-2021-22789 to CVE-2021-22792, etc. As a proof of concept, we exploited the recent CVE-2022-32137 [10] vulnerability of the CODESYS V2 runtime and successfully crashed a PLC used in an energy research testbed. The CODESYS V2 runtime attacked here is widely used by many leading PLC manufacturers. We were not able to take full control of that PLC using the same vulnerability, though.
There have also been several studies where embedded devices were crashed or made unavailable by purposely introducing faults through high-voltage or electromagnetic fault injection [47, 48], by explicit physical means (e.g., by simply disconnecting their power or network cable, or, more dramatically, in the case when a substation was attacked by gunmen) [49, 50, 51], or by denial-of-service attacks via networks (e.g., jamming the transmission [13, 14] and the very recent Brokenwire attack against the Combined Charging System [52]). There have also been several studies on impersonating a sensor to inject false sensor data (e.g., [53, 12, 54]). Such false data injection can be done in several ways, such as sending false data into exposed network ports (similar to Kaminsky's attack [55, 56]), through wireless signal injection [13, 14], through application-layer injection via exposed web or database APIs, or through session hijacking [57, 58].

Note that our threat model does not deal with an on-path man-in-the-middle (MitM) attacker who can directly see and modify legitimate readings. Such attacks are more common today; however, many of them (e.g., through ARP spoofing) can be easily detected or prevented by mainstream network security solutions, e.g., network intrusion detection systems or dynamic ARP inspection by Ethernet switches. Also, as discussed earlier, fully controlling a sensor is a more difficult task for an attacker than making the sensor unavailable. Hence, in our threat model we assume that the attacker cannot gain direct control of the original sensor, nor can she be on the path of the real communication to launch a MitM attack, and we focus on detecting such off-path false-data-injection attackers during their hiding phase. In particular, a concrete end-to-end attack that combines these two steps was demonstrated by researchers from IOActive at the Black Hat USA 2017 conference against a nuclear plant [13, 14]. There, the demonstrated attack blocks the real data from the sensors through a denial-of-service attack and sends spoofed data to a nuclear plant's monitoring system through wireless links. The researchers demonstrated the attacks using the same model of devices as used in a real-world nuclear plant, and they showed that such attacks can potentially lead to severe catastrophic consequences. Also, in vehicular networks, the ultrasonic distance sensors of autonomous vehicles and drones are often attacked by a similar combination of jamming and spoofing [59, 60].

If the attacker does nothing after making a sensor unavailable (i.e., does not send any fake measurements), the absence of the sensor report will be easily detected as a deviation from the expected behavior of a sensor and can trigger an anomaly alarm. One may consider the case where an attacker makes a sensor unavailable only when she is ready to launch the actual attack campaign (i.e., only at the planned time when she actually wants to cause damage). This would minimize the period during which the attacker needs to impersonate the sensor and hide from the detection mechanism proposed in this work. However, there are important reasons why the attacker needs to crash a sensor early. One practical reason is that the best time for an attacker to attack a sensor is during a system maintenance/upgrading period, when more ad-hoc traffic is ongoing or when the attacker has easier access to the system (either physically or remotely).
On the other hand, the best time for launching an actual attack campaign to maximize its impact could be during peak hours or other critical moments (e.g., during the holiday season or at critical junctures in a war). Some of the attacks that make the original sensors unavailable can only be done when the attacker has physical access to the system (e.g., by physically damaging the sensor, as in [51]), and that time might not be ideal for launching an attack. Also, when the attacker wants to launch a large-scale attack, she may need to prepare for a prolonged period of time.

Another important reason for the attacker to crash the sensor earlier than the actual attack campaign is that, prior to launching the attack, the attacker may not know whether she will be successful or not. For example, an attacker might try to exploit some known vulnerability without being sure whether the targeted device has the latest security patch. In such a scenario, the attacker would need to make an attempt in order to confirm whether she can exploit such a vulnerability. This likely cannot be done at the last minute of the actual attack campaign. Once the sensor crashes, the attacker would not be able to bring it back; hence, to evade detection, she needs to impersonate that sensor by sending out fake measurements.

Last but not least, for many ICS, especially those equipped with anomaly detection capabilities, the attacker needs to use the so-called "frog boiling attack" technique [7, 8, 9] during her attack campaign, where the attacker tries to gradually poison the system by sending false measurements that are small deviations from the real ones. By gradually doing this, the attacker steers the system to a state of her liking without triggering alarms. These attacks need to inject a large amount of fake data during a "hidden" phase, instead of directly sending fake data with a large deviation, as the latter is more likely to trigger anomaly detection. This is another reason for an attack to require a prolonged "hiding" period, during which our defense mechanism will be useful for providing early detection.

To make the attacker as strong as possible, we assume the attacker can observe any sensor for a significantly long duration before taking control of it. As such, the attacker is assumed to have all historical data of the system, from which she can gain complete knowledge about the physical system. However, the attacker is unaware of the shared secret between a sensor and the defender. Additionally, for the scope of this work, while trying to stay hidden, an adversary can only modify the output of the compromised sensor and does not change control signals.

Fig. 1: Our threat model over a typical ICS. The blue line represents the connection between the devices; the dashed red line represents the attack path, where the remote attacker (or an attacker with temporary local access) crashes Sensor 1 and then impersonates it (e.g., via a compromised device or a rogue device brought in by the attacker). The red line shows the traffic from the impersonated sensor to the other connected devices through the Ethernet switch. In the absence of an attack, a similar traffic flow exists between Sensor 1 and the other devices.

## IV Why existing solutions are not desirable

Before presenting our solutions, we first discuss some straightforward approaches and explain why they are not desirable.
**Using a standard encryption or authentication scheme.** One straightforward solution would be to use some secret key to encrypt the sensor's messages, or to add a message authentication code to the sensor's readings. Overall, such techniques inherently require all affected devices in the ICS to be upgraded to accommodate the introduced changes, which might be rather costly. Also, if the sensor's reading is consumed by multiple devices in a broadcast or multicast group, sharing a single key exposes it to a large attack surface: any compromised member in the group can impersonate that sensor. Using asymmetric keys can mitigate this risk, but incurs higher computation overhead on both the sensor and the other devices. In contrast, in our design the secret key is shared only between the distortion source and the defender. Our design does not require changes to the original sender and receiver devices and also presents a smaller attack surface.

**Simpler ways for the sensor to show possession of the shared secret.** We see that introducing standard encryption and authentication schemes goes against our design goal. However, there are still simpler ways to let the sensors use the key to authenticate themselves to the defender. One approach is to just send the secret key \(k_{i}\) out of band to the defender. Another approach is to let the sensor double its sending rate and always send readings in pairs, where the first value in the pair is the original reading \(d_{i}\) and the following one is the distorted reading \(d^{\prime}_{i}\). There are a few problems with these simpler approaches for a sensor to show the defender that it possesses the shared secret: (a) this cannot be achieved if the distortion is introduced through physical means, as the sensor will still send at its original rate and cannot send out separate message streams containing the secret; (b) even when the distortion is introduced digitally, the additional secret stream from the sensor or the doubling of the sending rate may cause unexpected effects on legacy devices that depend on the sensor readings. If the sending of the keys is made totally independent of the sending of the sensor readings (hence reducing the chance of affecting the legacy devices), it may be possible for an attacker to crash the software process that transmits the sensor readings without crashing the process that transmits the one-time secret.

## V Detection Using Digital Micro-Distortion

When digital manipulation of the sensor data through a secured device is possible, hidden attackers can be detected by simply updating the least significant bit (LSB) of a sensor reading using the secret key. Here, for the sensor reading at time slot \(i\), the digital distorter simply rewrites the sensor reading's LSB to the \(i^{th}\) bit of the secret key (a minimal sketch is given below). For a perfect one-time pad, where the attacker cannot guess the current bit of the secret key even when she is aware of all the previous bits, this mechanism can lead to fast and accurate attack detection. In practice, however, given that the one-time pad may be imperfect, a potential drawback of this approach is that the secret key can now be observed directly by an attacker (i.e., she just needs to extract the LSB of the sensor readings).
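As referenced above, the following is a minimal illustrative sketch of the basic LSB scheme, assuming integer-encoded sensor readings and a pre-shared bit stream; it sketches the idea rather than the paper's implementation.

```python
def distort_lsb(readings, key_bits):
    """Distorter side: overwrite each reading's LSB with the key bit."""
    return [(r & ~1) | k for r, k in zip(readings, key_bits)]

def verify_lsb(readings, key_bits):
    """Defender side: the stream is authentic only if every LSB matches
    the shared key. An attacker guessing bits at random survives t
    readings only with probability 2**(-t)."""
    return all((r & 1) == k for r, k in zip(readings, key_bits))

# Example with four integer readings and shared key bits 1, 0, 1, 1.
distorted = distort_lsb([1024, 1025, 2047, 512], [1, 0, 1, 1])
assert verify_lsb(distorted, [1, 0, 1, 1])
```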
In the case of a weak one-time pad (e.g., one generated from non-truly-random data or with a weak generation algorithm, or when pad segments are re-used), the attacker can use this historical data to break the system's security (see [61] for such known-plaintext attacks). To mitigate such risk, we propose a two-layer security mechanism that uses three independent streams of secret keys (\(sk1\), \(sk2\), and \(sk3\)). Here, the role of the first secret key (\(sk1\)) is essentially to randomly mix the second and third secret keys (\(sk2\) and \(sk3\)) in a way that makes it more difficult for any attacker relying on pattern determination to predict any of the keys. The secret key at the first stage (\(sk1\)) is used to dictate the timing of the use of the other secret keys. That is, in the second stage, for each instance either \(sk2\) or \(sk3\) is used, as determined by \(sk1\). For example, the sensor reading's LSB is digitally updated to the \(i^{th}\) bit of \(sk2\) only at the instances when the \(i^{th}\) bit of \(sk1\) is \(1\) (w.l.o.g.). Similarly, at the instances when the \(i^{th}\) bit of \(sk1\) is \(0\), the sensor reading's LSB is updated to the \(i^{th}\) bit of \(sk3\) (see the sketch below). This two-layer defense provides different sources of entropy, leading to a non-linear combination of the secret keys, thereby increasing the difficulty of deciphering them. Even when the LSB of the actual sensor reading is known to the attacker and the attacker can see the associated distorted outgoing sensor readings, it would be difficult for the attacker to recover any of the three secret key streams. Figures 2 (a) and (b) show examples of the digital insertion of distortions for the simple and the two-layered scheme, respectively.

For this case, we assume reliable data transmission between the distortion source and the detector. (Detecting and correcting transmission errors is standard practice in the communication networks used in ICS.) Hence, we can minimize the magnitude of the micro-distortion by manipulating only the LSB of the sensor reading. This can lead to extremely fast attack detection, with an error rate that reduces exponentially with the number of trials. To successfully evade detection, the attacker has to correctly predict the secret key bit used in each time slot. So, the probability of a false negative outcome over \(t\) time slots is given by

\[\Pr[FN]=\left(\Pr[\text{Correct guess by the attacker}]\right)^{t}\]

Note that \(\Pr[FN]\) quickly goes to \(0\) as \(t\) increases, as long as \(\Pr[\text{Correct guess by the attacker}]\) is strictly less than \(1\). Specifically, for our designed two-layer scheme, the probability of the attacker correctly guessing any bit is close to \(0.5\). For as few as \(20\) sensor readings, the probability that an attacker can successfully guess all \(20\) bits correctly is lower than \(10^{-6}\).

Though detection can be fast, as discussed earlier, the addition of a digital distorter can create an additional attack surface, given the bump-in-the-wire setting. It might make it easier for a remote attacker to get hold of the secret key by compromising the digital distortion device, which would then invalidate the basic premise of the detection mechanism. In comparison, a physical distortion device can be kept isolated (i.e., with an air gap) from the communication network, making it almost impossible for a remote attacker to gain control of it.
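To make the two-layer selection concrete, the following minimal sketch builds on the LSB helper above; the short key streams are placeholders for illustration, not a recommended way of generating keys.

```python
def two_layer_lsb(readings, sk1, sk2, sk3):
    """The LSB is taken from sk2 when the sk1 bit is 1, else from sk3."""
    return [(r & ~1) | (b2 if b1 == 1 else b3)
            for r, b1, b2, b3 in zip(readings, sk1, sk2, sk3)]

# Example with three short, independent key streams.
readings = [1024, 1025, 2047, 512]
sk1, sk2, sk3 = [1, 0, 1, 0], [0, 1, 1, 0], [1, 1, 0, 0]
distorted = two_layer_lsb(readings, sk1, sk2, sk3)
# Expected LSBs: sk2[0], sk3[1], sk2[2], sk3[3] -> 0, 1, 1, 0
assert [r & 1 for r in distorted] == [0, 1, 1, 0]
```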
Another limitation of such a digital distortion device is its cost, which consists of the cost of adding the bump-in-the-wire hardware or of updating the sensor's firmware. More importantly, such changes require vendor support, may void the warranty, and may require a costly re-certification. This expense can often go against our design goal of a low-cost solution. As such, in the following sections we discuss the physical addition of micro-distortion.

## VI Detection Using Physical Micro-distortion

### _Adding Micro Distortions Physically_

Different from the digital means of introducing micro-distortion, in this approach we deploy a micro-actuator that physically introduces micro changes into the underlying system, so that the original sensor picks them up in its reading. For example, if the sensor measures the power consumption of a system, we introduce a small programmable load that can be turned on or off based on the secret key. Specifically, for each sensor, we determine the micro-distortion value \(\epsilon\) based on the magnitude of the sensor readings (e.g., \(<0.5\%\) of the sensor's operating range). For each sensor reading \(d_{i}\), if the corresponding key value \(k_{i}\) is \(1\), then the reading is distorted by adding \(\epsilon\); otherwise it is distorted by subtracting \(\epsilon\). Note that adding distortions in this form maintains a zero-mean distortion in the long run. In other words, given \(k_{i}\) (the secret key bit for time slot \(i\)), the sensor's original reading \(d_{i}\) is changed to \(d_{i}^{\prime}=d_{i}+(2k_{i}-1)\epsilon\), i.e., an increment or decrement by \(\epsilon\) depending on the value of \(k_{i}\).

Note that our design for physically adding micro-distortion is inherently robust to analog noise arising from device imperfections, measurement errors, or environmental factors, since our technique takes multiple samples to detect the presence or absence of the micro-distortions. One can view the micro-distortion as the signal from an authenticated sensor that our detector seeks to decode; the noise (be it natural variation in the original system, measurement errors, or device imperfections) can then be handled by looking at multiple samples to achieve high decoding accuracy despite varying signal-to-noise ratios. In highly noisy systems, it will take a larger number of readings, and hence a longer time, to detect a potential attack. In fact, we have run our proposed method on noisy real-world data, and our experimental results demonstrate the robustness of our approach under such real-world analog noise.
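The distortion rule itself is one line; the sketch below spells it out, with the \(\epsilon\) value in the example shown purely for illustration (in a deployment it would be chosen per sensor, as discussed next):

```python
def physical_distortion(reading: float, key_bit: int, eps: float) -> float:
    """d'_i = d_i + (2*k_i - 1) * eps: add eps when the key bit is 1,
    subtract eps when it is 0 (zero-mean distortion in the long run)."""
    return reading + (2 * key_bit - 1) * eps

# Example with an illustrative eps of 0.5% of a 0-100 operating range:
assert physical_distortion(50.0, 1, 0.5) == 50.5
assert physical_distortion(50.0, 0, 0.5) == 49.5
```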
### _Magnitude of the Micro-distortion_

The magnitude of the micro-distortion (\(\epsilon\)) can be selected based on the following two key considerations. First and most importantly, it should not cause a noticeable disturbance to the system and should be tolerable by it. Hence, the chosen value should not be greater than the natural variation of the targeted metric or the natural noise level -- e.g., as in our experiments, the power supply and demand in a power system can naturally change at a level above 0.5%, making our chosen magnitudes of 0.5% and 0.25% in the conducted experiments small enough not to cause any disturbance. In comparison, for measurements like the frequency of the power system, which needs to be much more stable, the allowed distortion magnitude would need to be significantly smaller.

Under this hard constraint, the second consideration is the trade-off between the attack detection delay and the magnitude of the distortion introduced into the system. One can consider the micro-distortion as the signal from an authenticated sensor that our detector seeks to decode, and the natural variation in the original system as noise that makes it harder for the detector to decode that signal. Hence, a higher magnitude of micro-distortion increases the signal-to-noise ratio and makes it faster for the defender to determine the presence or absence of potential hidden attackers. A lower magnitude, in contrast, can lead to a longer detection delay, as the defender needs to collect more samples to determine whether the micro-distortion is present; however, a lower-magnitude distortion is more easily tolerated by the system. For the digital micro-distortion case, since we assume reliable data transmission between the distortion source and the detector, we minimize the magnitude of the micro-distortion by manipulating only the LSB of the sensor reading.

### _A Straw Man Detection Scheme: Simple Mean Difference_

If one considers a perfectly stable, noiseless system where the sensor readings remain constant over the detection period, it is easy to see that our approach can detect a compromised sensor extremely fast, by simply letting the defender check for the distortion pattern based on the shared secret. An attacker without knowledge of the secret cannot replicate the pattern and hence cannot bypass the detection. In fact, if the attacker just makes a random guess, in each slot it has a \(50\%\) chance of guessing wrongly and therefore being detected. As discussed in the digital distortion section, the probability that the attacker remains undetected after \(20\) slots is as low as \(0.5^{20}<10^{-6}\). However, under normal functioning of an ICS, the sensor readings are subject to state changes in the ICS along with possible noise. With the possible sensor readings spanning a range much wider (e.g., 200x) than the micro-distortion, the micro-distortion becomes a negligible signal that easily gets overwhelmed by the magnitude of the actual sensor readings.

**Simple Mean Difference.** By the "law of large numbers" [62], the mean value of a large number of observations of a random variable approaches the variable's expected value as more observations are taken. Consider the set of readings distorted as \(d^{\prime}_{i}=d_{i}+\epsilon\) as set \(S_{1}\) (i.e., with corresponding secret key \(k_{i}=1\)), and the remaining readings as set \(S_{0}\) (i.e., with \(k_{i}=0\) and readings distorted as \(d^{\prime}_{i}=d_{i}-\epsilon\)). Since we select the two sets from all the readings over a time window based on the random secret, we can view the original readings \(d_{i}\) in both sets as observations drawn from the same distribution. Hence, with a larger number of observations, the difference between the mean of all distorted readings in set \(S_{1}\) and the mean of all distorted readings in set \(S_{0}\) should approach \(2\epsilon\). If the attacker does not know the secret key, it cannot introduce any statistical difference between these two sets. As such, the difference between the mean values of \(S_{1}\) and \(S_{0}\) should approach 0 when under attack. This shows that detection can eventually be achieved if the detector can examine a sufficiently large number of samples.
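A minimal sketch of this straw man detector is given below; the tolerance parameter `tol`, which decides how far the gap may drift from \(2\epsilon\) before an alarm is raised, is an illustrative addition not specified above:

```python
import numpy as np

def simple_mean_difference_alarm(distorted, key_bits, eps, tol):
    """Straw man detector: mean(S1) - mean(S0) should approach 2*eps
    without an attack and 0 under attack; alarm when the gap strays
    more than tol from 2*eps."""
    d = np.asarray(distorted, dtype=float)
    k = np.asarray(key_bits, dtype=int)
    gap = d[k == 1].mean() - d[k == 0].mean()
    return abs(gap - 2 * eps) > tol  # True => raise an alarm
```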
The issue with simply relying on the law of large numbers, however, is that the detection can be rather slow. For example, if the actual readings are drawn uniformly and independently at random from the full range of possible values, and the micro-distortion \(\epsilon\) equals \(0.5\%\) of that range, our evaluation shows that more than 80,000 samples are required to reduce both the false positive and false negative rates below \(1\%\). Similarly, more than 140,000 samples are needed to further reduce them below \(0.1\%\). Even if the sensor reading is sent every second, this translates to almost one whole day for a \(1\%\) false positive/negative rate, and nearly \(40\) hours of readings for \(0.1\%\). If the sensor reading is sent only every minute, the required detection delay further inflates to around \(2\) and \(3.5\) months, respectively.

### _Our Design: \(\Delta\) based Mean Difference_

Fortunately, in real-world ICS the sensor readings exhibit statistical properties that allow much faster detection of such attacks. Specifically, in many ICS (and the power grid in particular), the physical properties being sensed (e.g., the amount of power being generated or consumed) tend to change gradually (i.e., with small differences between consecutive time slots) for a significant fraction of the time. Based on this, we propose a detection algorithm where, instead of comparing the difference of the means of the sensor readings in sets \(S_{1}\) and \(S_{0}\) (as defined earlier), we look at the change in the sensor reading between consecutive time slots, which we refer to as \(\Delta\).

\(\Delta\)**-sequence creation.** Given the distorted sensor reading sequence \(d^{\prime}_{1},\ldots,d^{\prime}_{n}\) and the secret key sequence \(k_{1},\ldots,k_{n}\), we define the \(\Delta\) sequence as \(\Delta_{1},\ldots,\Delta_{n-1}\) and the \(\Delta^{\prime}\) sequence as \(\Delta^{\prime}_{1},\ldots,\Delta^{\prime}_{n-1}\), where

\[\Delta_{i}=d_{i+1}-d_{i}\text{ and }\Delta^{\prime}_{i}=d^{\prime}_{i+1}-d^{\prime}_{i}\]

While \(\Delta_{i}\) gives the difference between the original sensor readings in consecutive time slots, \(\Delta^{\prime}_{i}\) gives the difference between the distorted sensor readings in consecutive time slots.

**Data partitioning step.** We define the set \(S_{01}\) as the collection of all moments \(i\) such that \(k_{i}=0\) and \(k_{i+1}=1\), and the set \(S_{10}\) as the collection of all moments \(i\) such that \(k_{i}=1\) and \(k_{i+1}=0\). We define the sets \(S_{00}\) and \(S_{11}\) similarly. It can be seen that for an \(i\) belonging to different sets, the relationship between the corresponding \(\Delta_{i}\) and \(\Delta^{\prime}_{i}\) differs. Specifically, for \(i\in S_{00}\) or \(i\in S_{11}\), since the same distortion is applied to both \(d^{\prime}_{i}\) and \(d^{\prime}_{i+1}\), we have \(\Delta^{\prime}_{i}=\Delta_{i}\). On the other hand, \(\Delta^{\prime}_{i}=\Delta_{i}+2\epsilon\) for \(i\in S_{01}\), while \(\Delta^{\prime}_{i}=\Delta_{i}-2\epsilon\) for \(i\in S_{10}\). See Figure 2(c) for an example of the physical addition of micro-distortion.

Fig. 2: (a) Shows the addition of digital distortion through updating the LSB using a single secret key. (b) Shows the two-layer defense using three secret keys. In each time slot, key 1 determines whether key 2 or key 3 is used for updating the LSB. (c) Shows the addition of physical distortion while illustrating the different notations used.
Since each random key bit \(k_{i}\) is drawn with equal probability from \(\{0,1\}\) in an independent manner, it is easy to see that a moment \(i\) (with regard to the \(\Delta\) and \(\Delta^{\prime}\) sequences) has an equal probability of falling into each of the four sets \(S_{01}\), \(S_{10}\), \(S_{00}\), and \(S_{11}\). As the values in the \(\Delta\) sequence do not depend on the secret key sequence, we have:

\[\mathbb{E}[avg(\Delta_{i}|i\in S_{01})]=\mathbb{E}[avg(\Delta_{i}|i\in S_{10})]=\mathbb{E}[avg(\Delta_{i}|i\in S_{00})]=\mathbb{E}[avg(\Delta_{i}|i\in S_{11})]\]

Consider the statistic \(x=avg(\Delta_{i}^{\prime}|i\in S_{01})-avg(\Delta_{i}^{\prime}|i\in S_{10})\). We have:

\[\begin{split}\mathbb{E}[x]&=\mathbb{E}[avg(\Delta_{i}^{\prime}|i\in S_{01})-avg(\Delta_{i}^{\prime}|i\in S_{10})]\\ &=\mathbb{E}[avg((\Delta_{i}+2\epsilon)|i\in S_{01})-avg((\Delta_{i}-2\epsilon)|i\in S_{10})]\\ &=4\epsilon+\mathbb{E}[avg(\Delta_{i}|i\in S_{01})]-\mathbb{E}[avg(\Delta_{i}|i\in S_{10})]\\ &=4\epsilon\end{split}\]

An example of this separation can be seen in Figures 3(b) and 4(b). In Figures 3(a) and 4(a), we see that in the original data the probability density functions of the \(\Delta_{i}\) values of the four sets are rather similar (given that the sets are chosen randomly). In Figure 3(b), once the micro-distortions are added, a clear separation of the \(\Delta_{i}\) mean values can be seen between the sets \(S_{01}\) and \(S_{10}\). However, in Figure 4(b), we see that the number of samples considered (\(n\)) was insufficient to produce a clear separation of the \(\Delta_{i}\) mean values, which could possibly lead to incorrect attack detection. In this case, considering data over a longer duration, i.e., a larger sample size, would increase accuracy. Lastly, in Figures 3(c) and 4(c), we see that if an attacker were to randomly inject micro-distortions without knowing the sets \(S_{01}\) and \(S_{10}\), the attacker would not be able to introduce any significant statistical difference between \(S_{01}\) and \(S_{10}\), leading to detection.

**Detection Condition.** As shown, if we denote by \(x\) the difference between the mean of all \(\Delta_{i}^{\prime}\) in set \(S_{01}\) and that in set \(S_{10}\), the expected value of \(x\) should be \(4\epsilon\) when there is no attack. In comparison, in the case of an attack, since we assume the attacker does not know the values of \(k_{i}\), it cannot introduce any significant statistical difference between the sets \(S_{01}\) and \(S_{10}\). As a result, the expected value of \(x\) should approach \(0\) in this case. In other words, we can use \(x\) to differentiate between the attack and non-attack cases. In particular, we calculate the \(\Delta\) mean differences \(\mu_{01}=avg(\Delta_{i}^{\prime}|i\in S_{01})\) and \(\mu_{10}=avg(\Delta_{i}^{\prime}|i\in S_{10})\). Thereafter, we check whether \(\mu_{01}-\mu_{10}\in[2\epsilon,6\epsilon]\) (as the difference concentrates near the expected value of \(4\epsilon\)) and raise an alarm to flag an attack if the condition is not satisfied.
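Putting the pieces together, a minimal NumPy sketch of the \(\Delta\) mean difference detector reads as follows (variable names are illustrative):

```python
import numpy as np

def delta_mean_difference_alarm(distorted, key_bits, eps):
    """Compute x = mean(Delta' over S01) - mean(Delta' over S10).
    Without an attack E[x] = 4*eps; under attack x approaches 0.
    Alarm when x falls outside the interval [2*eps, 6*eps]."""
    d = np.asarray(distorted, dtype=float)
    k = np.asarray(key_bits, dtype=int)
    delta = np.diff(d)                    # Delta'_i = d'_{i+1} - d'_i
    s01 = (k[:-1] == 0) & (k[1:] == 1)    # moments with k_i=0, k_{i+1}=1
    s10 = (k[:-1] == 1) & (k[1:] == 0)    # moments with k_i=1, k_{i+1}=0
    x = delta[s01].mean() - delta[s10].mean()
    return not (2 * eps <= x <= 6 * eps)  # True => raise an alarm
```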
While this seems similar to using the expected difference of the mean values of \(S_{0}\) and \(S_{1}\) (the procedure we refer to as 'Simple Mean Difference'), the benefit of working with the \(\Delta^{\prime}\) sequence is that, for many ICS, the absolute values of the elements in the \(\Delta^{\prime}\) sequence can be significantly lower than those in the distorted reading sequence (i.e., the \(d^{\prime}\) sequence). This is because in many ICS (including many power grid systems), while a particular physical measurement can have readings spanning a large range (e.g., the peak power generation or consumption in an energy system can be \(100\times\) or even more bigger than in its non-peak period), it tends to change gradually most of the time. This makes the distributions of values in the \(\Delta\) and \(\Delta^{\prime}\) sequences concentrate more heavily towards smaller values, i.e., the variance of the corresponding sequence is much smaller. The smaller variance, in turn, makes it possible to use a small number of samples to approach a given (small) neighborhood of the expected value with high probability.

**Filtration Step.** Though the above steps already provide a complete detection algorithm, in many systems, although most changes across two consecutive time slots have low magnitude, there can be some high-magnitude changes from time to time. For example, when a household turns on a heater, its power demand can increase dramatically, even though most of the time the power demand of the household only changes slightly. Filtering out the high \(\Delta_{i}^{\prime}\) values (considering absolute values of \(\Delta_{i}^{\prime}\)) that cause high variance in the \(\Delta^{\prime}\) sequence can therefore result in significant improvements. For systems with intermittent large abrupt changes (like power-grid systems), even though this filtration reduces the sample size, it brings down the variance significantly as well, making attack detection much easier while also improving the accuracy. This becomes evident from our experiments on the smart grid meter dataset and the SWaT dataset (see Sections VII-A and VII-C). The SWaT dataset considers the change in water levels in different tanks and therefore does not exhibit any large abrupt changes. In contrast, the smart grid meter dataset measures the power usage of a particular household, where abrupt and large changes occur due to the switching on (or off) of appliances. We observe that the filtration step significantly improves the detection time and accuracy for the smart grid dataset, while making almost no difference for the SWaT dataset. Consequently, the readings with \(|\Delta_{i}^{\prime}|\) greater than a particular threshold \(\Delta_{th}\) are removed from consideration, where \(|\cdot|\) denotes the absolute value. \(\Delta_{th}\) is based on the past (correct) operation of the sensor and is determined such that the number of \(\Delta_{i}^{\prime}\) readings removed is not too large for the time duration considered, i.e., \(n\). However, an attacker might take advantage of such a filtration procedure by introducing high noise into the faked sensor outputs, which would likely result in most of the noisy data being filtered out, thus delaying the detection of the attacker. Such attackers can be countered by choosing another threshold \(m\) (based on \(\Delta_{th}\) and the system under consideration) which ensures that, when there is no attack, sufficiently many \(\Delta_{i}^{\prime}\) values always get through the filtration step. See Algorithm 1 for the pseudocode of the 'filtered \(\Delta\) mean difference algorithm'; a sketch is also given below. In the absence of the filtration step, we refer to the algorithm as the '\(\Delta\) mean difference algorithm'.
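The sketch below adds the filtration step to the detector above; note that treating \(m\) as a per-set minimum count of surviving samples is our interpretation of the threshold described in the text:

```python
import numpy as np

def filtered_delta_mean_difference_alarm(distorted, key_bits, eps,
                                         delta_th, m):
    """Filtered variant: drop |Delta'_i| > delta_th before testing the
    detection condition. Too few surviving samples (< m per set) also
    raises an alarm, to counter attackers who inject high noise so that
    most of their faked data gets filtered out."""
    d = np.asarray(distorted, dtype=float)
    k = np.asarray(key_bits, dtype=int)
    delta = np.diff(d)
    keep = np.abs(delta) <= delta_th            # filtration step
    s01 = (k[:-1] == 0) & (k[1:] == 1) & keep
    s10 = (k[:-1] == 1) & (k[1:] == 0) & keep
    if s01.sum() < m or s10.sum() < m:          # suspiciously few survivors
        return True                             # raise an alarm
    x = delta[s01].mean() - delta[s10].mean()
    return not (2 * eps <= x <= 6 * eps)
```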
As a by-product benefit of our solution, observe that the one-time pad is never sent in its clear form in any communication between a sensor and the defender. To any wiretapper or observer, there is always the possibility of mistaking a natural change in the sensor reading for a micro-distortion. So, even when the algorithm generating the one-time pad is weak, our use of a one-time pad hidden within the naturally occurring changes makes it harder for the attacker to exploit that weakness.

## VII Experiments and Evaluations

In this section, we run our proposed detection algorithms on several real-world datasets and evaluate their performance. In each case, we observe the sensor readings for the duration it takes to obtain \(n\) samples, after which the algorithm outputs the presence/absence of an attacker.

### _Attack Detection in Smart Grid Meter Dataset_

We make use of the publicly available "Rainforest Automation Energy Dataset for Smart Grid Meter Data" [63], which contains 1 Hz data from two residential houses over a period of 72 days for House 1 and 59 days for House 2. Overall, this dataset contains over 11.3 million power readings. We consider the total power usage data for House 1, given by the sensor 'mains'. The data statistics and the statistics of the associated \(\Delta\)-sequence are given in Table II; see also Figure 5 for typical fluctuations in the power consumption of the house and the generated \(\Delta\)-sequence. Based on the data, we choose \(\epsilon\) to be 40 W (\(\approx 0.25\%\) of the maximum power usage). From our experiments (see Table III), we observe that even for a very small sample size of 30, the filtered \(\Delta\) mean difference algorithm gives a very good FP/FN rate of less than \(1\%\), i.e., with around 99.9 percent accuracy the algorithm can detect the presence of an attacker (if any) in less than 30 seconds. This case also highlights the advantage of the filtration step, which essentially removes the sudden changes in the power drawn when some appliance is turned on (or off). Without such a filtration step, the FP/FN rate is around \(4.2\%\) for a similar duration of 30 seconds. Notice from Table II that the maximum \(\Delta\) is quite large; filtering these high \(\Delta\) values reduces the variance of the considered \(\Delta^{\prime}\) sequence, allowing faster and more accurate detection. In contrast, the simple mean difference algorithm required around 20,000 samples to achieve a similar accuracy level of less than \(1\%\).

### _Attack Detection in Solar Power Dataset_

The solar power (PV) dataset is collected from a solar plant deployment in Singapore. The data contains minute-wise values of the power generated at 7 stations over the period 1/05/2020 to 17/06/2020, with the power generated given in kW for each station along with the aggregate power output of the solar plant. We consider the sensor giving the aggregate power output of the solar plant for our experiments.
The data statistics and the statistics of the associated \(\Delta\)-sequence are given in Table IV; see also Figure 6 for typical fluctuations in the solar data and the generated \(\Delta\)-sequence. At night, the solar power output sensor readings are all zeros, and the presence of an attacker can be detected extremely fast. Hence, our evaluation only considers the more challenging daytime values (from \(8\)am to \(6\)pm), i.e., 10 hours or 600 (per-minute) points, with \(\epsilon\) chosen as \(7.5\) kW (or 0.5% of the maximum output of around \(1.5\) MW of this solar plant). As such, for each algorithm we evaluate at most 600 samples, after which we assume that the attacker can be detected. Even during daytime, we observe from Table V that we can detect an attacker with relatively high accuracy (around 99.8\(\%\)) using 60 samples (which translates to 1 hour), for the \(\Delta\) mean based algorithms both with and without the filtration step. In this case, as observed from Table IV, the maximum \(\Delta\) is not that high; therefore, though the filtration step helps, it does not have as significant an impact on the accuracy as we saw in Section VII-A. In addition, we see that the simple mean difference algorithm performs quite poorly over these short intervals.

### _Attack Detection in SWaT Dataset_

The Secure Water Treatment (SWaT) dataset is a publicly available dataset obtained from a state-of-the-art water treatment testbed [20]. SWaT imitates the complete process of a real water treatment plant and produces 5 gallons/minute of filtered water. Overall, it contains second-wise data from 18 sensors along with other status and state indicators.

\begin{table} \begin{tabular}{|c|c|c|} \hline & Solar Power (kW) & \(\Delta\)-sequence (kW) \\ \hline Maximum Value & 1576.54 & 1132.67 \\ \hline Minimum Value & 0.0 & -925.18 \\ \hline Average Value & 394.118 & \(9.30\times 10^{-9}\) \\ \hline Median Value & 276.36 & -0.519 \\ \hline \end{tabular} \end{table} TABLE IV: Table showing the power output data statistics for the solar plant and its associated \(\Delta\)-sequence.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline & \multicolumn{3}{c|}{Simple Mean Difference} & \multicolumn{3}{c|}{\(\Delta\) Mean Difference} & \multicolumn{3}{c|}{Filtered \(\Delta\) Mean Difference} \\ \cline{2-10} n & FP (\%) & FN (\%) EDA & FN (\%) RDA & FP (\%) & FN (\%) EDA & FN (\%) RDA & FP (\%) & FN (\%) EDA & FN (\%) RDA \\ \hline 30 & 69.82 & 12.30 & 12.99 & 0.0 & 0.0 & 1.79 & 0.0 & 0.0 & 1.29 \\ \hline 60 & 71.76 & 13.39 & 13.59 & 0.0 & 0.0 & 0.20 & 0.0 & 0.0 & 0.12 \\ \hline 90 & 74.15 & 13.78 & 13.84 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 \\ \hline 120 & 75.83 & 14.74 & 14.75 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 \\ \hline 600 & 76.43 & 18.36 & 18.47 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 \\ \hline \end{tabular} \end{table} TABLE V: Table showing the false positive and false negative rates (expressed as percentages) for different \(n\) for the simple, \(\Delta\) and filtered \(\Delta\) mean difference algorithms over 10,000 trials for the solar dataset with \(\epsilon=7.5\) kW. For filtration, \(\Delta_{th}=30\) kW.

Fig. 5: Figure depicting typical fluctuations in the smart grid dataset. The variation in \(\Delta\) is plotted in \(\log\) scale.

Fig. 6: The solar power output and \(\Delta_{i}\) fluctuation over a day.
We consider the measurements of sensor 'LIT101' (level indication transmitter), which records the second-wise water level of the raw water tank in millimeters on 29-05-2020, as the sensor under attack. The data statistics and the statistics of the associated \(\Delta\)-sequence are given in Table VI; see also Figure 7 for typical fluctuations in the SWaT data and the generated \(\Delta\)-sequence. Based on the data, we choose \(\epsilon\) to be 4.08 mm (\(\approx 0.5\%\) of the maximum water level). From our experiments (see Table VII), we observe that even for a very small sample size of 50, the filtered \(\Delta\) mean difference algorithm gives a very good FP/FN rate of less than \(1\%\), i.e., with around 99.9 percent accuracy the algorithm can detect the presence of an attacker (if any) in less than 50 seconds. We see that the FP/FN rates with and without the filtration step are almost identical in all cases, indicating that the filtration does not help here. This is because most of the second-wise changes seen here are gradual (as reflected in the maximum and minimum \(\Delta\) values in Table VI).

### _Attack Detection in Synthetic Dataset_

In this section, we analyze the performance of our algorithm in the worst- and best-case scenarios. The best-case scenario is, of course, when the system is totally steady, i.e., all sensor readings are identical or change uniformly by increasing or decreasing at a constant rate. In the uniformly changing case, the \(\Delta\) between consecutive readings is constant. In these cases, as discussed earlier, an attacker can be detected extremely fast. In contrast, the worst-case scenario for our detection algorithm is when the sensor readings are uniformly distributed over a large range; the \(\Delta^{\prime}\) sequence generated in this case then spreads over an even wider range. We create a synthetic dataset where each element is drawn uniformly and independently at random from the range (0-100), and the chosen value of \(\epsilon\) is \(0.5\). Targeting a \(1\%\) or \(0.1\%\) FP/FN rate, we observe that the simple mean difference algorithm requires around \(83{,}000\) and \(150{,}000\) samples, respectively; that is, about 1 day and almost 2 days for per-second samples. In comparison, achieving similar results with the \(\Delta\) mean difference algorithm requires around \(195{,}000\) and \(295{,}000\) samples, respectively, which translates to slightly more than 2 days and 3 days for per-second samples. This confirms that for uniformly random, independently generated samples, using the original samples is better than using the \(\Delta\)-sequence. Intuitively, taking differences yields a random variable in the range of \(-100\) to \(100\), while the original readings lie in the range of \(0\) to \(100\); this expansion of the range explains the larger variance and, in turn, the longer detection time. See Table VIII for the obtained FP/FN rates for different numbers of observations (\(n\)).

## VIII Conclusion

In this paper, we presented a micro-distortion based detection algorithm that can help detect hidden attackers. It provides fast and accurate detection and requires few changes to legacy systems. The key challenge is to integrate the distortions based on the shared secret key into the sensor readings themselves while keeping the distortions small enough that they do not affect the overall system performance.
Our approach ensures that none of the original receivers of the sensor reading needs to know the secret key of a sensor to make use of the sensor's reading. Not only is zero upgrading or change needed for those receivers, this also reduces the risk of leaking the secret key due to those legacy receivers being compromised. Our evaluation shows that physically-induced micro-distortion works well for systems that are relatively steady (i.e., not highly volatile most of the time). However, for systems that undergo frequent rapid fluctuations, physically-induced distortion would require a longer duration to detect an attacker. In those settings, a digitally-induced distortion approach may be more suitable.

\begin{table} \begin{tabular}{|c|c|c|} \hline & Water Level (mm) & \(\Delta\)-sequence (mm) \\ \hline Maximum Value & 816.968 & 3.022 \\ \hline Minimum Value & 491.484 & -2.080 \\ \hline Average Value & 698.777 & 0.029 \\ \hline Median Value & 727.118 & 0.0 \\ \hline \end{tabular} \end{table} TABLE VI: Table showing the water level data statistics for the SWaT testbed and its associated \(\Delta\)-sequence.

Fig. 7: The water level and \(\Delta_{i}\) fluctuation of the raw water tank in the SWaT testbed.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline & \multicolumn{3}{c|}{Simple Mean Difference} & \multicolumn{3}{c|}{\(\Delta\) Mean Difference} \\ \cline{2-7} n & FP (\%) & FN (\%) EDA & FN (\%) RDA & FP (\%) & FN (\%) EDA & FN (\%) RDA \\ \hline 50000 & 5.87 & 2.11 & 2.02 & 17.22 & 8.51 & 8.54 \\ \hline 100000 & 0.85 & 0.63 & 0.51 & 5.5 & 2.32 & 2.40 \\ \hline 150000 & 0.09 & 0.04 & 0.10 & 1.8 & 1.07 & 1.06 \\ \hline 200000 & 0.01 & 0.00 & 0.01 & 0.90 & 0.45 & 0.45 \\ \hline 250000 & 0.00 & 0.00 & 0.00 & 0.22 & 0.20 & 0.23 \\ \hline 300000 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\ \hline \end{tabular} \end{table} TABLE VIII: Table showing the false positive and false negative rates (expressed as percentages) for different \(n\) for the simple and \(\Delta\) mean difference algorithms over 10,000 trials for a worst-case synthetic dataset with \(\epsilon=0.5\).
## Acknowledgments This research is supported in part by the National Research Foundation, Singapore, under its National Satellite of Excellence Programme "Design Science and Technology for Secure Critical Infrastructure" (Award Number: NSoE DeST-SCI2019-0008) and "Design Science and Technology for Secure Critical Infrastructure: Phase II" and in part by the SUTD Start-up Research Grant (SRG Award No: SRG ISTD 2020 157). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of National Research Foundation, Singapore.
2302.01155
Deep COVID-19 Forecasting for Multiple States with Data Augmentation
In this work, we propose a deep learning approach to forecasting state-level COVID-19 trends of weekly cumulative death in the United States (US) and incident cases in Germany. This approach includes a transformer model, an ensemble method, and a data augmentation technique for time series. We arrange the inputs of the transformer in such a way that predictions for different states can attend to the trends of the others. To overcome the issue of scarcity of training data for this COVID-19 pandemic, we have developed a novel data augmentation technique to generate useful data for training. More importantly, the generated data can also be used for model validation. As such, it has a two-fold advantage: 1) more actual observations can be used for training, and 2) the model can be validated on data which has distribution closer to the expected situation. Our model has achieved some of the best state-level results on the COVID-19 Forecast Hub for the US and for Germany.
Chung Yan Fong, Dit-Yan Yeung
2023-02-02T15:16:13Z
http://arxiv.org/abs/2302.01155v1
# Deep COVID-19 Forecasting for Multiple States with Data Augmentation ###### Abstract In this work, we propose a deep learning approach to forecasting state-level COVID-19 trends of weekly cumulative death in the United States (US) and incident cases in Germany. This approach includes a transformer model, an ensemble method, and a data augmentation technique for time series. We arrange the inputs of the transformer in such a way that predictions for different states can attend to the trends of the others. To overcome the issue of scarcity of training data for this COVID-19 pandemic, we have developed a novel data augmentation technique to generate useful data for training. More importantly, the generated data can also be used for model validation. As such, it has a two-fold advantage: 1) more actual observations can be used for training, and 2) the model can be validated on data which has distribution closer to the expected situation. Our model has achieved some of the best state-level results on the COVID-19 Forecast Hub for the US [1] and for Germany [2]. COVID-19 forecasting, time series prediction, transformer, data augmentation.

## I Introduction

The COVID-19 pandemic has caused tremendous damage around the world. Accurately predicting its trends, including the numbers of infected cases and deaths, remains one of the most essential tools for the public to understand the scale of the problem and plan appropriate actions. Researchers from all over the world have proposed a wide range of models to achieve this important task. Compartmental models are one of the standard tools for modeling common epidemic dynamics. They have also been widely used as the foundation of COVID-19 forecasting [3, 4, 5, 6, 7]. Based on presumed epidemiological mechanisms, these models describe how the populations of different groups, such as the infectious and the healthy, affect each other and evolve in number. During this pandemic, researchers have also proposed other model variants to better capture particular situations encountered in COVID-19, such as untested and unreported cases [3], non-pharmaceutical interventions (NPIs) [4], and population flows [8]. Compared to the above approach, deep learning models, such as [9, 10, 11, 12], offer an alternative that does not require prior knowledge of epidemiology. They are free from predefined modeling assumptions, which are usually at best designers' approximations of reality and hence may not be very accurate. On the contrary, with a data-driven perspective, deep learning models are trained to capture the underlying interactions of the real situation directly from the collected data. They can easily handle various input types [11] and are able to discover more sophisticated and multivariate correlations [12]. However, the major drawback of this approach is that it generally requires a large amount of data for model training. Since COVID-19 has a relatively short history, with only a few hundred days of observations recorded so far, training a deep learning model for generalized COVID-19 forecasting remains challenging. For instance, models that learn only from past observations in Germany may find it difficult to infer the exceedingly high number of cases in November 2021. Having to reserve part of the already scarce training data for validation is yet another issue.
Aiming for models that generalize better to the situations to come, it is a common choice to reserve the data of the most recent days for validation [12]. Consequently, only earlier data can be used for training. However, holding out the most recent part of the data, arguably the most valuable part, from the training set will likely hinder the performance of the models. Randomly sampling training examples for validation is not a good choice either, for COVID-19 has evolved and hence past observations may not be so indicative of what is going to happen in the near future [13]. In summary, these common choices of validation splits may not be favorable in this situation and have room for improvement. To overcome the issues incurred by limited observed data, we propose a data augmentation (DA) method which can help generate synthetic data for either training or validation purposes, in particular for COVID-19 death and case forecasting. With an extra supply of validation data, the models can use the most recent data for training. By using a transformer based deep neural network and a simple ensemble technique, our models, HKUST-DNN and HKUST-DNN_DA, have achieved some of the best state-level forecasts among the models submitted to the COVID-19 Forecast Hub [1] and the German COVID-19 Forecast Hub [2] (Hubs). The main contributions of our work are as follows: * We propose and examine a deep learning framework for the forecasting of state-level COVID-19 deaths and cases. * Our data augmentation method can help generate synthetic data of COVID-19 deaths and cases. Such data can be applied effectively for either training or validation purposes. * Results of these methods have been submitted to the COVID-19 Forecast Hub for the US [1] and Germany [2]. We have achieved some of the best prediction results among many models proposed by different groups of researchers.

## II Related Works

### _COVID-19 Forecast_

#### Compartmental mechanistic models

Compartmental mechanistic models are common tools for epidemic modeling [3, 6, 14, 15, 16, 17]. In this domain, the whole population is divided into compartments that represent people in different conditions, such as the Susceptible, Infected and Removed in a standard SIR model [14]. By describing the dynamics between different groups, usually using a set of differential equations, these models can project into the future how the populations of individual groups will evolve and hence infer the quantities of interest. During this pandemic, researchers have proposed variants of compartmental models to take various dynamics into consideration, such as untested and unreported cases [3], reporting delays [15], government interventions [4], and flows of travelers [16]. Some of them proposed methods that can find model parameters in more effective ways [6].

#### Deep learning models

Another popular approach to modeling the trend of this pandemic is statistical learning using deep neural networks [9, 10, 11, 12], which generally formulate the task as a time series prediction problem. One advantage of this approach is that the models can easily handle various input types, including those with complex underlying interactions that are difficult to describe in mechanistic models [11]. They are also able to make use of sophisticated correlations, such as similarities across cities and time in the work of [12]. These models can generally be trained in an end-to-end manner without presuming any prior knowledge in epidemiology.
However, with such a short history and limited data available for training, the models tend to overfit to previous observations. Model training has to rely on various regularization schemes, and the models used remain relatively shallow [9, 10, 11].

#### Hybrid and ensemble methods

Some researchers have also proposed to incorporate both techniques in their models. While [18] used a neural network model to predict the residual of a mechanistic model, [7] used a deep structure to predict the parameters of a complex compartmental model. The COVID-19 Forecast Hub [1] and the German and Polish COVID-19 Forecast Hub [2] have been established as platforms to collect and integrate forecasts from multiple groups of researchers using different methods. They combine their submissions into a few ensemble models [19, 20] to increase their stability and effectively reduce variance.

### _Time Series Prediction with Deep Learning_

#### Recurrent neural networks (RNN)

Time series prediction is not a new application for deep neural networks. Amazon has developed and launched a general framework based on auto-regressive RNNs, named DeepAR [21], for general probabilistic time series prediction.

#### Graph neural networks (GNN)

Another stream of work uses variants of Graph Convolutional Recurrent Networks (GCRN) [22] to better capture spatial-temporal correlations between multiple time series. Many proposed methods have been tested on traffic datasets, predicting the traffic load at any spot of the road network. While some methods work with a predefined graph [23, 24], other researchers have proposed methods which learn the underlying graph simultaneously during the training process [25, 26]. Although state-level pandemic forecasting is similar in nature to a traffic forecasting task, finding a graph which describes the inter-series spatial-temporal relationship is not as straightforward as in a road network, and not many of these methods have been tested on COVID-19 prediction. [18] incorporated a graph of travel data in Diffusion Convolutional Recurrent Neural Networks (DCRNN) [23] to predict the residuals of a compartmental model.

#### Transformers

Since transformers [27] were proposed in 2017, demonstrating their effectiveness for many natural language processing (NLP) tasks, their adoption has subsequently been extended to other domains, such as the Vision Transformer (ViT) [28] in computer vision. The multi-head self-attention mechanism of transformers allows every position of the input to relate effectively to other positions. Researchers have also applied transformers to time series prediction [29, 30, 31], aiming to leverage the powerful self-attention mechanism to capture long-term dependencies between the past and the future. While they focused more on optimizing the model for longer sequence prediction tasks, such as reducing the quadratic complexity of self-attention, our work instead applies transformers in a different way, such that multiple time series can interact with each other for relatively short-term prediction tasks.

### _Data Augmentation for Time Series Data_

Data augmentation (DA) plays a crucial role in helping neural network models learn to be robust against legitimate variations. It is a standard practice in computer vision applications [32]. For NLP tasks, researchers have also proposed a number of DA methods to augment the training data [33]. There also exist some DA techniques for general time series data [34].
Basic approaches include time-domain or frequency-domain transformations, while advanced techniques include modeling data statistically in an embedding space or using generative adversarial networks (GANs). However, these techniques are not suitable for the COVID-19 forecast task, because none of the proposed operations can reflect its epidemic nature. The amount of training data currently available is very limited. In this work, we propose to use compartmental mechanistic epidemic models to guide the augmentation of training data. To the best of our knowledge, we are the first to synthesize data using this technique.

## III Forecasting Methodology

### _Problem Definition_

State-level COVID-19 forecasting can be formulated as a correlated time series prediction problem. We are given a set of input time series \(\mathbf{X}_{i}\in\mathbb{R}^{L\times d_{in}}\), where \(i\) is the index of one of \(N\) states or locations, \(L\) denotes the number of time steps in each time series, and \(d_{in}\) refers to the dimensionality of each observed event in time. The goal is to predict the corresponding output time series at the \(N\) locations, \(\mathbf{Y}_{i}\in\mathbb{R}^{T\times d_{out}}\), where \(T\) is the prediction horizon (number of time steps ahead) and \(d_{out}\) is the output dimensionality. One time step corresponds to one day. To solve this problem, we seek to train a neural network which corresponds to a function \(f_{\mathbf{\theta}}\) with a set of learnable parameters \(\mathbf{\theta}\) such that the predictions made by the neural network minimize an appropriate loss function \(\mathcal{L}\), i.e., \(\min_{\mathbf{\theta}}\mathcal{L}\Big{(}\big{[}\mathbf{Y}_{1},\mathbf{Y}_{2},\ldots,\mathbf{Y}_{N}\big{]},f_{\mathbf{\theta}}\big{(}\big{[}\mathbf{X}_{1},\mathbf{X}_{2},\ldots,\mathbf{X}_{N}\big{]}\big{)}\Big{)}\). In the context of COVID-19 forecasting, the input features can include the numbers of confirmed cases, deaths, tests and vaccinations. Other factors related to disease transmission in these locations can also be considered, such as social distancing policies and human mobility measures. Empirically, we find that the four basic features of cumulative cases (\(cum\_case\)), cumulative deaths (\(cum\_death\)), weekly-incident cases (\(inc\_case\)) and weekly-incident deaths (\(inc\_death\)) generalize the best. The results we present in this paper use only these four basic features as inputs (\(d_{in}=4\)). Such a reduced set of input features has also allowed us to develop a simple yet effective data augmentation technique to fuel the training process with more synthetic examples. Regarding the outputs, our models are configured to predict multiple quantiles of a single target variable, i.e., the 23 quantile levels required by the Hubs [1, 2].1 This effectively allows our deep learning models to mimic probabilistic forecasts, which may be essential in some downstream applications. Compared with more complicated probabilistic modeling techniques, quantile prediction offers a much more efficient alternative [35]. As such, a 23-dimensional output is generated for each predicted target (\(d_{out}=23\)).
Footnote 1: [https://github.com/reichlab/covid19-forecast-hub/blob/master/data-processed/README.md#Data-formatting](https://github.com/reichlab/covid19-forecast-hub/blob/master/data-processed/README.md#Data-formatting)

### _Model Design_

We first apply a gated recurrent unit (GRU) layer to encode the \(L\)-day inputs of each location \(\mathbf{X}_{i}\) into a latent representation. Since these latent representations encode information of the individual time series separately, we refer to them as _individually encoded representations_, \(\mathbf{h}_{i}\). An interaction layer is then introduced after the first encoding step to allow information to flow between time series: each time series can interact with every other time series to refine its own representation. The result of this interaction is a set of _collaboratively encoded representations_, \(\mathbf{c}_{i}\). Figure 1 depicts the design of our model architecture, which is conceptually simple. The following equations summarize the main computational steps:

\[\mathbf{h}_{i}=\text{GRU}\big{(}\mathbf{X}_{i}\big{)} \tag{1}\] \[\big{[}\mathbf{c}_{1},\mathbf{c}_{2},\ldots,\mathbf{c}_{N}\big{]}=\text{Transformer}\Big{(}\big{[}\mathbf{h}_{1},\mathbf{h}_{2},\ldots,\mathbf{h}_{N}\big{]}\Big{)}\] (2) \[\hat{\mathbf{Y}}_{i}=\text{Predictor}\big{(}\mathbf{c}_{i}\big{)} \tag{3}\]

We use a transformer encoder stack [27] as the medium of this interaction, for its multi-head self-attention mechanism allows information to be queried from one part of the inputs and added to another part in the process. Since our inputs \(\mathbf{h}_{i}\), which encode information of different geographical locations, do not have any specific sequential order, the original positional encoding layer of the transformer is simply omitted in our application. After a hyperparameter search, we decided to use a network configuration with a two-layer transformer encoder stack of 64-dimensional input/output and feed-forward layers, with 8 attention heads.
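For readers who prefer code, the following is a minimal PyTorch sketch of this architecture. It reflects the configuration above (two encoder layers, 64-dimensional model and feed-forward sizes, 8 heads), but simplifies the predictor to a single linear layer, deferring the dual-residual scheme described next; the class name is illustrative:

```python
import torch
import torch.nn as nn

class MultiStateForecaster(nn.Module):
    """Sketch: a GRU encodes each location's series individually; a
    transformer encoder (without positional encoding, since locations
    are unordered) lets the N representations attend to each other."""
    def __init__(self, d_in=4, d_model=64, n_heads=8, n_layers=2, d_out=23):
        super().__init__()
        self.gru = nn.GRU(d_in, d_model, batch_first=True)
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=64,
                                           batch_first=True)
        self.interaction = nn.TransformerEncoder(layer, n_layers)
        self.predictor = nn.Linear(d_model, d_out)

    def forward(self, x):                   # x: (batch, N, L, d_in)
        b, n, l, f = x.shape
        _, h = self.gru(x.reshape(b * n, l, f))
        h = h[-1].reshape(b, n, -1)         # individually encoded h_i
        c = self.interaction(h)             # collaboratively encoded c_i
        return self.predictor(c)            # (batch, N, d_out)
```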
### _Dual-residual Predictors_

Instead of directly targeting the raw output values, our models are trained to predict residuals which can be used to approximate the target values. We propose a dual-residual estimation approach, which combines two kinds of residuals, namely the \(\mathbf{R}^{1}\) and \(\mathbf{R}^{2}\) residuals, for better prediction. Equation (4) depicts the predictor's formulation.

\[\big{[}\mathbf{R}^{1}_{i},\mathbf{R}^{2}_{i}\big{]}=\text{Linear}(\mathbf{c}_{i})\] \[\hat{\mathbf{Y}}_{i}=(1-\alpha)(\mathbf{Last\_observed\_value}_{i}+\mathbf{R}^{1}_{i})+\alpha(\mathbf{Projection}_{i}+\mathbf{R}^{2}_{i}) \tag{4}\]

#### \(\mathbf{R}^{1}\) residuals

These are the differences between the last observed values and the target values. In some of our preliminary experiments, we found that predicting these differences is much more accurate than predicting the absolute target values. In the case of forecasting the cumulative death counts, these differences are equivalent to the incident deaths since the last observation.

#### \(\mathbf{R}^{2}\) residuals

On top of the \(\mathbf{R}^{1}\) residuals, we also incorporate the \(\mathbf{R}^{2}\) residuals, which are the differences between the target values and a linear projection extrapolated from the previous week's observations. If we see this projection as a baseline prediction, the \(\mathbf{R}^{2}\) residuals measure how much the target values deviate from this baseline. In Equation (4), \(\alpha\in[0,1]\) is a hyperparameter adjusting the weighting between the two residuals to obtain a convex combination for predicting the target values. Separate predictors are used to approximate the two residuals. Figure 2 shows an illustration of \(\mathbf{R}^{1}\) and \(\mathbf{R}^{2}\).

### _Loss Function_

It is usually desirable to obtain probabilistic predictions in epidemic forecasting [36], because they give a better sense of the uncertainty of the forecast and, more importantly, allow us to prepare for the worst. To approximate such outputs using a deterministic method, our model jointly forecasts a set of quantiles of the target variables. In both the US and German Hubs [1, 2], the official ensembles are designed to aggregate models of 23-quantile forecasts (\(Q=23\)). Similar to previous work on quantile prediction [37, 38, 39], we use a loss function which can produce multiple quantiles of the target variables, as shown in Equations (5) and (6).

\[l_{quantile}(i,t,q)=\begin{cases}\tau_{q}(y_{i,t}-\hat{y}_{i,t,q})&\text{if}\ \ \hat{y}_{i,t,q}<y_{i,t}\\ (1-\tau_{q})(\hat{y}_{i,t,q}-y_{i,t})&\text{if}\ \ \hat{y}_{i,t,q}\geq y_{i,t}\end{cases} \tag{5}\]

\[\mathcal{L}_{quantile}=\frac{1}{NTQ}\sum_{i=1}^{N}\sum_{t=1}^{T}\left(e^{t/\kappa}\sum_{q=1}^{Q}l_{quantile}(i,t,q)\right) \tag{6}\]

In Equation (5), \(y_{i,t}\) represents the target value of the \(i\)-th location at time \(t\), and \(\hat{y}_{i,t,q}\) its predicted value at the \(q\)-th quantile. While \(q\) is the quantile index, \(\tau_{q}\) denotes the \(q\)-th quantile level. For example, when \(Q=23\), the \(12\)-th quantile corresponds to the median, therefore \(\tau_{12}=0.5\). In order to put more emphasis on forecasts further into the future, we add an exponentially increasing weight on those errors; the value of \(\kappa\) is chosen such that the weight doubles every seven days.

\[\mathcal{L}_{crossing}=\frac{1}{NTQ}\sum_{i=1}^{N}\sum_{t=1}^{T}\sum_{q=2}^{Q}\text{ReLU}\big{(}\hat{y}_{i,t,q-1}-\hat{y}_{i,t,q}\big{)} \tag{7}\]

\[\mathcal{L}=\mathcal{L}_{quantile}+\lambda_{c}\mathcal{L}_{crossing} \tag{8}\]

Since individual quantiles are predicted separately, a major drawback of this approach is that some predicted values of the lower quantiles may end up being larger than those of the higher quantiles, which is known as the _quantile crossing issue_ [37, 38]. To favor the monotonic property of the quantile values, we introduce an additional term \(\mathcal{L}_{crossing}\) to regularize the output values. This term penalizes errors proportionally to the magnitude of any quantile crossing found. In our experiments, we set \(\lambda_{c}=1\). However, \(\mathcal{L}_{crossing}\) alone is not sufficient to prevent all crossed quantile predictions and guarantee monotonicity at inference time. To correct such occurrences, we also consider predictions at different quantiles as upper bounds for the quantiles below and lower bounds for the quantiles above. For instance, if the \(q\)-percentile is predicted to be \(\hat{y}\), percentiles below \(q\) should be capped at \(\hat{y}\), while percentiles above should be at least \(\hat{y}\). From this perspective, in case crossing occurs, the predicted quantiles should effectively be swapped so as to satisfy the bounds that they apply to each other.
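The training objective above can be written compactly; the sketch below is a vectorized PyTorch rendition of Equations (5)-(8), up to the normalization constants and with \(\lambda_{c}=1\) as in our experiments. Setting \(\kappa=7/\ln 2\) makes the horizon weight double every seven days:

```python
import math
import torch

def forecast_loss(y_hat, y, taus, kappa=7 / math.log(2)):
    """Pinball (quantile) loss with exponential horizon weighting plus a
    quantile-crossing penalty. Shapes: y_hat (N, T, Q), y (N, T),
    taus (Q,) holding the quantile levels tau_q in increasing order."""
    diff = y.unsqueeze(-1) - y_hat                   # y - y_hat, (N, T, Q)
    pinball = torch.maximum(taus * diff, (taus - 1) * diff)
    t = torch.arange(1, y.shape[1] + 1, dtype=y.dtype)
    weights = torch.exp(t / kappa).view(1, -1, 1)    # heavier weight later
    l_quantile = (weights * pinball).mean()
    # Penalize any lower quantile that exceeds the next higher quantile.
    l_crossing = torch.relu(y_hat[..., :-1] - y_hat[..., 1:]).mean()
    return l_quantile + l_crossing                   # lambda_c = 1
```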
### _Model Selection and Ensemble_

One of the major challenges encountered in this study lies in the overfitting problem due to the limited training data. Consequently, the performance of different randomly initialized models can have very high variance, and single models are inadequate for achieving good generalization performance. From some preliminary studies, we observed that a low validation mean absolute error (vMAE), as well as more epochs of training before early stopping (based on the rebound of the validation loss), are two good indicators for spotting the better forecasts.

\[\hat{y}_{ensemble}=\frac{\sum_{k=1}^{K}w_{k}\hat{y}_{k}}{\sum_{k=1}^{K}w_{k}},\qquad w_{k}=\frac{1}{\text{vMAE}_{k}-\min_{\ell}\text{vMAE}_{\ell}/2} \tag{9}\]

By combining the \(K\) models which rank best on both indicators, we can form an ensemble with better generalization performance. We have tried different weighting schemes to combine the outputs of the constituent models, such as the simple mean and an exponentially weighted sum. Empirically, we found that the weighted sum described in Equation (9) performs the best in general. In this scheme, the contrast between the weights is boosted by reducing the denominator by half of the minimum vMAE. Unless specified otherwise, each forecast we make combines the best 10 models (\(K=10\)) out of five randomly seeded \(\mathbf{R}^{1}\)-only (\(\alpha=0\)), five \(\mathbf{R}^{2}\)-only (\(\alpha=1\)) and five balanced dual-residual (\(\alpha=0.5\)) models.

Fig. 1: Model architecture

Fig. 2: Illustration of the dual residuals \(R^{1}\) and \(R^{2}\).

## IV Data Augmentation

### _SIRD Data Model_

We propose to use a mechanistic compartmental model to augment the actual observations with synthetic data for training and validation. We term this our _data model_. The _data model_ can synthesize plausible trends of the numbers of cases and deaths which could have happened in the past or could happen in the near future. Such data supplement the observed data in circumstances never observed before. Our _data model_ is based on standard compartmental mechanistic methods [14, 16]. It consists of four compartments, namely the susceptible or uninfected (\(S\)), infectious (\(I\)), recovered (\(R\)), and dead (\(D\)). Their interactions are described by the set of difference equations shown in Equations (10)-(13). In our setting, \(S\) represents the uninfected population. It can be approximated by the complement of the cumulative confirmed case count, i.e., \(S=1-cum\_case\), assuming a low reinfection rate [40] and ignoring unreported cases. For simplicity, \(D\) is the reported cumulative death count. Therefore, both are observed variables.

\[S_{t+1} =S_{t}-\beta S_{t}I_{t} \tag{10}\] \[I_{t+1} =I_{t}+\beta S_{t}I_{t}-\delta I_{t}-\gamma I_{t}+\omega R_{t}I_{t}\] (11) \[R_{t+1} =R_{t}+\gamma I_{t}-\omega R_{t}I_{t}\] (12) \[D_{t+1} =D_{t}+\delta I_{t} \tag{13}\]

Here \(S\), \(I\), \(R\) and \(D\) are normalized quantities in the range of 0 to 1 representing the corresponding population fractions. \(I\) represents those who are infected and at the same time infectious to others, \(R\) represents the people who have recovered from the disease but are still vulnerable to reinfection, and \(\beta\), \(\gamma\), \(\omega\) and \(\delta\) are parameters describing the rates of infection, recovery, reinfection and death, respectively.

### _Model Fitting_

We want to find how \(\beta\), \(\gamma\), \(\omega\) and \(\delta\) have evolved in the past and recently. If we can capture their distributions and sample from them, plausible instances of trends of case and death counts can be generated.
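As a reference for the data generation described below, here is a direct transcription of the difference equations (10)-(13) into code (a sketch; the function name is ours):

```python
import numpy as np

def simulate_sird(S0, I0, R0, D0, beta, gamma, delta, omega, steps):
    """Iterate the SIRD difference equations (10)-(13) for `steps` days.
    All quantities are population fractions in [0, 1]."""
    S, I, R, D = [S0], [I0], [R0], [D0]
    for _ in range(steps):
        s, i, r, d = S[-1], I[-1], R[-1], D[-1]
        new_infections = beta * s * i
        S.append(s - new_infections)                                # (10)
        I.append(i + new_infections - (delta + gamma) * i
                 + omega * r * i)                                   # (11)
        R.append(r + gamma * i - omega * r * i)                     # (12)
        D.append(d + delta * i)                                     # (13)
    return np.array(S), np.array(I), np.array(R), np.array(D)
```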
To ease the fitting process, we introduce an extra learnable parameter \(\eta\) to encode the initial conditions of \(I\) and \(R\), i.e., \(I_{t_{0}}\) and \(R_{t_{0}}\), where \(t_{0}\) denotes the first timestep of each individual fitting task. It represents the portion of infected individuals who are still infectious rather than recovered. By initializing \(\eta\) close to \(0.5\) at the beginning of every fit, the optimization processes become more stable.

\[I_{t_{0}} =(1-S_{t_{0}}-D_{t_{0}})\eta \tag{14}\] \[R_{t_{0}} =(1-S_{t_{0}}-D_{t_{0}})(1-\eta) \tag{15}\]

Our fitting processes were carried out within moving windows of lengths \(W\in\{15,16,\ldots,28\}\) days. As a result, observations were cropped locally within 2 to 4 weeks and fitted to recover their characteristics during that particular period of time. On a given day (timestep) \(t\), given a \(W\)-day history of cumulative deaths (\(D_{t-W+1..t}\)) and cumulative cases (\(\mathbf{1}-S_{t-W+1..t}\)), we solve for the parameters \(\phi(t,W)=[\hat{S}_{t_{0}},\hat{D}_{t_{0}},\beta,\gamma,\delta,\omega,\eta]\) which best describe the situation. In this case, the first timestep is \(t_{0}=t-W+1\). We designed a loss function \(\mathcal{L}_{SIRD}\), as in Equation (16), to minimize the sum of squared errors between the generated trends and the observed data. The parameters \(\beta\), \(\gamma\), \(\delta\) and \(\omega\) are regularized to be small and \(\eta\) to stay close to \(0.5\). \(\lambda_{D}=0.5\), \(\lambda_{b}=0.01\), \(\lambda_{o}=0.03\) and \(\lambda_{h}=0.001\) are the weights used in our experiments.

\[\mathcal{L}_{SIRD}=\frac{1}{W}\sum_{t^{\prime}=t_{0}}^{t_{0}+W-1}\Big{(}(\frac{\hat{S}_{t^{\prime}}-S_{t^{\prime}}}{1-S_{t^{\prime}}})^{2}+\lambda_{D}(\frac{\hat{D}_{t^{\prime}}-D_{t^{\prime}}}{D_{t^{\prime}}})^{2}\Big{)}+\lambda_{b}(\beta^{2}+\gamma^{2}+\delta^{2})+\lambda_{o}\omega^{2}+\lambda_{h}(\eta-0.5)^{2} \tag{16}\]

The optimizations were computed by the L-BFGS-B algorithm [41]. Section V-C1 will show and discuss the results of our fitted parameters. For every location (state) on every day, there are results obtained using different window sizes. This collection of fitted parameters allows us to capture a distribution of parameters, from which we sample to generate synthetic data for DA.

### _Data Generation_

We opt to approximate the fitted parameters by a multivariate Gaussian distribution \(\Phi(t)=\mathcal{N}(\mu_{\phi}(t),\mathbf{\Sigma}_{\phi}(t))\), where \(\mu_{\phi}(t)\) and \(\mathbf{\Sigma}_{\phi}(t)\) are the mean and covariance matrix of \(\phi(t,W)\) across \(W\). By sampling instances of parameters from \(\Phi(t)\) and inferring them through our _data model_, new data instances of trends of cases and deaths can be generated at any given time \(t\). _Projecting forward:_ Since our SIRD _data model_ is simply a set of difference equations, with the intrinsic parameters \([\beta,\gamma,\delta,\omega,\eta]\) learned, it is able to infer longer sequences. Presuming that the past trends continue for another \(F\) days, we can approximate the following trend by inferring \(\hat{S}_{t_{1}..t_{2}}\) and \(\hat{D}_{t_{1}..t_{2}}\), where \(t_{1}=t_{0}+F\) and \(t_{2}=t_{1}+W-1\), from the model. We will refer to this set of data as _forward_ in our later discussion.
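The sampling and forward projection can be sketched as follows, building on the `simulate_sird` sketch above (the function name and the return format are illustrative):

```python
import numpy as np

def sample_synthetic_trends(mu_phi, sigma_phi, W, F, n_samples, seed=None):
    """Draw phi ~ N(mu_phi, Sigma_phi) fitted at some time t, initialize
    I and R via Equations (14)-(15), roll the SIRD data model F days
    forward, and return W-day synthetic windows of S and D."""
    rng = np.random.default_rng(seed)
    trends = []
    for phi in rng.multivariate_normal(mu_phi, sigma_phi, size=n_samples):
        S0, D0, beta, gamma, delta, omega, eta = phi
        I0 = (1 - S0 - D0) * eta           # Equation (14)
        R0 = (1 - S0 - D0) * (1 - eta)     # Equation (15)
        S, _, _, D = simulate_sird(S0, I0, R0, D0,
                                   beta, gamma, delta, omega, F + W - 1)
        trends.append((S[F:F + W], D[F:F + W]))  # the "forward" window
    return trends
```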
### _Data Splitting Schemes_

A common choice of data split is to reserve the most recent data for validation purposes, as in Figure 3(a). This scheme assumes that the most recent data are close to the real test data. However, the major drawback of this scheme is that a valuable subset of the training data cannot be used to guide the minimization of the loss function that optimizes the model's internal parameters. In COVID-19 forecasting, this issue is particularly serious as the situation can change very quickly, so our models would surely need the most recent data for training. Another common choice is to randomly sample data for validation. Yet, for the same reason, this is also not applicable to COVID-19 forecasting tasks. With the synthetic data generated, we can use it for validation too, as shown in Figure 3(b). Its advantage is two-fold. On one hand, synthetic data with \(F>0\) should be indicative of plausible outcomes of the situations to come. On the other hand, it releases the recent observations to be used as training data, thus allowing the forecasting model to be trained on the most recent data too. Empirically, we can see that this is the better choice for sub-national Germany forecasts in our experimental results.

## V Experiments and Results

We started our experiments in June 2021 and have since submitted results of our HKUST-DNN model (trained on real data only, without DA) to the US Hub [1] and, since November 2021, results of our HKUST-DNN_DA model (trained on real data and validated by augmented data) to the German Hub [2]. Since then, based on the data collected up to every Saturday, our models are trained to predict \(cum\_death\) in the US and \(inc\_case\) in Germany for the four subsequent Saturdays, i.e., four different horizons spanning 1 to 4 weeks ahead. Instead of predicting all the targets of the two countries, our experiments have focused on one specific target per country, i.e., \(cum\_death\) for the US and \(inc\_case\) for Germany. Our analysis below will focus on a 9-month window between June 2021 and February 2022, in which our models were designed and results were closely monitored. This window covers the period when the dominating variant of COVID-19 was transitioning from Delta to early versions of Omicron. Various issues and the performance after this period are discussed later in Section VI.

### _Evaluation Metrics_

In this paper, the forecast performance of different models will mainly be evaluated using the mean absolute error (MAE) of their point estimates (if absent, the estimated medians) and the weighted interval score (WIS) [36] of their distributional forecasts in a 23-quantile format.

_Ground Truth:_ We adopt the death and case counts reported by Johns Hopkins University (JHU) [42] and Robert Koch Institut (RKI) as ground truth values for the US and Germany, respectively. The RKI data is retrieved through the German Hub [2]. Please note that the JHU dataset is updated retrospectively, so the latest values may differ from earlier versions. In fact, corrections are frequently made to the historical record of JHU's \(cum\_death\). To isolate errors induced by such corrections from the prediction errors, we opt to compare the predicted increments (since the day of the last observation) to the ground truth increments, instead of comparing the actual predicted values to the corresponding ground truths directly; the sketch below illustrates the idea.
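As a hedged illustration of this increment-based comparison (the function name and arguments are hypothetical), the error is computed on changes since the forecast date rather than on absolute cumulative values:

```python
import numpy as np

def increment_mae(pred_cum, obs_at_forecast, truth_cum, truth_at_forecast):
    # pred_cum: predicted cumulative counts; obs_at_forecast: the last
    # observation available when the forecast was made (old data vintage).
    # truth_cum / truth_at_forecast: the (possibly corrected) ground truth.
    pred_inc = np.asarray(pred_cum) - obs_at_forecast     # forecasted increment
    true_inc = np.asarray(truth_cum) - truth_at_forecast  # realized increment
    return float(np.mean(np.abs(pred_inc - true_inc)))
```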
_Missing Data:_ Every week, the US Hub collects forecasts from dozens of models on various prediction targets. Since there could be some missing forecasts (on certain dates or in some locations), we will only compare models which have contributed at least 90% of the forecasts falling in our scope of interest, i.e., on \(cum\_death\), 51 states (including Washington DC), 23 quantiles, 4 horizons (1- to 4-week), and within our 9-month window. Furthermore, to maintain a fair comparison between the selected models, which might still have slightly different missing values, our calculation of MAE and WIS will only include events in which all models have participated. If there is any missing forecast from any model, that forecast will not be included in our calculation of MAE and WIS. The same rule applies to the German forecasts.

_Mean Absolute Error (MAE):_ The performance of point estimates (or predicted medians) is simply measured by their MAE relative to the ground truth. However, since the reported state-level \(cum\_death\) in the US can possibly be updated after our forecast, we opt to compare the predicted increments to the ground truth increments, instead of the absolute values, as explained above.

_Weighted Interval Score (WIS):_ For probabilistic forecast evaluation (in 23-quantile format), we adopt the WIS scoring method proposed by [36] to compare forecast submissions to the German Hub. Unlike traditional methods such as the continuous ranked probability score (CRPS), which measures errors between continuous distributions, WIS only requires certain quantiles of such distributions to be evaluated, such as the 23-quantile format used in this work. It is also adopted by other researchers [19] to compare the results in the US Hub.
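For reference, the following is a minimal sketch of the quantile-based WIS of [36], written for a symmetric quantile set that includes the median (as in the 23-quantile format); the function name is an illustrative assumption.

```python
def wis(levels, preds, y):
    # levels: sorted quantile levels, e.g. [0.01, 0.025, ..., 0.975, 0.99];
    # preds: predicted quantiles at those levels; y: the observed value.
    q = dict(zip(levels, preds))
    alphas = sorted(2 * lv for lv in levels if lv < 0.5)  # K interval levels
    score = 0.5 * abs(y - q[0.5])                         # w0 * |y - median|
    for a in alphas:
        lo, hi = q[a / 2], q[1 - a / 2]                   # central (1-a) interval
        interval_score = (hi - lo) \
            + (2 / a) * max(lo - y, 0) \
            + (2 / a) * max(y - hi, 0)
        score += (a / 2) * interval_score                 # w_k = alpha_k / 2
    return score / (len(alphas) + 0.5)
```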
Fig. 3: Comparing data splitting schemes

### _Results in the US (without DA)_

Using our transformer-based deep learning model, we have participated in forecasting, 1 to 4 weeks ahead (4 horizons), the \(cum\_death\) in 51 states (including Washington DC) of the US since June 2021. Our submitted results are named under HKUST-DNN in the US Hub [1]. Figure 4 shows the trends of MAE of the top 15 models in the US Hub based on the criteria mentioned above. Its horizontal axis represents the dates being forecasted, from July 2021 to February 2022. The MAE of every forecasted date is an aggregated result of 4 different horizons (1 to 4 weeks ahead). In terms of the \(cum\_death\) forecast, our model HKUST-DNN has achieved a relatively low MAE compared to other models submitted to the Hub. It also has comparable performance to the Hub ensembles named under "COVIDhub...ensemble". Note that they are the combined and balanced results of all the models from various methods submitted to the Hub. The WIS of the forecasts from different models has very similar trends to the corresponding MAE; it is included here for completeness. Table I reveals the overall MAE over this period of time. They are the aggregated results from all 51 states and 4 horizons (1 to 4 weeks ahead). For each date, horizon and state, each model has a rank among others, depending on who has a lower MAE. The average of these ranks is used to sort the entries in the table in ascending order. In terms of this average rank, our model HKUST-DNN has ranked relatively high among others. Note that the top-5 models are all Hub ensembles which are aggregated results of all submitted models.

### _Results in Germany (with DA)_

We have also tested our model with DA on forecasting the sub-national \(inc\_case\) of the 16 states in Germany. Adopting the naming convention of the German Hub [2], these 16 German states are also aliased as GM01 to GM16.2

Footnote 2: [https://github.com/cfong32/covid19-forecast-hub-de/blob/master/template/state_codes_germany.csv](https://github.com/cfong32/covid19-forecast-hub-de/blob/master/template/state_codes_germany.csv)

Compared to the US, Germany has a much smaller dataset, owing to its fewer states (and a smaller population and territory). From our observation, the trends of \(case\) and \(death\) in Germany exhibit less variation, especially prior to the recent waves of Omicron variants. Therefore, in order to avoid overfitting to the small training dataset, we introduce our DA method in training the models for Germany. Empirically, we found that the models trained with our DA method perform better than those without. Based on the following divergence analysis, we will also show that our augmented data has a distribution closer to the test data (to be forecasted) than to the real training data (historical data).

#### V-C1 Augmented Data

_Fitted parameters:_ Figure 5 shows the results of the fitted parameters of our _data model_. On every day (or timestep) \(t\), the parameters are obtained by fitting our SIRD model to a recent history of \(W\in\{15,16,\ldots,28\}\) days in each of the 16 states. The parameters \(\beta\), \(\gamma\), \(\omega\) and \(\delta\) depict the rates of infection, recovery, reinfection and death, respectively, \(\eta\) is the initial portion of infectious population among the infected ones, and the ratio \(\eta:1-\eta\) represents the proposed initial ratio of \(I\) to \(R\). Since \(\eta\) is initialized to 0.5 before fitting, unless the L-BFGS-B optimization needs a much lower or higher value to explain the trends, it tends to converge to local optima around 0.5.

_Generating new data:_ Sampling from the observed or fitted distributions of \(S\), \(D\), \(\beta\), \(\gamma\), \(\omega\), \(\delta\) and \(\eta\), we obtained new parameters to generate synthetic trends for DA. This process was described in Section IV-C. Figure 6 shows samples of the best and the worst fitted 28-day trends (\(W=28\)). They respectively represent the easiest and the most difficult cases that our SIRD model has comprehended so far. These examples are shown primarily to demonstrate the quality of the fitting processes. A couple of points are worth noting. First of all, for visualization purposes, the death counts are multiplied by a factor of 20, so that their trends can be visually compared more easily. Green and red dashed lines are the inferred \(I\) and \(R\). Secondly, unlike the real observations, which are noisier, these fitted trends are smoother and lack noise. Yet we hope that they can still capture much information about the situations that are plausible to happen.

_Divergence analysis:_ To verify the suitability of our synthetic data, we conducted a divergence analysis between the original training data and our synthetic data. In particular, we want to see if the synthetic data is close to the test data, so as to enhance the predictability over the situations in the near future.

Fig. 4: Trends of forecast performance (MAE and WIS) in the US
\begin{table} \begin{tabular}{l r r r} \hline Model & Avg. rank & MAE & WIS \\ \hline COVIDhub-trained\_ensemble & 9.26 & 83.07 & 61.17 \\ \hline COVIDhub-4\_week\_ensemble & 9.41 & 84.80 & 61.72 \\ \hline COVIDhub-ensemble & 9.42 & 86.63 & 63.45 \\ \hline COVIDhub\_CDC-ensemble & 9.42 & 86.63 & 63.45 \\ \hline Kymmetrically-selected\_ensemble & 10.23 & 376.83 & 358.43 \\ \hline JHU\_CSSE-DECGLEAM & 10.74 & 102.01 & 74.77 \\ \hline HKUST-DNN (Ours) & 10.88 & 107.84 & 81.45 \\ \hline USC-SIkJalpha & 11.22 & 116.46 & 97.48 \\ \hline SteveMcConnell-CovidComplete & 11.52 & 763.33 & 754.13 \\ \hline epiforecast-ensemble & 11.71 & 113.51 & 84.43 \\ \hline GT-DeepCOVID & 11.94 & 101.72 & 83.89 \\ \hline Bragna-RJChiver & 12.13 & 103.21 & 80.00 \\ \hline MIT\_CritData-GBCF & 12.54 & 147.06 & 129.45 \\ \hline UCSD\_NEU-DeepGLEAM & 13.14 & 133.26 & 109.72 \\ \hline COVIDhub-baseline & 13.41 & 134.83 & 102.03 \\ \hline \end{tabular} \end{table} TABLE I: Overall performance of \(cum\_death\) forecast in the US

We first visualize the data using t-SNE [43] plots. Training examples \([X_{i},Y_{i}]\) are first serialized into vectors \(Z_{i}\in\mathbb{R}^{L\times d_{m}+T\times d_{out}}\). Figure 7 shows two t-SNE plots which visualize these vectors in a 2-D space. Every plot corresponds to a particular date of forecast. In order to show the test and recent observations more clearly, their point sizes are enlarged. Two particular dates are shown. The first plot in Figure 7 visualizes the various available datasets when the forecast was carried out on 2021-Jul-17, with \(L=14\) and \(T=28\). Its test data refers to a four-week horizon till 2021-Aug-14. The recent observations (\(L+T+7\) days by 2021-Jul-17) were supposed to form the validation set in the usual practice without DA. All historical data (approximately two years) was available for training. All synthetic datasets (with \(F\in\{0,14,28\}\)) were generated based on observations as of Jul-17. Based on our divergence analysis, as shown in Figure 8, this date reached the lowest divergence so far between the test data and our generated data. For comparison, the latter plot in Figure 7 depicts the circumstances on 2022-Jan-29. In contrast, it has the highest divergence so far in our experiments. Yet, both plots of different dates visually demonstrate that, compared to the available training data (historical as well as recent observations), our generated data could produce a smaller divergence from the test data. As the pandemic has evolved so much since its emergence in 2019, its historical course of observations (real training data) may become limited in capturing the distribution of trends which we are trying to predict (test cases). Similar to the idea of [44], we estimate the KL divergence using a \(k\)-nearest-neighbor (kNN) density estimator. In Equations (18)-(19), \(z_{i}\) denotes a sample drawn from distribution \(P\), \(v_{i}\) is the volume of the sphere spanned by its \(k\) nearest neighbors in \(Q\), and \(n\) and \(m\) are the numbers of samples obtained from \(P\) and \(Q\), respectively. \[D_{KL}(P\|Q)=\mathbb{E}_{Z\sim P}\left[\log\frac{P(Z)}{Q(Z)}\right] \tag{17}\] \[\approx\frac{1}{n}\sum_{i=1}^{n}\log\frac{P(z_{i})}{Q(z_{i})} \tag{18}\] \[\approx\frac{1}{n}\sum_{i=1}^{n}\log\frac{1/n}{k/(mv_{i})} \tag{19}\] In our analysis, \(P\) corresponds to the distribution of the test data, which is the target our models need to capture. Since every Saturday corresponds to a new forecast with an updated model trained on the latest data, the test data for each model normally contains just a few samples (i.e., one sample from each state \(i\)), which can hardly be formulated by a density function. We sought to approximate it by a uniform weighting across its samples; therefore, \(P(z_{i})\) is simplified to \(1/n\) in Equation (19), whereas \(Q(z_{i})=k/(mv_{i})\) is still estimated by its \(k\)-nearest-neighbor density.
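A minimal sketch of this estimator is given below, assuming Euclidean distances and the standard volume of a \(d\)-dimensional ball; the function name and the use of scikit-learn's NearestNeighbors are illustrative choices, not necessarily our exact implementation.

```python
import numpy as np
from scipy.special import gamma as gamma_fn
from sklearn.neighbors import NearestNeighbors

def knn_kl_estimate(test_Z, ref_Z, k=3):
    # Estimate D_KL(P||Q) as in Equation (19): P (the test data) is weighted
    # uniformly (1/n), and Q is a kNN density estimate over ref_Z.
    n, d = test_Z.shape
    m = ref_Z.shape[0]
    nn = NearestNeighbors(n_neighbors=k).fit(ref_Z)
    dist, _ = nn.kneighbors(test_Z)
    radius = dist[:, -1]                      # distance to the k-th neighbor
    # volume v_i of a d-dimensional ball of that radius
    v = (np.pi ** (d / 2) / gamma_fn(d / 2 + 1)) * radius ** d
    q = k / (m * v)                           # Q(z_i) = k / (m v_i)
    return float(np.mean(np.log((1.0 / n) / q)))
```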
Figure 8 shows a comparison in terms of the estimated divergence from the test data to the 1) historical data (training set), 2) recent data (usual validation set), and 3) our augmented data. The densities of \(Q\) are approximated by the third nearest neighbor of the data points, i.e., \(k=3\). From the comparison, we can observe that the original training data has a relatively larger divergence from the test set. We can see that our synthetic data will likely be helpful for prediction because it has a smaller divergence to the target data. Please note that the negative values in the estimated divergence suggest that our kNN approximation may produce a density \(Q(z_{i})>1/n\). It means that the third nearest neighbor may be so close to the query point (test data) that it overestimates the density around it. Yet it is still a fair comparison, because all datasets are estimated using the same set of \(P(z_{i})\). As a result, it shows that, over the whole period of time, the generated data are closer to the test data than the training or the usual validation (recent) data. Consequently, it is expected that the generated data can be a good supplement as an additional supply of training data.

Fig. 5: Fitted parameters of the 16 states in Germany.

Fig. 6: Best- and worst-fitted state-level trends in Germany

#### V-C2 Forecast Performance in Germany

_Models included:_ Our results submitted to the German Hub are named HKUST-DNN_DA, which is trained on the real data and validated using synthetic data. It will also be aliased as HKUST-DNN_DA-val in our following discussion. We have also included HKUST-DNN and HKUST-DNN_DA-train in this paper for comparison. They represent our models trained without DA and using DA as training data, respectively. Figure 9 shows a comparison of three of our trained models to other models available in the German Hub. HKUST-DNN corresponds to our model without DA (trained only on historical observations) and HKUST-DNN_DA-val is our model trained using DA for validation. It is equivalent to our model HKUST-DNN_DA submitted to the German Hub. HKUST-DNN_DA-train is our model trained using DA as part of the training set, keeping the recent observations for validation. Please note that the results of HKUST-DNN and HKUST-DNN_DA-train are only available in our GitHub fork3 of the German Hub.

Footnote 3: [https://github.com/cfong32/covid19-forecast-hub-de](https://github.com/cfong32/covid19-forecast-hub-de)

Here, augmented data are generated using a 14-day forward (\(F=14\)) setting. To supply data which is closer to the situation to be forecasted, only new synthetic data generated in the previous seven days were included in each training or validation. Figure 9 shows the overall performance of our models compared to other models submitted to the German Hub. They are measured in MAE and WIS, the same as for the US. The models being compared are submissions from other research groups, such as FIAS_FZJ-Epi1Ger from [45] and USC-SIkJalpha from [6].
For details of these models, please refer to their published metadata on the Hub [2]. To the best of our knowledge, ours are the only deep learning models based on neural networks. One of the Hub ensembles (KITCOVIDhub-median_ensemble) is also compared.4 It is a simple quantile-wise median of all the submitted models, including our HKUST-DNN_DA.

Footnote 4: Since the German Hub ceased publishing official KITCOVIDhub-median_ensemble forecasts after 2021-Dec-06, we have continued calculating it for subsequent dates.

Comparing our own results, both our HKUST-DNN_DA-val (red) and HKUST-DNN_DA-train (green) models could result in lower errors than the one without DA (black). It shows that our augmented data could help our deep learning models to capture and predict the trends. Especially on Jul-03, HKUST-DNN resulted in a relatively large error due to overfitting to the training data. With the help of DA, in either validation or training, our models resulted in a much lower error.

Fig. 7: t-SNE plots of datasets on 2021-Jul-17 and 2022-Jan-29

Fig. 8: Trend of the estimated divergence

Compared with some other models in the Hub, especially during the plateaus in 2021 Sep and Dec, our model HKUST-DNN_DA-val was able to deliver more stable performance. Table II shows the overall MAE, WIS and rank of all the models. In terms of MAE, HKUST-DNN_DA-val has achieved the second best among all submissions. The average rank is calculated similarly to the US results.

### _Our DA on Other Deep Learning Models_

#### V-D1 Other Models

As a proof of effectiveness, we have tried to apply our generated data to two other deep learning models. One of them is based on a simple LSTM encoder-decoder architecture. Each of the encoder and decoder is simply a 2-layer 64-unit LSTM. Another model is the Adaptive Graph Convolutional Recurrent Network (AGCRN) [25]. It is a multivariate time series prediction architecture proposed for traffic forecasting in road networks. By applying the same quantile loss as described in Section III-D to these two architectures (see the sketch at the end of this subsection), they are also able to produce 23-quantile outputs for each of the 16 states in Germany.

#### V-D2 Results

Table III shows a comparison of our HKUST-DNN model with the two baselines, with and without DA, during the period from 2021-Jun-01 to 2022-Jan-01. With our DA used for validation (DA-val), saving the recent data for training, these models are all improved with an overall reduction in MAE. As the pandemic progresses, the number of cases can sometimes vary by orders of magnitude, and the overall MAE or WIS could be dominated by periods when the number of cases is high. Therefore, on top of the total reduction in percentage, here we also calculate the per-date reduction, which averages the percentage reductions over the different target dates. In terms of these per-date MAE and WIS metrics, the performance of our HKUST-DNN model and AGCRN can also be boosted by the augmented data. However, the WIS evaluation seems to suggest that the improvement in probabilistic forecasts (in 23-quantile format) is not as prominent as in the point (median) forecasts. The smooth and simple generated data was not able to promote performance for simple models like the standard LSTM architectures.
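For reference, a minimal sketch of the quantile (pinball) loss used to obtain 23-quantile outputs is shown below; the PyTorch formulation and the function name are illustrative assumptions about the setup described in Section III-D.

```python
import torch

def quantile_loss(pred, target, levels):
    # pred: (..., len(levels)) tensor, one column per quantile level;
    # target: (...) tensor of observed values; levels: e.g. the 23 quantiles.
    losses = []
    for j, tau in enumerate(levels):
        err = target - pred[..., j]
        # pinball loss: tau * err if err >= 0, otherwise (tau - 1) * err
        losses.append(torch.max(tau * err, (tau - 1) * err).mean())
    return torch.stack(losses).mean()
```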
### _Ablation Study on Dual-residual_

To show the benefit of using our proposed dual-residual mechanism, we have carried out the following ablation study. Table IV shows a comparison of our submitted models HKUST-DNN and HKUST-DNN_DA with three other small ensemble models under different residual settings (\(\alpha\in\{0,0.5,1\}\)). The suffixes _R1, _R2 and _dualR indicate the dual-residual settings of \(\mathbf{R}^{1}\)-only (\(\alpha=0\)), \(\mathbf{R}^{2}\)-only (\(\alpha=1\)) and balanced dual-residual (\(\alpha=0.5\)), respectively. These three residual-specific models are in fact smaller ensembles combining five composites of different random seeds, while the submitted ones are bigger ensembles combining the best 10 of all these 15 composites. Please refer to Section III-E for more details of the selection process. To help focus on this comparison, the reported average ranks and standard deviations of ranks are calculated among these four models only. As HKUST-DNN and HKUST-DNN_DA are ensembles of the other models, it is no wonder that their average and variance of ranks are the lowest. Nonetheless, this comparison clearly shows that the balanced dual-residual setting has performed more steadily (with a low rank SD) and is on average better (with a lower average rank) than either the \(\mathbf{R}^{1}\)-only or the \(\mathbf{R}^{2}\)-only setting.

\begin{table} \begin{tabular}{l r r r} \hline Model & Avg. rank\(\downarrow\) & MAE & WIS \\ \hline KITCOVIDhub-median\_ensemble & 3.93 & 8584.38 & 7308.00 \\ \hline ITWW-county\_repro & 4.11 & 10849.09 & 9355.58 \\ \hline CovidMetrics-epiBATS & 4.39 & 7698.64 & 6993.11 \\ \hline HKUST-DNN\_DA-val (Ours) & 4.42 & 8556.37 & 7978.05 \\ \hline HKUST-DNN\_DA-train (Ours) & 4.50 & 8719.78 & 7571.46 \\ \hline USC-SIkJalpha & 4.52 & 8445.18 & 6894.39 \\ \hline HKUST-DNN (Ours) & 4.89 & 8789.48 & 7019.20 \\ \hline FIAS\_FZJ-Epi1Ger & 5.24 & 21491.14 & 18783.82 \\ \hline \end{tabular} \end{table} TABLE II: Overall performance of \(inc\_case\) forecast in Germany

\begin{table} \begin{tabular}{l r r r r} \hline \hline \multicolumn{5}{c}{MAE} \\ \hline model & no-DA & DA-val & Total reduction & Per-date reduction \\ \hline HKUST-DNN & 3726.1 & 3349.6 & 10.1\% & 11.3\% \\ \hline AGCRN & 3447.0 & 3298.7 & 4.3\% & 3.5\% \\ \hline LSTM & 3412.2 & 3337.4 & 2.2\% & -2.6\% \\ \hline \multicolumn{5}{c}{WIS} \\ \hline model & no-DA & DA-val & Total reduction & Per-date reduction \\ \hline HKUST-DNN & 2836.1 & 2901.6 & -2.3\% & 6.1\% \\ \hline AGCRN & 2770.7 & 2633.3 & 5.0\% & 4.5\% \\ \hline LSTM & 2669.5 & 3138.8 & -17.6\% & -14.4\% \\ \hline \end{tabular} \end{table} TABLE III: MAE and WIS reduction using DA for validation

Fig. 9: Trends of forecast performance (MAE and WIS) in Germany

## VI Discussion

### _Probabilistic Forecast Performance_

When the available training data is limited, we need to generate some augmented data for training or validation so that the models will not overfit the limited dataset. In the case of COVID-19 forecasting, we have shown that our DA approach was effective in avoiding such an overfitting problem. Our synthetic data allows the models to consider situations which have not happened before, and thus be more generalizable to other plausible trends. However, we have observed that the DA process adopted somehow hurts the probabilistic forecast performance, which is evident in the limited improvement in WIS. In our experiments, we have often observed that the faster the trends rose or fell, the more certain and less dispersed the prediction would be. This phenomenon could be quite counter-intuitive, because normally when changes come suddenly, uncertainty should increase.
The degradation in probabilistic forecast performance and WIS could possibly be attributed to the simplicity of our data model. Compared to real data, our generated data is indeed lacking in variety and noise. Incorporating some noise into the data model would be one of the directions to explore in our future work.

### _Our Data Model_

Our SIRD data model is relatively general and simple. It suffices to provide easy access to the trends of some latent variables like the rates of infection, recovery, reinfection and death, i.e., \(\beta\), \(\gamma\), \(\omega\) and \(\delta\), respectively. However, our way of fitting the curves window by window has violated the continuity of some continuous values of \(S\), \(I\), \(R\) and \(D\). This could cause certain errors in our estimation. Nevertheless, since the whole trends are guided by the observed \(S\) and \(D\), the overall error should be limited. Due to the simplicity of our data model, it may not be able to capture complicated interactions between recent variants, such as the Omicrons. Unfortunately, especially in our recent forecasts, we observe that the DA setting is not strictly better than the no-DA setting. Using real data for validation could sometimes perform better than using synthetic data. This could mean that the parameters captured by our model cannot reflect the rapidly evolving situations. To cope with this, we may need to explore more sophisticated data models or ensemble schemes. This requires more detailed studies which will be left for our future work.

## VII Conclusion

In this work, we have proposed a deep learning method for COVID-19 forecasting. The whole method consists of a transformer model and an ensemble method, together with a data augmentation method. While the transformer model leverages signals from the trends of multiple states, the ensemble alleviates the problem of high prediction variance. Together, they have overcome some problems induced by the limited training data and have achieved some of the best results of cumulative death prediction in the US Hub [1]. Due to the limited training data available for training and validation, our DA method has been shown to be critical in enabling our German sub-national level case prediction. Divergence analysis has confirmed that our synthetic data is indeed close to the test distribution. Such augmented data is able to help validate models, which in turn frees up more recent data for training. It also ensures that our models can be generalized to situations which have not happened before in this COVID-19 crisis and during some unprecedented moments, such as the wave caused by the variant Omicron which started in late 2021. Our real-time results have been submitted to the US [1] and German [2] Hubs weekly since Jun and Nov 2021, respectively. They have contributed to these valuable datasets and also achieved some of the best results among all the submissions. For reference, our other retrospective results are also available in our forked repositories on GitHub.56

Footnote 5: [https://github.com/cfong32/covid19-forecast-hub](https://github.com/cfong32/covid19-forecast-hub)

Footnote 6: [https://github.com/cfong32/covid19-forecast-hub-de](https://github.com/cfong32/covid19-forecast-hub-de)
2308.14265
A Risk-Aware Control: Integrating Worst-Case CVaR with Control Barrier Function
This paper proposes a risk-aware control approach to enforce safety for discrete-time nonlinear systems subject to stochastic uncertainties. We derive some useful results on the worst-case Conditional Value-at-Risk (CVaR) and define a discrete-time risk-aware control barrier function using the worst-case CVaR. On this basis, we present optimization-based control approaches that integrate the worst-case CVaR into the control barrier function, taking into account both safe set and tail risk considerations. In particular, three types of safe sets are discussed in detail: half-space, polytope, and ellipsoid. It is shown that control inputs for the half-space and polytopic safe sets can be obtained via quadratic programs, while control inputs for the ellipsoidal safe set can be computed via a semidefinite program. Through numerical examples of an inverted pendulum, we compare its performance with existing methods and demonstrate the effectiveness of our proposed controller.
Masako Kishida
2023-08-28T02:46:15Z
http://arxiv.org/abs/2308.14265v1
# A Risk-Aware Control: Integrating Worst-Case CVaR with Control Barrier Function

###### Abstract

This paper proposes a risk-aware control approach to enforce safety for discrete-time nonlinear systems subject to stochastic uncertainties. We derive some useful results on the worst-case Conditional Value-at-Risk (CVaR) and define a discrete-time risk-aware control barrier function using the worst-case CVaR. On this basis, we present optimization-based control approaches that integrate the worst-case CVaR into the control barrier function, taking into account both safe set and tail risk considerations. In particular, three types of safe sets are discussed in detail: half-space, polytope, and ellipsoid. It is shown that control inputs for the half-space and polytopic safe sets can be obtained via quadratic programs, while control inputs for the ellipsoidal safe set can be computed via a semidefinite program. Through numerical examples of an inverted pendulum, we compare its performance with existing methods and demonstrate the effectiveness of our proposed controller.

## I Introduction

Safety-critical systems such as autonomous vehicles, aerospace vehicles, and medical devices require impeccable reliability due to the potentially catastrophic consequences of their failure. As these systems become more prevalent, ensuring their consistent safety in the face of uncertainty becomes more important and challenging. Control Barrier Functions (CBFs) [1] have been increasingly used to ensure the safety of various systems such as robotics [2, 3, 4], spacecraft [5], and automotive systems [6, 7]. Essentially, the CBF guarantees that the system's state will remain within a given safe set for all future times, given that the control input satisfies certain conditions, allowing us to design control systems that satisfy safety requirements. While early CBF-based control approaches rarely addressed uncertainties, recent studies have begun to consider them [8, 9, 10, 11, 12, 13, 14, 15, 16]. For example, bounded model uncertainties are considered in the control-Lyapunov-function control-barrier-function quadratic program in [8], parametric model uncertainties are considered by introducing an adaptive CBF in [11], and stochastic uncertainties are considered and treated using expectations in CBFs [16], just to name a few. However, these approaches may still have limited applicability in safety-critical systems, as they overlook risks that could lead to severe consequences when safety-critical systems operate under uncertainty. To minimize the risk that results in a severe loss, the development of advanced risk-aware control approaches becomes essential.

The Conditional Value-at-Risk (CVaR) [17, 18] is a risk measure defined as the conditional expectation of losses exceeding a certain threshold, thus quantifying the tail risk. CVaR was first used in finance [17, 19], and it is now also used in controls [20, 21, 22, 23, 24]. However, the computation of CVaR requires exact knowledge of the uncertainty probability distribution, which may limit its practical use. The worst-case CVaR [17, 19, 25], on the other hand, does not require exact knowledge of the uncertainty probability distributions, but considers the maximum risk over a set of possible uncertainty distributions, which enhances its applicability in practice. Moreover, it is known that the computation of the worst-case CVaR can be expressed as a tractable convex program for common cases [18, 26]. The use of the worst-case CVaR in control problems can be seen, for example, in [27, 28, 29, 30].
The primary objective of this paper is to design a risk-aware control approach to enforce safety for discrete-time nonlinear systems subject to stochastic uncertainties. This is achieved by integrating the worst-case CVaR into the control barrier function. The main contributions of the paper are threefold: 1) the derivation of useful results related to the worst-case CVaR, 2) the introduction of a discrete-time risk-aware control barrier function definition using the worst-case CVaR, and 3) the formulation of optimization-based control approaches that integrate the worst-case CVaR into the CBF for three types of safe sets: half-space and polytopic (as quadratic programs) and ellipsoidal (as a semidefinite program). A paper that is closely related to this paper is [31], which presents a higher-level discussion of risk-aware CBFs along with specific examples of expectation and CVaR as risk measures. Our paper focuses on the use of the worst-case CVaR and provides a concrete computational formulation for implementation. Thus, the main contributions mentioned above are unique to our paper.

The remainder of the paper is organized as follows. Section II presents the notation, definitions, and results on the worst-case CVaR. After presenting the system model we consider in Section III, Section IV gives the definition of the discrete-time risk-aware control barrier function and obtains the safety constraints to be satisfied by the control input for three types of safe sets, which is followed by the controller design in Section V. The performance of the proposed controllers is illustrated and compared with an existing standard controller in Section VI, and Section VII concludes the paper.

## II Preliminaries

### _Notation_

The sets of real numbers, real vectors of length \(n\), and real matrices of size \(n\times m\) are denoted by \(\mathbb{R}\), \(\mathbb{R}^{n}\), and \(\mathbb{R}^{n\times m}\), respectively. For \(M\in\mathbb{R}^{n\times n}\), \(M\succ 0\) indicates that \(M\) is positive definite. \(M^{\top}\) denotes the transpose of a real matrix \(M\) and \(\text{Tr}(M)\) denotes the trace of \(M\). \(I_{n}\) denotes the identity matrix of size \(n\). For a vector \(v\in\mathbb{R}^{n}\), \(\|v\|\) denotes the Euclidean norm.

### _Conditional Value-at-Risk_

Let \(\mu\in\mathbb{R}^{n}\) be the mean and \(\Sigma\in\mathbb{R}^{n\times n}\) be the covariance matrix of the random vector \(\xi\in\mathbb{R}^{n}\) under the true distribution \(\mathbb{P}\), which is the probability law of \(\xi\). Thus, it is implicitly assumed that the random vector \(\xi\) has finite second-order moments. Let \(\mathcal{P}\) denote the set of all probability distributions on \(\mathbb{R}^{n}\) that have the same first- and second-order moments as \(\mathbb{P}\), i.e., \[\mathcal{P}=\left\{\mathbb{P}:\mathbb{E}_{\mathbb{P}}\left[\begin{bmatrix}\xi\\ 1\end{bmatrix}\begin{bmatrix}\xi\\ 1\end{bmatrix}^{\top}\right]=\begin{bmatrix}\Sigma+\mu\mu^{\top}&\mu\\ \mu^{\top}&1\end{bmatrix}\right\}.\] Here \(\mathbb{E}_{\mathbb{P}}[\cdot]\) denotes the expectation with respect to \(\mathbb{P}\). The true underlying probability measure \(\mathbb{P}\) is not known exactly, but it is known that \(\mathbb{P}\in\mathcal{P}\).
**Definition II.1** (Conditional Value-at-Risk [17, 18]): _For a given measurable loss function \(L:\mathbb{R}^{n}\to\mathbb{R}\), a probability distribution \(\mathbb{P}\) on \(\mathbb{R}^{n}\) and a level \(\varepsilon\in(0,1)\), the CVaR at \(\varepsilon\) with respect to \(\mathbb{P}\) is defined as_ \[\mathbb{P}\text{-CVaR}_{\varepsilon}[L(\xi)]=\inf_{\beta\in\mathbb{R}}\left\{\beta+\frac{1}{\varepsilon}\mathbb{E}_{\mathbb{P}}[(L(\xi)-\beta)^{+}]\right\}.\] _CVaR is the conditional expectation of the loss above the \((1-\varepsilon)\)-quantile of the loss function [18] and quantifies the tail risk._

The worst-case CVaR is the supremum of CVaR over a given set of probability distributions, as defined below:

**Definition II.2** (Worst-case CVaR [18]): _The worst-case CVaR over \(\mathcal{P}\) is given by_ \[\sup_{\mathbb{P}\in\mathcal{P}}\mathbb{P}\text{-CVaR}_{\varepsilon}[L(\xi)]=\inf_{\beta\in\mathbb{R}}\left\{\beta+\frac{1}{\varepsilon}\sup_{\mathbb{P}\in\mathcal{P}}\mathbb{E}_{\mathbb{P}}[(L(\xi)-\beta)^{+}]\right\}.\] _Here, the exchange between the supremum and infimum is justified by the stochastic saddle point theorem [32]._

One reason that the (worst-case) CVaR is widely used for risk assessment is its mathematically attractive coherence properties.

**Proposition II.3** (Coherence properties [19, 33]): _The worst-case CVaR is a coherent risk measure, i.e., it satisfies the following properties. Let \(L_{1}=L_{1}(\xi)\) and \(L_{2}=L_{2}(\xi)\) be two measurable loss functions._

* _Sub-additivity: For all_ \(L_{1}\) _and_ \(L_{2}\)_,_ \[\sup_{\mathbb{P}\in\mathcal{P}}\mathbb{P}\text{-CVaR}_{\varepsilon}[L_{1}+L_{2}]\leq\sup_{\mathbb{P}\in\mathcal{P}}\mathbb{P}\text{-CVaR}_{\varepsilon}[L_{1}]+\sup_{\mathbb{P}\in\mathcal{P}}\mathbb{P}\text{-CVaR}_{\varepsilon}[L_{2}];\]
* _Positive homogeneity: For a positive constant_ \(c_{1}>0\)_,_ \[\sup_{\mathbb{P}\in\mathcal{P}}\mathbb{P}\text{-CVaR}_{\varepsilon}[c_{1}L_{1}]=c_{1}\sup_{\mathbb{P}\in\mathcal{P}}\mathbb{P}\text{-CVaR}_{\varepsilon}[L_{1}];\]
* _Monotonicity: If_ \(L_{1}\leq L_{2}\) _almost surely,_ \[\sup_{\mathbb{P}\in\mathcal{P}}\mathbb{P}\text{-CVaR}_{\varepsilon}[L_{1}]\leq\sup_{\mathbb{P}\in\mathcal{P}}\mathbb{P}\text{-CVaR}_{\varepsilon}[L_{2}];\]
* _Translation invariance: For a constant_ \(c_{2}\)_,_ \[\sup_{\mathbb{P}\in\mathcal{P}}\mathbb{P}\text{-CVaR}_{\varepsilon}[L_{1}+c_{2}]=\sup_{\mathbb{P}\in\mathcal{P}}\mathbb{P}\text{-CVaR}_{\varepsilon}[L_{1}]+c_{2}.\]

Another reason that the worst-case CVaR is used for risk assessment is that it can be computed efficiently for some special cases that appear often. If \(L(\xi)\) is quadratic with respect to \(\xi\), then the worst-case CVaR can be computed by a semidefinite program. Let the second-order moment matrix of \(\xi\) be \[\Omega=\begin{bmatrix}\Sigma+\mu\mu^{\top}&\mu\\ \mu^{\top}&1\end{bmatrix}. \tag{1}\]

**Lemma II.4** (Quadratic function [18, 26]): _Let_ \[L(\xi)=\xi^{\top}P\xi+2q^{\top}\xi+r, \tag{2}\] _where \(P\in\mathbb{S}^{n}\), \(q\in\mathbb{R}^{n}\) and \(r\in\mathbb{R}\). Then_ \[\sup_{\mathbb{P}\in\mathcal{P}}\mathbb{P}\text{-CVaR}_{\varepsilon}[L(\xi)]=\inf_{\beta,N}\left\{\beta+\frac{1}{\varepsilon}\text{Tr}(\Omega N):N\succcurlyeq 0,\ N-\begin{bmatrix}P&q\\ q^{\top}&r-\beta\end{bmatrix}\succcurlyeq 0\right\}. \tag{3}\]
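As an illustration, the SDP in (3) can be solved directly with an off-the-shelf conic solver. The following is a minimal sketch using CVXPY; the function name and the solver choice are our own illustrative assumptions, not part of the paper.

```python
import cvxpy as cp
import numpy as np

def worst_case_cvar_quadratic(P, q, r, mu, Sigma, eps):
    # Worst-case CVaR of L(xi) = xi' P xi + 2 q' xi + r over all distributions
    # with mean mu and covariance Sigma, via the SDP (3) of Lemma II.4.
    n = len(q)
    Omega = np.block([[Sigma + np.outer(mu, mu), mu.reshape(-1, 1)],
                      [mu.reshape(1, -1), np.ones((1, 1))]])      # as in (1)
    N = cp.Variable((n + 1, n + 1), symmetric=True)
    beta = cp.Variable()
    M = cp.bmat([[P, q.reshape(-1, 1)],
                 [q.reshape(1, -1), cp.reshape(r - beta, (1, 1))]])
    constraints = [N >> 0, N - M >> 0]
    prob = cp.Problem(cp.Minimize(beta + cp.trace(Omega @ N) / eps),
                      constraints)
    prob.solve(solver=cp.SCS)
    return prob.value
```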
If the mean of the random vector \(\xi\) is zero, Lemma II.4 leads to the following useful results.

**Lemma II.5** (A property of \(L(\xi)\)): _Suppose \(\mu=0\). Let_ \[L_{1}(\xi)=\xi^{\top}P\xi+2q^{\top}\xi+r,\] \[L_{2}(\xi)=\xi^{\top}P\xi-2q^{\top}\xi+r,\] _where \(P\in\mathbb{S}^{n}\), \(q\in\mathbb{R}^{n}\) and \(r\in\mathbb{R}\). Then it holds that_ \[\sup_{\mathbb{P}\in\mathcal{P}}\mathbb{P}\text{-CVaR}_{\varepsilon}[L_{1}(\xi)]=\sup_{\mathbb{P}\in\mathcal{P}}\mathbb{P}\text{-CVaR}_{\varepsilon}[L_{2}(\xi)]. \tag{4}\]

Proof: Let \(N\) and \(\bar{N}\) solve (3) for \(\sup_{\mathbb{P}\in\mathcal{P}}\mathbb{P}\text{-CVaR}_{\varepsilon}[L_{1}(\xi)]\) and \(\sup_{\mathbb{P}\in\mathcal{P}}\mathbb{P}\text{-CVaR}_{\varepsilon}[L_{2}(\xi)]\), respectively. Then, they satisfy \[N=\begin{bmatrix}I_{n}&0\\ 0&-1\end{bmatrix}\bar{N}\begin{bmatrix}I_{n}&0\\ 0&-1\end{bmatrix}.\] Since \(\Omega\) in (1) is block diagonal for \(\mu=0\), the signs of the off-diagonal blocks do not affect the objective function, and the claim follows.

**Corollary II.6**: _Suppose \(\mu=0\) and \(\xi\in\mathbb{R}\). Then it holds that_ \[\sup_{\mathbb{P}\in\mathcal{P}}\mathbb{P}\text{-CVaR}_{\varepsilon}[\xi]=\sup_{\mathbb{P}\in\mathcal{P}}\mathbb{P}\text{-CVaR}_{\varepsilon}[-\xi]\geq 0. \tag{5}\] Indeed, the equality follows from Lemma II.5, and the nonnegativity follows from sub-additivity: \(0=\sup_{\mathbb{P}\in\mathcal{P}}\mathbb{P}\text{-CVaR}_{\varepsilon}[\xi-\xi]\leq 2\sup_{\mathbb{P}\in\mathcal{P}}\mathbb{P}\text{-CVaR}_{\varepsilon}[\xi]\).

**Lemma II.7** (A bound on linear \(L(\xi)\)): _Suppose \(\mu=0\). Then,_ \[\sup_{\mathbb{P}\in\mathcal{P}}\mathbb{P}\text{-CVaR}_{\varepsilon}[q^{\top}\xi]\leq\sum_{i=1}^{n}|q_{i}|\sup_{\mathbb{P}\in\mathcal{P}}\mathbb{P}\text{-CVaR}_{\varepsilon}[\xi_{i}] \tag{6}\] _where \(q\in\mathbb{R}^{n}\)._

Proof: From Proposition II.3, it follows that \[\begin{split}\sup_{\mathbb{P}\in\mathcal{P}}\mathbb{P}\text{-CVaR}_{\varepsilon}[q^{\top}\xi]&=\sup_{\mathbb{P}\in\mathcal{P}}\mathbb{P}\text{-CVaR}_{\varepsilon}\left[\sum_{i=1}^{n}q_{i}\xi_{i}\right]\\ &\leq\sum_{i=1}^{n}\sup_{\mathbb{P}\in\mathcal{P}}\mathbb{P}\text{-CVaR}_{\varepsilon}[q_{i}\xi_{i}]\\ &=\sum_{i=1}^{n}|q_{i}|\sup_{\mathbb{P}\in\mathcal{P}}\mathbb{P}\text{-CVaR}_{\varepsilon}[\text{sign}(q_{i})\xi_{i}]\\ &=\sum_{i=1}^{n}|q_{i}|\sup_{\mathbb{P}\in\mathcal{P}}\mathbb{P}\text{-CVaR}_{\varepsilon}[\xi_{i}].\end{split} \tag{7}\] The last equality follows from Lemma II.5.

**Remark II.8**: _For a vector \(v=[v_{1},\cdots,v_{n}]^{\top}\in\mathbb{R}^{n}\), we define element-wise inequality, element-wise absolute value and element-wise worst-case CVaR by_ \[v\geq 0\Leftrightarrow v_{1}\geq 0,\cdots,v_{n}\geq 0, \tag{8}\] \[|v|=\begin{bmatrix}|v_{1}|\\ \vdots\\ |v_{n}|\end{bmatrix}, \tag{9}\] \[\sup_{\mathbb{P}\in\mathcal{P}}\mathbb{P}\text{-CVaR}_{\varepsilon}[v]=\begin{bmatrix}\sup_{\mathbb{P}\in\mathcal{P}}\mathbb{P}\text{-CVaR}_{\varepsilon}[v_{1}]\\ \vdots\\ \sup_{\mathbb{P}\in\mathcal{P}}\mathbb{P}\text{-CVaR}_{\varepsilon}[v_{n}]\end{bmatrix}, \tag{10}\] _respectively._

## III Discrete-time Control-affine System

This paper deals with a discrete-time control-affine system in the form of \[x_{t+1}=f(x_{t})+g(x_{t})u_{t}+w_{t},\ x_{0}\in\mathcal{C}, \tag{11}\] where \(x_{t}\in\mathbb{R}^{n}\), \(u_{t}\in\mathbb{R}^{m}\) and \(w_{t}\in\mathbb{R}^{n}\) denote the state, the control input, and the disturbance of the system at discrete time instant \(t\), respectively, \(f:\mathbb{R}^{n}\to\mathbb{R}^{n}\) and \(g:\mathbb{R}^{n}\to\mathbb{R}^{n\times m}\) are continuous functions, and \(\mathcal{C}\) is a safe set.
It is assumed that \(w_{t}\) is a sequence of independent and identically distributed random variables with a distribution having zero mean \(\mu_{w}=0\) and finite covariance \(\Sigma_{w}\succ 0\) for all \(t\). Although the precise probability measure \(\mathbb{P}\) is not known exactly, it is known that \(\mathbb{P}\in\mathcal{P}\). This set is defined as \[\mathcal{P}=\left\{\mathbb{P}:\mathbb{E}_{\mathbb{P}}\left[\begin{bmatrix}w_{i}\\ 1\end{bmatrix}\begin{bmatrix}w_{j}\\ 1\end{bmatrix}^{\top}\right]=\begin{bmatrix}\Sigma_{w}\delta_{ij}&0\\ 0^{\top}&1\end{bmatrix},\forall i,j\right\},\] where \(\delta_{ij}\) denotes the Kronecker delta. Thus, the second-order moment matrix of \(w_{t}\) is given by \[\Omega_{w}=\begin{bmatrix}\Sigma_{w}&0\\ 0^{\top}&1\end{bmatrix}. \tag{12}\]

## IV Control Barrier Function

We extend the discrete-time control barrier function described in [16] to accommodate risk considerations. We first define the safe set \(\mathcal{C}\) as the superlevel set of a continuously differentiable function \(h:\mathbb{R}^{n}\to\mathbb{R}^{p}\), \[\mathcal{C}=\{x\in\mathbb{R}^{n}:h(x)\geq 0\}. \tag{13}\] We assume \(p=1\) except for Section IV-B. With \(\mathcal{C}\), the discrete-time risk-aware control barrier function is defined below.

**Definition IV.1** (Discrete-time Risk-Aware Control Barrier Function): _The function \(h\) is a discrete-time risk-aware control barrier function for (11) on \(\mathcal{C}\) if there exists an \(\alpha\in[0,1]\) such that for each \(x\in\mathbb{R}^{n}\), there exists a \(u\in\mathbb{R}^{m}\) such that:_ \[\sup_{\mathbb{P}\in\mathcal{P}}\mathbb{P}\text{-CVaR}_{\varepsilon}[-h(x^{+})]\leq-\alpha h(x). \tag{14}\]

**Remark IV.2**: _The condition (14) is sufficient for_ \[\inf_{\mathbb{P}\in\mathcal{P}}\mathbb{P}[\alpha h(x)-h(x^{+})\leq 0]\geq 1-\varepsilon. \tag{15}\] _Moreover, if \(-h(x^{+})\) is concave in \(w\) or quadratic in \(w\), then the condition (14) is a necessary and sufficient condition for (15) [18]. Thus, roughly speaking, the condition (14) aims at having \(h(x^{+})\geq\alpha h(x)\) with high probability._

**Remark IV.3**: _It is worth noting that the condition (14) is different from_ \[\sup_{\mathbb{P}\in\mathcal{P}}\mathbb{P}\text{-CVaR}_{\varepsilon}[h(x^{+})]\geq\alpha h(x) \tag{16}\] _due to the coherence properties in Proposition II.3, and it thus differs from the definition in [31]._

In the following, we will see how the safety constraint (14) can be simplified so that the integrated control problem can be solved efficiently.

### _Half-space safe set_

Consider a scenario where the safe set is a half-space. In this case, it can be represented by an affine function \(h(x)\): \[\mathcal{C}_{\text{hs}}=\{x\in\mathbb{R}^{n}:h(x)=q^{\top}x+r\geq 0\}. \tag{17}\] Given this representation, the constraint (14) can be expressed as a linear constraint on \(u\) as follows:

**Theorem IV.4**: _If \(h(x)=q^{\top}x+r\), \(q\in\mathbb{R}^{n}\) and \(r\in\mathbb{R}\), then the constraint (14) holds if and only if_ \[-q^{\top}g(x)u\leq\phi(x), \tag{18}\] _where_ \[\phi(x)=q^{\top}(f(x)-\alpha x)+(1-\alpha)r-\sup_{\mathbb{P}\in\mathcal{P}}\mathbb{P}\text{-CVaR}_{\varepsilon}[q^{\top}w]. \tag{19}\]

Proof: By substitution, the right-hand side of (14) becomes \[-\alpha h(x)=-\alpha(q^{\top}x+r). \tag{20}\]
On the other hand, using Proposition II.3 and Lemma II.5, the left-hand side of (14) becomes \[\begin{split}\sup_{\mathbb{P}\in\mathcal{P}}\mathbb{P}\text{-CVaR}_{\varepsilon}[-h(x^{+})]&=\sup_{\mathbb{P}\in\mathcal{P}}\mathbb{P}\text{-CVaR}_{\varepsilon}[-(q^{\top}x^{+}+r)]\\ &=\sup_{\mathbb{P}\in\mathcal{P}}\mathbb{P}\text{-CVaR}_{\varepsilon}[-(q^{\top}(f(x)+g(x)u+w)+r)]\\ &=-\left(q^{\top}(f(x)+g(x)u)+r\right)+\sup_{\mathbb{P}\in\mathcal{P}}\mathbb{P}\text{-CVaR}_{\varepsilon}[-q^{\top}w]\\ &=-\left(q^{\top}(f(x)+g(x)u)+r\right)+\sup_{\mathbb{P}\in\mathcal{P}}\mathbb{P}\text{-CVaR}_{\varepsilon}[q^{\top}w].\end{split} \tag{21}\] Thus, the constraint (14) simplifies as \[-q^{\top}g(x)u\leq q^{\top}f(x)+r-\sup_{\mathbb{P}\in\mathcal{P}}\mathbb{P}\text{-CVaR}_{\varepsilon}[q^{\top}w]-\alpha(q^{\top}x+r) \tag{22}\] \[=\phi(x). \tag{23}\]

**Remark IV.5**: _As \(\varepsilon\) approaches 1, \(\sup_{\mathbb{P}\in\mathcal{P}}\mathbb{P}\text{-CVaR}_{\varepsilon}[q^{\top}w]\) approaches 0, so the condition (18) approaches_ \[-q^{\top}g(x)u\leq q^{\top}f(x)+r-\alpha(q^{\top}x+r), \tag{24}\] _which disregards the effect of uncertainties, i.e., focuses on the performance of the expected value._

### _Polytopic safe set_

In scenarios where the safe set takes the form of a polytope, we may also explicitly express the condition for \(u\) to satisfy (14). In this case, the safe set is defined as the intersection of the superlevel sets of affine functions \(h_{i}(x)=q_{i}^{\top}x+r_{i}\) for \(i=1,\cdots,m\), with \(q_{i}\in\mathbb{R}^{n}\) and \(r_{i}\in\mathbb{R}\): \[\begin{split}\mathcal{C}_{\text{poly}}&=\{x\in\mathbb{R}^{n}:h_{i}(x)\geq 0,i=1,\cdots,m\}\\ &=\{x\in\mathbb{R}^{n}:h(x)=Q^{\top}x+r\geq 0\},\end{split} \tag{25}\] where \[Q=\begin{bmatrix}q_{1}&\cdots&q_{m}\end{bmatrix}\in\mathbb{R}^{n\times m},\quad r=\begin{bmatrix}r_{1}&\cdots&r_{m}\end{bmatrix}^{\top}\in\mathbb{R}^{m}. \tag{26}\] Similarly to Theorem IV.4, we have the following result.

**Theorem IV.6**: _If \(h(x)=Q^{\top}x+r\) with \(Q\in\mathbb{R}^{n\times m}\) and \(r\in\mathbb{R}^{m}\), then the constraint (14) holds if and only if_ \[-Q^{\top}g(x)u\leq\phi(x), \tag{27}\] _where_ \[\phi(x)=Q^{\top}(f(x)-\alpha x)+(1-\alpha)r-\sup_{\mathbb{P}\in\mathcal{P}}\mathbb{P}\text{-CVaR}_{\varepsilon}[Q^{\top}w]. \tag{28}\] The proof directly follows from the proof of Theorem IV.4.

**Remark IV.7**: _Similar to earlier observations, as \(\varepsilon\) approaches 1, the condition (27) clearly approaches_ \[-Q^{\top}g(x)u\leq Q^{\top}(f(x)-\alpha x)+(1-\alpha)r. \tag{29}\]

**Remark IV.8**: _In this case, as well as in the following two cases, there is no guarantee that there exists a feasible input that satisfies the constraint (27). If infeasible, we may want to introduce some penalty for the violation of this constraint. This is discussed in the controller design in Section V._

### _Ellipsoidal safe set_

Another tractable scenario arises when the safe set takes the form of an ellipsoid. In such situations, the safe set is expressed using a positive definite matrix \(E=E^{\top}\succ 0\): \[\begin{split}\mathcal{C}_{\text{ell}}&=\{x\in\mathbb{R}^{n}:x^{\top}Ex\leq r\}\\ &=\{x\in\mathbb{R}^{n}:h(x)=-x^{\top}Ex+r\geq 0\}.\end{split} \tag{30}\] In this case, a sufficient condition for the constraint (14) to be satisfied can be expressed using a quadratic function of \(u\) as follows:

**Theorem IV.9**: _Let \(h(x)=-x^{\top}Ex+r\), \(E\in\mathbb{R}^{n\times n}\) with \(E=E^{\top}\succ 0\) and \(r\in\mathbb{R}\)._
_Also define \(\bar{u}=[u^{\top},v^{\top}]^{\top}\) for the control input \(u\in\mathbb{R}^{m}\) and some variable \(v\in\mathbb{R}^{n}\). Then, the constraint (14) holds if_ \[\begin{split}1)\ &\bar{u}^{\top}\bar{H}(x)\bar{u}+\bar{q}^{\top}(x)\bar{u}+\bar{r}(x)\leq 0\text{ and}\\ 2)\ &\bar{A}(x)\bar{u}\leq 0,\end{split} \tag{31}\] _where_ \[\begin{split}\bar{H}(x)&=\begin{bmatrix}g(x)^{\top}Eg(x)&0\\ 0&0\end{bmatrix},\\ \bar{q}(x)&=2\begin{bmatrix}g(x)^{\top}Ef(x)\\ \sup_{\mathbb{P}\in\mathcal{P}}\mathbb{P}\text{-CVaR}_{\varepsilon}[w]\end{bmatrix},\\ \bar{r}(x)&=\sup_{\mathbb{P}\in\mathcal{P}}\mathbb{P}\text{-CVaR}_{\varepsilon}[w^{\top}Ew+(2Ef(x))^{\top}w]\\ &\quad+f(x)^{\top}Ef(x)-r-\alpha(x^{\top}Ex-r),\\ \bar{A}(x)&=\begin{bmatrix}Eg(x)&-I\\ -Eg(x)&-I\end{bmatrix}.\end{split} \tag{32}\]

Proof: Using Proposition II.3 and Lemmas II.5 and II.7, it follows that \[\begin{split}&\sup_{\mathbb{P}\in\mathcal{P}}\mathbb{P}\text{-CVaR}_{\varepsilon}[-h(x^{+})]+\alpha h(x)\\ &=\sup_{\mathbb{P}\in\mathcal{P}}\mathbb{P}\text{-CVaR}_{\varepsilon}[(f(x)+g(x)u+w)^{\top}E(f(x)+g(x)u+w)-r]\\ &\quad-\alpha(x^{\top}Ex-r)\\ &=\sup_{\mathbb{P}\in\mathcal{P}}\mathbb{P}\text{-CVaR}_{\varepsilon}[w^{\top}Ew+2(f(x)+g(x)u)^{\top}Ew]\\ &\quad+(f(x)+g(x)u)^{\top}E(f(x)+g(x)u)-r-\alpha(x^{\top}Ex-r)\\ &\leq\sup_{\mathbb{P}\in\mathcal{P}}\mathbb{P}\text{-CVaR}_{\varepsilon}[w^{\top}Ew+(2Ef(x))^{\top}w]+\sup_{\mathbb{P}\in\mathcal{P}}\mathbb{P}\text{-CVaR}_{\varepsilon}[(2Eg(x)u)^{\top}w]\\ &\quad+u^{\top}g^{\top}(x)Eg(x)u+(2g^{\top}(x)Ef(x))^{\top}u+f^{\top}(x)Ef(x)-r-\alpha(x^{\top}Ex-r)\\ &\leq\sup_{\mathbb{P}\in\mathcal{P}}\mathbb{P}\text{-CVaR}_{\varepsilon}[w^{\top}Ew+(2Ef(x))^{\top}w]+2v^{\top}\sup_{\mathbb{P}\in\mathcal{P}}\mathbb{P}\text{-CVaR}_{\varepsilon}[w]\\ &\quad+u^{\top}g^{\top}(x)Eg(x)u+(2g^{\top}(x)Ef(x))^{\top}u+f^{\top}(x)Ef(x)-r-\alpha(x^{\top}Ex-r),\end{split} \tag{33}\] where \[-v\leq Eg(x)u\leq v. \tag{34}\] Thus, a sufficient condition for \[\sup_{\mathbb{P}\in\mathcal{P}}\mathbb{P}\text{-CVaR}_{\varepsilon}[-h(x^{+})]+\alpha h(x)\leq 0 \tag{35}\] is to satisfy \[\begin{split}&\sup_{\mathbb{P}\in\mathcal{P}}\mathbb{P}\text{-CVaR}_{\varepsilon}[w^{\top}Ew+(2Ef(x))^{\top}w]+2v^{\top}\sup_{\mathbb{P}\in\mathcal{P}}\mathbb{P}\text{-CVaR}_{\varepsilon}[w]\\ &+u^{\top}g^{\top}(x)Eg(x)u+(2g^{\top}(x)Ef(x))^{\top}u+f^{\top}(x)Ef(x)-r-\alpha(x^{\top}Ex-r)\leq 0\end{split}\] together with (34). These can be expressed as in (31) with the definitions in (32).

**Remark IV.10**: _Unlike the previous two cases in Subsections IV-A and IV-B, the obtained condition (31) is only sufficient for the safety condition (14) to be satisfied. This is because we utilize Lemma II.7 to pull the control input out of the worst-case CVaR. Moreover, there is no guarantee that there is a feasible input that satisfies the safety constraint (31). As in the case of Subsection IV-B, if infeasible, we may want to introduce some penalty for the violation of this constraint, which is discussed in Section V._

### _General safe set_

For a general safe set, which is expressed using a general function \(h(x)\), there is no simple way to obtain an equivalent or sufficient condition for the safety constraint (14).
Similarly to [16], this motivates us to consider the relation between \(\sup_{\mathbb{P}\in\mathcal{P}}\mathbb{P}\text{-CVaR}_{\varepsilon}[-h(x^{+})]\) and the function \[-h\left(\sup_{\mathbb{P}\in\mathcal{P}}\mathbb{P}\text{-CVaR}_{\varepsilon}[x^{+}]\right), \tag{36}\] which is likely to be more tractable. Here, we restrict ourselves to the case that \(h\) is concave in \(x\) and satisfies \[-\underline{\sigma}I\leq\nabla^{2}h(x),\ \forall x\in\mathbb{R}^{n} \tag{37}\] for some \(\underline{\sigma}\geq 0\). Define \[\begin{split}\bar{w}&=\sup_{\mathbb{P}\in\mathcal{P}}\mathbb{P}\text{-CVaR}_{\varepsilon}[w],\\ \bar{x}^{+}&=\sup_{\mathbb{P}\in\mathcal{P}}\mathbb{P}\text{-CVaR}_{\varepsilon}[x^{+}]=f(x)+g(x)u+\bar{w},\\ z&=x^{+}-\bar{x}^{+}=w-\bar{w}.\end{split} \tag{38}\] Then we have the following result.

**Lemma IV.11**: _It holds that_ \[\sup_{\mathbb{P}\in\mathcal{P}}\mathbb{P}\text{-CVaR}_{\varepsilon}[-h(x^{+})]\leq-h(\bar{x}^{+})+\sup_{\mathbb{P}\in\mathcal{P}}\mathbb{P}\text{-CVaR}_{\varepsilon}\left[-\nabla h(\bar{x}^{+})^{\top}z+\frac{\underline{\sigma}}{2}z^{\top}z\right]. \tag{39}\]

Proof: By Taylor's theorem, there exists some \(c\) on the line segment connecting \(x^{+}\) and \(\bar{x}^{+}\) such that \[h(x^{+})-h(\bar{x}^{+})=\nabla h(\bar{x}^{+})^{\top}z+\frac{1}{2}z^{\top}\nabla^{2}h(c)z. \tag{40}\] Under the assumption (37), it follows that \[\begin{split}-h(x^{+})&=-h(\bar{x}^{+})-\nabla h(\bar{x}^{+})^{\top}z-\frac{1}{2}z^{\top}\nabla^{2}h(c)z\\ &\leq-h(\bar{x}^{+})-\nabla h(\bar{x}^{+})^{\top}z+\frac{\underline{\sigma}}{2}z^{\top}z.\end{split} \tag{41}\] Thus, using Proposition II.3, (39) is obtained.

**Remark IV.12**: _As \(\varepsilon\) approaches 1, this agrees with the result in [16]._

**Corollary IV.13**: _From Lemma IV.11, a sufficient condition for the constraint (14) to be satisfied is_ \[-h(\bar{x}^{+})+\sup_{\mathbb{P}\in\mathcal{P}}\mathbb{P}\text{-CVaR}_{\varepsilon}\left[-\nabla h(\bar{x}^{+})^{\top}z+\frac{\underline{\sigma}}{2}z^{\top}z\right]\leq-\alpha h(x). \tag{42}\]

**Remark IV.14**: _The control input \(u\) enters \(h(\bar{x}^{+})\) as well as \(\nabla h(\bar{x}^{+})\). Thus, the condition for \(u\) to satisfy the safety constraint still cannot be expressed explicitly. Nevertheless, we may check the feasibility of the condition (42) once \(u\) is given, using semidefinite programming: here, the substitution of (38) leads to_ \[\begin{split}&\sup_{\mathbb{P}\in\mathcal{P}}\mathbb{P}\text{-CVaR}_{\varepsilon}\left[-\nabla h(\bar{x}^{+})^{\top}z+\frac{\underline{\sigma}}{2}z^{\top}z\right]\\ &=\sup_{\mathbb{P}\in\mathcal{P}}\mathbb{P}\text{-CVaR}_{\varepsilon}\left[-\nabla h(\bar{x}^{+})^{\top}(w-\bar{w})+\frac{\underline{\sigma}}{2}(w-\bar{w})^{\top}(w-\bar{w})\right]\\ &=\sup_{\mathbb{P}\in\mathcal{P}}\mathbb{P}\text{-CVaR}_{\varepsilon}\left[\frac{\underline{\sigma}}{2}w^{\top}w-\left(\nabla h(\bar{x}^{+})+\underline{\sigma}\bar{w}\right)^{\top}w+\frac{\underline{\sigma}}{2}\bar{w}^{\top}\bar{w}+\nabla h(\bar{x}^{+})^{\top}\bar{w}\right]. \tag{43}\end{split}\] _Namely, \(-\nabla h(\bar{x}^{+})^{\top}z+\frac{\underline{\sigma}}{2}z^{\top}z\) is quadratic and convex with respect to \(w\); thus the worst-case CVaR can be evaluated using semidefinite programming as in Lemma II.4 once \(x\) and \(u\) are given._

## V Controller Design

Given the derived safety constraints, our objective is to design controllers that minimally modify a nominal controller that does not take the safe set into account.
For this purpose, optimization-based control approaches are developed [16, 34]: \[\begin{split}u^{*}(x)=\text{argmin}_{u}\|u-u_{\text{nom}}(x)\|^{2}\\ \text{s.t.}\ \ \text{risk-aware safety constraint}\end{split} \tag{44}\] This controller ensures safety while striving for minimal point-wise deviation from the nominal controller \(u_{\text{nom}}\), which does not take the safe set into consideration. The following are the specific optimization problems used to obtain control inputs for each of the three safe sets. We do not discuss the general safe set case.

#### V-1 Half-space safe set

If \(h(x)=q^{\top}x+r\), \(q\in\mathbb{R}^{n}\) and \(r\in\mathbb{R}\), then using the result of Theorem IV.4, the control input \(u^{*}(x)\) is obtained by repeatedly solving \[\begin{split}u^{*}(x)&=\text{argmin}_{u}\|u-u_{\text{nom}}(x)\|^{2}\\ \text{s.t.}\ \ &-q^{\top}g(x)u\leq\phi(x),\end{split} \tag{45}\] where \[\phi(x)=q^{\top}(f(x)-\alpha x)+(1-\alpha)r-\sup_{\mathbb{P}\in\mathcal{P}}\mathbb{P}\text{-CVaR}_{\varepsilon}[q^{\top}w]. \tag{46}\] This is a quadratic program and can be solved efficiently. Moreover, the term \(\sup_{\mathbb{P}\in\mathcal{P}}\mathbb{P}\text{-CVaR}_{\varepsilon}[q^{\top}w]\) can be precomputed because it does not depend on \(x\). This term serves as a safety margin to avoid the potential risk of violation due to disturbances. This margin becomes smaller as we choose a larger \(\varepsilon\), leading to a scenario where the disturbances are effectively disregarded.

#### V-2 Polytopic safe set

If \(h(x)=Q^{\top}x+r\), \(Q\in\mathbb{R}^{n\times m}\) and \(r\in\mathbb{R}^{m}\), then using the result of Theorem IV.6, the control input is obtained by solving \[\begin{split}u^{*}(x)&=\text{argmin}_{u}\|u-u_{\text{nom}}(x)\|^{2}\\ \text{s.t.}\ \ &-Q^{\top}g(x)u\leq\phi(x),\end{split} \tag{47}\] where \[\phi(x)=Q^{\top}(f(x)-\alpha x)+(1-\alpha)r-\sup_{\mathbb{P}\in\mathcal{P}}\mathbb{P}\text{-CVaR}_{\varepsilon}[Q^{\top}w]. \tag{48}\] As mentioned in Subsection IV-B, the optimization problem (47) can be infeasible, in which case one way of revising it is \[\begin{split}u^{*}(x)&=\text{argmin}_{u}\|u-u_{\text{nom}}(x)\|^{2}+\rho\delta\\ \text{s.t.}\ \ &-Q^{\top}g(x)u\leq\phi(x)+\delta,\\ &\delta\geq 0,\end{split} \tag{49}\] by introducing a parameter \(\rho>0\). Other simple modifications would work as well. In any case, both optimizations in (47) and (49) are quadratic programs and can be solved efficiently.

#### V-3 Ellipsoidal safe set

If \(h(x)=-x^{\top}Ex+r\) with \(E=E^{\top}\succ 0\) and \(r\in\mathbb{R}\), then using the result of Theorem IV.9, the control input is obtained by solving \[\begin{split}u^{*}(x)&=\text{argmin}_{u}\|u-u_{\text{nom}}(x)\|^{2}\\ \text{s.t.}\ \ &\bar{u}^{\top}\bar{H}(x)\bar{u}+\bar{q}^{\top}(x)\bar{u}+\bar{r}(x)\leq 0,\\ &\bar{A}(x)\bar{u}\leq 0,\end{split} \tag{50}\] using the expressions in (32). If infeasible, the optimization in (50) can be revised in a similar manner as for the polytopic safe set, \[\begin{split}u^{*}(x)&=\text{argmin}_{u}\|u-u_{\text{nom}}(x)\|^{2}+\rho\delta\\ \text{s.t.}\ \ &\bar{u}^{\top}\bar{H}(x)\bar{u}+\bar{q}^{\top}(x)\bar{u}+\bar{r}(x)\leq\delta,\\ &\bar{A}(x)\bar{u}\leq 0,\\ &\delta\geq 0,\end{split} \tag{51}\] by introducing a parameter \(\rho>0\). The constraint is quadratic in this case; thus, the optimization problem (51) is a quadratically constrained quadratic program. However, because the objective function is convex and \(\bar{H}(x)\) is positive semidefinite, this can be written as a semidefinite program, which is tractable.
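For concreteness, the half-space controller (45) can be implemented as follows. This is a minimal sketch: the closed form used for the margin, \(\sup_{\mathbb{P}\in\mathcal{P}}\mathbb{P}\text{-CVaR}_{\varepsilon}[q^{\top}w]=\sqrt{(1-\varepsilon)/\varepsilon}\,\sqrt{q^{\top}\Sigma_{w}q}\), is the standard moment-based bound for linear losses with zero mean (our own assumption, not a formula stated in this paper), and the function names are illustrative.

```python
import cvxpy as cp
import numpy as np

def cvar_margin(q, Sigma_w, eps):
    # Worst-case CVaR of the linear loss q' w for zero-mean w with
    # covariance Sigma_w (standard moment-based closed form, assumed here).
    return np.sqrt((1 - eps) / eps) * np.sqrt(q @ Sigma_w @ q)

def halfspace_qp(x, u_nom, f, g, q, r, alpha, Sigma_w, eps):
    # Solve the quadratic program (45) for the half-space safe set.
    u = cp.Variable(len(u_nom))
    phi = q @ (f(x) - alpha * x) + (1 - alpha) * r \
        - cvar_margin(q, Sigma_w, eps)               # phi(x) as in (46)
    prob = cp.Problem(cp.Minimize(cp.sum_squares(u - u_nom)),
                      [-(q @ g(x)) @ u <= phi])
    prob.solve(solver=cp.OSQP)
    return u.value
```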
## VI Numerical Example

In this section, we evaluate the performance of the controllers developed in Section V through simulations on an inverted pendulum model. The discrete-time dynamics of this model around its upright equilibrium position is given by \[\begin{bmatrix}x_{t+1}\\ y_{t+1}\end{bmatrix}=\begin{bmatrix}x_{t}+y_{t}\Delta t\\ y_{t}+\sin(x_{t})\Delta t\end{bmatrix}+\begin{bmatrix}0\\ \Delta t\end{bmatrix}u_{t}+w_{t}, \tag{52}\] where \(x_{t}\) is the angle from the upright position, \(y_{t}\) is the angular velocity, \(u_{t}\) is the control input, and \(w_{t}\) is the disturbance. The disturbance is assumed to be zero-mean with covariance \(\Sigma_{w}=\begin{bmatrix}0.001^{2}&0\\ 0&0.003^{2}\end{bmatrix}\), and the sampling time is \(\Delta t=0.01\). To quantify the uncertainty, the worst-case CVaR is used with \(\varepsilon=0.3\). For the risk-aware safety constraint, \(\alpha=0.8\) is chosen. We take the nominal stabilizing controller \[u_{t}=-x_{t}-\sin(x_{t})-y_{t}, \tag{53}\] which is obtained by feedback linearization. In the following, all simulations are performed for a duration of 8 time units, starting from the initial state \([0.3,0.2]^{\top}\). First, we compare the proposed controller with the nominal controller (53) using the three types of safe sets: * Half-space: \(h(x)=q^{\top}x+r\) with \(q=\begin{bmatrix}1.125&1\end{bmatrix}^{\top}\) and \(r=0.075\) * Polytope: \(h(x)=Q^{\top}x+r\) with \(Q^{\top}=\begin{bmatrix}1.125&1\\ 0.5&1\end{bmatrix}\) and \(r=\begin{bmatrix}0.75&0.1\end{bmatrix}^{\top}\) * Ellipsoid: \(h(x)=-x^{\top}Ex+r\) with \(E=\begin{bmatrix}6&-5\\ -5&6\end{bmatrix}\) and \(r=1\). For the polytopic safe set, it turns out that the optimization problem (47) is always feasible, so (47) is used instead of (49). For the ellipsoidal safe set, however, the optimization problem (50) is not always feasible; thus (51) is used. For each of the three types of safe sets, the following three trajectories are compared. * Nominal \(w=0\): with the nominal controller (53) without disturbance * Proposed: with the proposed risk-aware controller (45), (47) or (51) with \(\rho=500\), subject to Gaussian disturbance with the assumed mean and covariance * Proposed \(w=0\): with the proposed risk-aware controller (45), (47) or (51) with \(\rho=500\), without disturbance The results are shown in Figures 1(a)-1(c). In all cases, the trajectories of the proposed controllers successfully stay inside the safe sets. Comparing the cases "Nominal \(w=0\)" and "Proposed \(w=0\)", we see that when the states are far from the boundary \(h=0\), the trajectories coincide, but as soon as the trajectories come close to the boundary, there are clear differences: the "Proposed \(w=0\)" trajectories keep some distance from the boundaries; those distances are, in fact, \(\sup_{\mathbb{P}\in\mathcal{P}}\mathbb{P}\text{-CVaR}_{\varepsilon}[q^{\top}w]\) and \(\sup_{\mathbb{P}\in\mathcal{P}}\mathbb{P}\text{-CVaR}_{\varepsilon}[Q^{\top}w]\) in the cases of the half-space and polytopic safe sets, respectively. In the case of the ellipsoidal safe set, the distance is not exactly known because the revised controller (51) does not guarantee the satisfaction of the safety constraint. Comparing the cases "Proposed" and "Proposed \(w=0\)", we see the effects of the disturbances. The trajectories of "Proposed" deviate, but stay around "Proposed \(w=0\)". Thanks to the risk-aware design, the trajectories stay inside the safe sets.
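For concreteness, here is a minimal simulation loop for this example, reusing the hypothetical `halfspace_filter` sketch from Section V (so the same ambiguity-set assumption applies); the random seed and the Gaussian noise model follow the assumed mean and covariance above.

```python
# Minimal simulation sketch for the pendulum (52) with the half-space safe set;
# parameter values follow the text, everything else is illustrative.
import numpy as np

dt, alpha, eps = 0.01, 0.8, 0.3
Sigma_w = np.diag([0.001**2, 0.003**2])
q_h, r_h = np.array([1.125, 1.0]), 0.075

f = lambda s: np.array([s[0] + s[1] * dt, s[1] + np.sin(s[0]) * dt])
g = lambda s: np.array([[0.0], [dt]])
u_nom = lambda s: np.array([-s[0] - np.sin(s[0]) - s[1]])   # eq. (53)

rng = np.random.default_rng(0)
s = np.array([0.3, 0.2])                                    # initial state
for _ in range(int(8 / dt)):                                # duration 8
    u = halfspace_filter(s, q_h, r_h, alpha, eps, Sigma_w, f, g, u_nom)
    w = rng.multivariate_normal(np.zeros(2), Sigma_w)
    s = f(s) + g(s) @ u + w                                 # eq. (52)
    # h(s) = q_h @ s + r_h should stay nonnegative with high probability
```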
Next, the proposed controller is compared with the standard control-barrier-function-based controller for the polytopic safe set described above. The standard controller essentially sets \(\sup_{\mathbb{P}\in\mathcal{P}}\mathbb{P}\text{-CVaR}_{\varepsilon}[Q^{\top}w]=0\) in (47), which is equivalent to using the expected value instead of the worst-case CVaR, i.e., disregarding the uncertainties in the controller design. The result is shown in Figure 2. Since the trajectory of the "Standard \(w=0\)" case runs along the boundary \(h=0\), the trajectory of the "Standard" case enters the unsafe set quite often due to the disturbance. Thus, we see that considering only the expected value for stochastic systems may not be suitable for safety-critical systems; the trajectory of the proposed approach may still enter the unsafe set, but its risk is quantified and limited by design.

## VII Conclusions

This paper presented a risk-aware control approach to enforce safety that integrates the worst-case CVaR with the control barrier function for discrete-time nonlinear systems with stochastic uncertainties. The approach is based on several useful findings about the worst-case CVaR and effectively integrates safe sets while accounting for tail risk in the controller design. More specifically, we formulated specific computational problems to compute control inputs for three different safe sets: a quadratic program for half-space and polytopic sets and a semidefinite program for an ellipsoidal set. Our validation with the inverted pendulum confirmed the effectiveness of the approach and demonstrated improved performance over existing methods. Future work should explore extensions to event- and self-triggered control to reduce resource usage, as discussed in [35].
2305.05394
Matching partition functions of deformed JT gravity and the cSYK model
Motivated by recent analogies between the large-$q$ cSYK model and charged black holes, we aim to find a concrete gravitational theory with a matching partition function. Our main focus is to match the thermodynamics of the $(0+1)$-dimensional cSYK model with that of a $(1+1)$-dimensional gravitational model. We focus on a model of deformed JT gravity, characterized by an unknown dilaton potential function and an unknown dilaton-to-Maxwell field coupling. By finding the general solutions, we are able to find the Lagrangian which produces the same partition function and equation of state as that of the considered SYK model. We go beyond showing that the thermodynamics overlaps, by also showing that the Lyapunov exponents, characterizing the degree of chaos, overlap close to the second-order phase transition. In the low-temperature rescaled regime, there remain open questions about the Lyapunov exponents, given that our analysis ignores the black hole back action, which can be large in this regime.
Jan C. Louw, Sizheng Cao, Xian-Hui Ge
2023-05-09T12:41:13Z
http://arxiv.org/abs/2305.05394v2
# Matching partition functions of deformed JT gravity and cSYK model

###### Abstract

Motivated by recent analogies between the large-\(q\) cSYK model and charged black holes, we aim to find a concrete gravitational theory with a matching partition function. Our main focus is to match the thermodynamics of the \((0+1)\)-dimensional cSYK model with that of a \((1+1)\)-dimensional gravitational model. We focus on a model of deformed JT gravity, characterized by an unknown dilaton potential function and an unknown dilaton-to-Maxwell field coupling. By finding the general solutions, we are able to find the Lagrangian which produces the same partition function and equation of state as that of the considered SYK model. We go beyond showing that the thermodynamics overlaps by also showing that the Lyapunov exponents, characterizing the degree of chaos, overlap close to the second-order phase transition. In the low-temperature rescaled regime, there remain open questions about the Lyapunov exponents, given that our analysis ignores the black hole back action, which can be large in this regime.

## I Introduction and outline

The Sachdev-Ye-Kitaev (SYK) model is a simple quantum model that proposes a gravity-condensed matter correspondence. One of its key findings is the emergence of conformal symmetry with nearly AdS\({}_{2}\) geometry in its configuration space of reparametrization modes [1], which is also observed in black holes. Both systems are also maximally chaotic [2; 3]. Significant progress has been made in understanding this duality, including the discovery that fluctuations away from conformality are described by a Schwarzian action [4], which is also the boundary theory of Jackiw-Teitelboim (JT) gravity. There is a wealth of literature on the connections between the SYK models and JT gravity [3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15]. The chaotic-integrable transition in the SYK model can be achieved by introducing a generalized SYK model with an additional one-body infinite-range random interaction [16]. This transition is interpreted as the Hawking-Page (HP) phase transition in the bulk gravity [16]. Attempts have been made to extend such holographic analogies to charged black holes by considering complex SYK (cSYK) models [17]. The cSYK model exhibits a second-order phase transition in the maximally chaotic regime, which is believed to be associated with a universality class of phase transitions in spherical Reissner-Nordstrom (RN)-anti-de Sitter (AdS) black holes [18; 19]. On a thermodynamic level, analogies have been drawn between RN black holes and the van der Waals liquid-gas phase transition, and recently also with the phase transition found in the cSYK models [17; 18]. Similar phase transitions can be found in \((1+1)\)-dimensional deformed JT gravity if a dilaton coupling is included [7]. The power laws associated with the continuous phase transition match those of the cSYK model. Given this, it is natural to ask how explicit one can make such analogies. For instance, would it be possible to have a gravitational model with the exact same thermodynamic potential and equation of state? Similar questions can be asked about the Lyapunov exponents reflecting the degree of chaos found in the respective models. In this paper, we give partial answers to these questions. We explore the phase structure of deformed JT gravity and the cSYK model by comparing their partition functions.
Our focus is on the on-shell physics, which corresponds to the solutions that minimize the action and characterize the leading-order thermodynamics. To achieve this, we consider the \(q/2\)-body interacting complex SYK model. One can then derive the exact thermodynamic potential in powers of \(1/q\). On the cSYK side, we place emphasis on the fluctuations away from on-shell, described by the Schwarzian [20], by neglecting the \(q\)-dependent contributions. In the context of holography, the focus is usually placed on these off-shell fluctuations [21; 22]. Typically, these fluctuations cannot be ignored at nonzero temperatures. The resulting action can be expanded around the conformal solution to yield fluctuations described by the Schwarzian action. However, by expanding in \(1/q\), we find that they are sub-leading, in orders of \(1/q\), to the on-shell contributions [20]. The \(1/q\) expansion, however, goes beyond this, also providing information about the harmonic oscillator-like phase, where conformal symmetry is strongly broken [23; 24]. This is because it provides the full phase diagram to leading order in \(1/q\); hence it is not restricted to certain charge densities or low temperatures. As for finding the candidate bulk dual, we start with a rather general model of deformed JT gravity. It is characterized by a dilaton potential energy \(\mathcal{U}\) and a dilaton coupling \(\mathcal{W}\) to Maxwell fields. In the context of the charged SYK model, such a theory has been proposed before as the low-energy dual [8]. Since the focus was placed on the low-energy limit, the considered deformations were power laws. To capture the thermodynamics away from the strictly low-energy limit, we must consider more general deformations \(\mathcal{W}\) and \(\mathcal{U}\). This is possible since, like large-\(q\) cSYK, the generally deformed model admits exact solutions [25]. Starting with some unknown potentials \(\mathcal{W}\) and \(\mathcal{U}\), we calculate various quantities. For instance, we find the general form of the equation of state (EOS), which is related to the Hawking temperature \(T_{\text{H}}\), the Wald entropy \(\mathcal{S}_{\text{W}}\) and the Arnowitt-Deser-Misner (ADM) mass \(M\). We also find the associated Gibbs free energy \(G\). All of these quantities are given in terms of the unknown functions \(\mathcal{U}\) and \(\mathcal{W}\), which we constrain such that we obtain the same thermodynamics as the cSYK model. By expressing the charge density as a function of the entropy, we are able to show that the same thermodynamic relations hold for both models. This relation allows us to identify the enthalpy on the SYK side, while the ADM mass corresponds to the enthalpy on the gravitational side. Equating these two enthalpies, we show that the equations of state also match. This requirement then fixes the potentials, identifying the sought deformation. It is further shown that their partition functions exactly match, \(Z_{\text{cSYK}}=Z_{\text{dJT}}\), in the regimes of interest. With this bulk dual, we go on to describe its gravitational properties, such as its scalar curvature, and how it relates to the condensed matter system. We find two different dictionaries which still provide the same thermodynamics. These correspond to the two different analogies that one can draw between the van der Waals liquid, RN black holes, and the complex SYK model [18; 24].
## II The \(q\)-dependent cSYK model

We start from the \(q/2\)-body interacting cSYK model [26] \[\hat{\mathcal{H}}=J\sum_{\begin{subarray}{c}1\leq i_{1}<\cdots<i_{q/2}\leq N \\ 1\leq j_{1}<\cdots<j_{q/2}\leq N\end{subarray}}X^{i_{1}\cdots i_{q/2}}_{j_{1} \cdots j_{q/2}}c^{\dagger}_{i_{1}}\cdots c^{\dagger}_{i_{\frac{q}{2}}}c_{j_{ \frac{q}{2}}}\cdots c_{j_{i}}, \tag{1}\] with a conserved \(\text{U}(1)\) charge density \(\hat{\mathcal{Q}}=\frac{1}{N}\sum_{i}c^{\dagger}_{i}c_{i}-1/2\), with expectation values \(\mathcal{Q}\in[-1/2,1/2]\), where \(c^{\dagger},c\) are fermionic creation and annihilation operators, respectively. Here \(N\) is the number of lattice sites; hence the thermodynamic limit corresponds to taking \(N\to\infty\). The couplings, \(X\), are complex random variables with zero mean and variance \(\overline{|X|^{2}}=[q^{-1}(q/2)!]^{2}[2/N]^{q-1}\). We will work in the grand canonical ensemble \[Z_{\text{cSYK}}=\text{tr}\,\exp(-\beta[\hat{\mathcal{H}}-\mu N(\hat{\mathcal{ Q}}-1/2)]).\] By considering \(q/2\)-body interactions instead of two-body interactions, one may solve the SYK model exactly, treating \(1/q\) as an expansion parameter. It was first pointed out by Davison et al. in [21] that the equilibrium state described by \(\mathcal{H}\) (1) tends to free fermions, for any non-zero charge density \(\mathcal{Q}=\mathcal{O}(q^{0})\), in the large \(q\) limit. This can be seen in the effective interaction strength \[\mathcal{J}(\mathcal{Q})\equiv J[1-4\mathcal{Q}^{2}]^{q/4-1/2} \tag{2}\] going to zero as \(q\to\infty\). Even for small charge densities, \(\mathcal{J}(\mathcal{Q})\sim e^{-q\mathcal{Q}^{2}}J\). To avoid this tendency, Davison et al. considered an altered Hamiltonian \(H_{\text{ah}}(\beta\mu)\), where the bare system coupling \(J\to J_{\text{ah}}(\beta\mu)\) grows as a function of inverse temperature \(\beta\) and chemical potential \(\mu\) to compensate for the effective suppression. The acquired \(\beta\mu\)-dependence of \(H_{\text{ah}}(\beta\mu)\), however, leads to starkly different thermodynamics from \(\mathcal{H}\) (1), for any \(q\) [23; 27], as discussed in App. E. By not making any changes to the Hamiltonian (1), we preserve the non-trivial thermodynamics at small fluctuations \(\mathcal{Q}=\mathcal{O}(q^{-1/2})\) away from \(\mathcal{Q}=0\) [17]. Remarkably, this unaltered cSYK model leads to a liquid-gas phase diagram which bears a striking resemblance to the small-large black hole phase diagrams found in black hole thermodynamics. The "liquid" and "gaseous" phases reflect their respective (charge) densities. In particular, (1) exhibits a phase transition below a critical temperature \(T_{\text{c}}=\mathcal{O}(q^{-1})\) or critical chemical potential \(\mu_{\text{c}}=\mathcal{O}(q^{-3/2})\) [17], because the temperature is \(q\)-dependent and the scaling transformation given in [21] is broken. Explicitly, the critical point is at \[T_{\text{c}}=2\mathcal{J}(\mathcal{Q}_{\text{c}})/q,\quad\mu_{\text{c}}=6T_{ \text{c}}\mathcal{Q}_{\text{c}},\quad\mathcal{Q}_{\text{c}}=\sqrt{3/(2q)}. \tag{3}\] Regarding the relation to gravity, there are two regimes of interest; the first considers a scaling \(T=q^{-1}\tilde{T}\), \(\mu=q^{-3/2}\tilde{\mu}\), with tilde'd quantities being \(q\)-independent. Around the transition point, the strongly coupled cSYK model dominates due to the relatively small charge densities.
This rescaled regime corresponds to the IR regime, large \(\beta\mathcal{J}\), hence both phases are maximally chaotic, reflected in their Lyapunov exponents saturating the Maldacena-Shenker-Stanford (MSS) bound \(\lambda_{L}\to 2\pi T\) [3]. This feature is shared with black holes. In particular, it is shared by both large and small black hole phases in the extended space [18]. The corresponding phase transition also shares a universality class with that of the cSYK model. Both cases have mean-field critical exponents. We further consider a second rescaled regime, \(T=q^{-2}\tilde{T}\), \(\mu=q^{-2}\tilde{\mu}\), where tilde'd quantities are held fixed as \(q\to\infty\), with the corresponding phase diagram given in fig. 1. The gaseous phase in this regime corresponds to an uncharged, \(\mathcal{Q}=1/q\) (in the large \(q\) limit), and maximally chaotic SYK model.

Figure 1: Schematic phase diagram for our particular deformed JT and cSYK models in different \(q\)-scaling regimes in the large-\(q\) limit. The upper regime encompasses the critical endpoint of the coexistence line. Close to the origin, there is a near-extremal phase transition.

The liquid phase becomes incompressible and has an exponentially small entropy which tends to zero. The incompressibility stems from it reaching a maximal charge density, which is of the order \(\mathcal{Q}=\mathcal{O}(q^{0})\). As noted before, such a large density fully suppresses the SYK interactions, yielding a free non-interacting model. The drop from non-zero to zero entropy is analogous to the black-hole-to-thermal-radiation Hawking-Page transition. In the large \(q\) limit, the jump in charge density from \(1/q\) to \(1/2\), caused by a small perturbation \(\mu_{0}=4J/q^{2}\) to the chemical potential, is reminiscent of spontaneous symmetry breaking. To see how this behavior of the charge density near the coexistence line emerges, we can go back to the first scaling regime and plot the chemical potential \(\widetilde{\mu}\) as a function of the charge density \(\widetilde{\mathcal{Q}}\). One directly finds that, as the temperature \(\widetilde{T}\) decreases, the charge density of the liquid phase \(\widetilde{\mathcal{Q}}_{l}\) goes to infinity, which implies that the corresponding non-rescaled charge density \(\mathcal{Q}_{l}\) is of order \(\mathcal{O}(q^{0})\) as the rescaled temperature \(\widetilde{T}\) goes to zero. At the same time, the charge density of the gas phase vanishes. This phenomenon indicates the jump in the charge density described above. This highlights a difference between the two rescaled regimes. In the first regime, close to the critical point, the liquid and gaseous charge densities are of the same order. As such, for the specific rescaled quantities \((\widetilde{\mu},\widetilde{T},\widetilde{\mathcal{Q}})\), the limit as \(q\to\infty\) is well-defined. In the regime associated with the zero-\(T\) limit, we have two different scalings in the charge density, namely \(\mathcal{Q}=\mathcal{O}(1/q)\) and \(\mathcal{Q}=\mathcal{O}(q^{0})\). As such, the two charge densities diverge from one another. For any finite, but large, \(q\), the phase transition still exists. In the limit \(q\to\infty\), one can, however, argue that this phase transition no longer makes sense due to this diverging separation.
Similarly, in the limit of a spherical to a flat Euclidean space, as the parameter \(k\to 0\) [28], the phase transition also disappears; here the parameter \(k\) is the topological parameter of the RN-AdS black hole [29]. In this sense, one might be able to associate \(k\) with \(1/q\).

## III The deformed JT gravity model

We consider general deformed JT gravity [30] together with coupling to a Maxwell field [25], with action \[I[\varphi,A]=G_{\text{N}}^{-1}\int_{M}\!\!\mathrm{d}^{2}x\sqrt{-g}\mathcal{L }(\varphi,A)+I_{\text{bdy}}.\] Here the boundary action contribution \(I_{\text{bdy}}\), described in App. A.2, regularizes the theory. In \((1+1)\) dimensions, the constant \(G_{\text{N}}\) is dimensionless in natural units \(\hbar=c=1\). Its inverse will play the role of the large parameter \(N\) selecting out the on-shell solution in the classical limit. To have a well-defined limit, we must focus on "intensive" quantities, for instance the intensive bulk Lagrangian density \[\mathcal{L}(\varphi,A)=\frac{\varphi}{4\pi}\mathcal{R}_{2}+P\,\mathcal{U}( \varphi)-\frac{\mathcal{W}(\varphi)}{4}F(A)^{2}, \tag{4}\] instead of \(\mathcal{L}/G_{\text{N}}\). Here \(F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}\) is the electromagnetic tensor and \(\mathcal{R}_{2}\) is the 2-dimensional Ricci scalar. The dilaton \(\varphi\) couples to the electromagnetic field via the term \(\mathcal{W}(\varphi)\). The field also has its own potential energy \(P\mathcal{U}(\varphi)\), where \(P\) is a thermodynamic pressure term. This pressure is associated with a negative cosmological constant [19], which is the pressure of empty space. Since the characteristic length scale is associated with the scalar curvature at the conformal boundary, we would assume it to be related in some way to the interacting contribution of the quantum system. We assume the black hole solution, in the Schwarzschild gauge, takes the form \(ds^{2}=-f(r)dt^{2}+dr^{2}/f(r)\). Solving the Euler-Lagrange equations (see App. A), we find the dilaton field solution (11), \(\varphi=\gamma r\), where \(\gamma\) is the dilaton coupling strength. Setting \(\gamma=1\) amounts to measuring distance in units of \(\gamma\). We also have the emblackening factor (15), \[f(r)/(4\pi)=-M+PV(r)-Q_{\text{B}}A_{t}(r)/2, \tag{5}\] for some integration constant \(M\) and black hole charge \(Q_{\text{B}}\). Here we have defined the anti-derivatives \[V(r)=\int\!\!\mathrm{d}r\,\mathcal{U}(r),\quad A_{t}(r)=Q_{\text{B}}\int\!\! \mathrm{d}r\,\frac{1}{\mathcal{W}(r)}. \tag{6}\] By definition, the event horizon is at the root \(r=r_{\text{H}}\) of \(f(r)\), i.e., \(f(r_{\text{H}})=0\). With this, (5) implies \[M=PV_{\text{H}}+Q_{\text{B}}\Phi_{\text{H}}/2,\quad V_{\text{H}}\equiv V(r_{ \text{H}}),\ \Phi_{\text{H}}\equiv-A_{t}(r_{\text{H}}), \tag{7}\] where we have defined the thermodynamic quantities as (6) evaluated at the horizon \(\varphi_{0}=r_{\text{H}}\). For instance, in black hole chemistry the pressure is conjugate to the volume [31], leading to the identification of \(V_{\text{th}}\) as the thermodynamic volume. We identify, as usual, the Hawking temperature \(T_{\text{H}}\equiv f^{\prime}(r_{\text{H}})/(4\pi)\), which is conjugate to the Wald entropy [32], \(\mathcal{S}_{\text{W}}=r_{\text{H}}\).
As such, the function \(M(\mathcal{S},P,Q_{\text{B}})\) satisfies the differential relation \[dM=\Phi_{\text{H}}dQ_{\text{B}}+V_{\text{th}}dP+T_{\text{H}}d\mathcal{S}_{ \text{W}}, \tag{8}\] which is the first law of (black hole) thermodynamics [19] and also serves to define the thermodynamic volume. Indeed, \(M\) can be identified as the ADM mass [33]. In considering \(\Phi_{\text{th}}\) to be the black hole's chemical potential [25], we may also view \(M\) as an enthalpy.

Figure 2: The chemical potential \(\widetilde{\mu}\) as a function of the charge density \(\widetilde{\mathcal{Q}}\); the dashed lines of different colors represent the physically acceptable solutions, which satisfy the Maxwell area law.

From (8), using \(\mathcal{S}_{\text{W}}=r_{\text{H}}\), we may also obtain the EOS \[T_{\text{H}}=\left(\frac{\partial M}{\partial\mathcal{S}_{\text{W}}}\right)_{P, Q_{\text{B}}}=PV^{\prime}(r_{\text{H}})-\frac{Q_{\text{B}}}{2}A_{t}^{ \prime}(r_{\text{H}}), \tag{9}\] where, unless specified otherwise, derivatives are evaluated keeping \(P\) and \(Q_{\text{B}}\) constant, \(V^{\prime}(r_{\text{H}})\equiv(\partial_{r_{\text{H}}}V(r_{\text{H}}))_{P,Q_{ \text{B}}}\). The thermodynamic potential which selects out the favorable state is the Gibbs free energy [19], \(G(T_{\text{H}},P,Q_{\text{B}})=M-T_{\text{H}}\mathcal{S}_{\text{W}}\). This is identified with the on-shell action (101) of the uncharged black hole dual to the described charged system. All other expressions would remain unchanged if we had instead worked with this uncharged dual from the start. The Gibbs free energy also arises naturally in the dimensionality reduction of \((3+1)\)-dimensional charged black holes [25].

## IV Matching the partition functions

Our goal is to find the gravitational Lagrangian dual to the cSYK model, which is defined by the yet-to-be-determined potentials \(\mathcal{U}\), \(\mathcal{W}\). Equivalently, we may focus on the related anti-derivatives \(V\), \(A_{t}\) defined in (6). We do this by focusing on the large-\(q\) cSYK model's grand potential \[\Omega\equiv-T\ln Z_{\text{cSYK}}/N=E+(1/2-\mathcal{Q})\mu-T\mathcal{S}, \tag{10}\] with the interaction energy [24] \[E\sim-2\epsilon(\mathcal{Q})/q^{2},\quad\epsilon(\mathcal{Q})\equiv\mathcal{ J}(\mathcal{Q})\sin(\pi v/2) \tag{11}\] and entropy density \(\mathcal{S}=\mathcal{S}_{2}(\mathcal{Q})-(\pi v/q)^{2}/2\), as shown in App. C, where \[\mathcal{S}_{2}(x)\equiv-\frac{1-2x}{2}\ln\biggl{|}\frac{1-2x}{2}\biggr{|}- \frac{1+2x}{2}\ln\biggl{|}\frac{1+2x}{2}\biggr{|}, \tag{12}\] which is an even function of \(x\). Here \(v\) is the solution to the closure relation \(\mathcal{J}(\mathcal{Q})/T=\pi v\sec(\pi v/2)\) [20], which is also related to the Lyapunov exponent as \(\lambda_{L}=2\pi Tv\). The phase transition is reflected in the EOS [24, eq.(43)] \[T=\frac{\mu-4\mathcal{Q}\epsilon/q}{2\tanh^{-1}(2\mathcal{Q})}, \tag{13}\] becoming three-to-one for \(T<T_{\text{c}}\), or \(\mu<\mu_{\text{c}}\), where the critical temperature and critical chemical potential scale as \(T_{\text{c}}=\mathcal{O}(1/q)\) and \(\mu_{\text{c}}=\mathcal{O}(q^{-3/2})\). Equation (13) is \(q\)-dependent since, for example, it breaks the scaling symmetry \(T\to T/q^{2}\), \(\mu\rightarrow\mu/q^{2}\), and \(\mathcal{Q}\rightarrow\mathcal{Q}/q\). Note that this equation is invalid for \(\mathcal{Q}=0\), amounting to division by zero, in which case the temperature becomes an independent free parameter.
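As a purely numerical aside, both the closure relation and the EOS (13) are easy to solve with standard root-finding. The following sketch uses our own illustrative parameter values and a damped fixed-point iteration whose convergence we simply assume; it returns the temperature and the chaos parameter \(v\), from which \(\lambda_{L}=2\pi Tv\).

```python
# Numerical sketch: solve the closure relation J(Q)/T = pi v sec(pi v/2) and
# the EOS (13) self-consistently. Parameter values are illustrative.
import numpy as np
from scipy.optimize import brentq

def J_eff(Q, J, q):                         # effective coupling (2)
    return J * (1.0 - 4.0 * Q**2) ** (q / 4.0 - 0.5)

def closure_v(beta_J):                      # v in (0,1), for beta_J = J(Q)/T > 0
    return brentq(lambda v: np.pi * v / np.cos(np.pi * v / 2.0) - beta_J,
                  1e-12, 1.0 - 1e-12)

def eos_temperature(Q, mu, J=1.0, q=40):    # T solving (13) with eps = eps(v(T))
    T = 2.0 * J_eff(Q, J, q) / q            # initial guess near T_c, cf. (3)
    for _ in range(200):                    # damped fixed point; convergence assumed
        v = closure_v(J_eff(Q, J, q) / T)
        eps_Q = J_eff(Q, J, q) * np.sin(np.pi * v / 2.0)   # interaction energy (11)
        T_new = (mu - 4.0 * Q * eps_Q / q) / (2.0 * np.arctanh(2.0 * Q))
        T = 0.5 * T + 0.5 * T_new
    return T, v
```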
One may show that this EOS remains valid for large \(q\), for any polynomial (in \(q\)) scaling of the temperature and chemical potential [17], i.e., the cases we consider. To have matching thermodynamics, we not only require the same thermodynamic potentials \(\Omega,G\), but also matching equations of state. If the quantity \(\Omega+T\mathcal{S}\), \[H\sim-2\epsilon(\mathcal{Q})/q^{2}+(1/2-\mathcal{Q})\mu \tag{14}\] satisfies the same relation as the mass (9), \[T=\left(\frac{\partial H}{\partial\mathcal{S}}\right)_{\mu,J}=\left(\frac{ \partial\mathcal{Q}}{\partial\mathcal{S}}\right)_{\mu,J}\left(\frac{\partial H }{\partial\mathcal{Q}}\right)_{\mu,J}, \tag{15}\] i.e., yields the same EOS, then it can also be identified with the enthalpy. The above relation may be rewritten as \[\beta\left(\frac{\partial H}{\partial\mathcal{Q}}\right)_{\mu,J}=\left(\frac{ \partial\mathcal{S}}{\partial\mathcal{Q}}\right)_{\mu,J}=-2\tanh^{-1}(2 \mathcal{Q})-\partial_{\mathcal{Q}}\frac{(\pi v/q)^{2}}{2} \tag{16}\] where, unless specified otherwise, we assume that \(\mu,J\) are kept constant, meaning that \(v^{\prime}(\mathcal{Q})\equiv\left(\partial_{\mathcal{Q}}v(\mathcal{Q})\right) _{\mu,J}\). Using the closure relation \(\beta\mathcal{J}(\mathcal{Q})\cos(\pi v/2)=\pi v\), the left-hand side of (16) reduces to \[-2\beta\epsilon^{\prime}(\mathcal{Q})/q^{2}-\beta\mu=4\mathcal{Q}\beta\epsilon (\mathcal{Q})/q-\pi^{2}v^{\prime}(\mathcal{Q})v/q^{2}-\beta\mu.\] Finally, from (13), we have \(4\mathcal{Q}\beta\epsilon/q=\beta\mu-2\tanh^{-1}(2\mathcal{Q})\), which leaves the right-hand side of (16), thus completing the proof identifying \(H\) as an enthalpy. Considering (9) and (15), we note that the same equation of state is obtained if we identify the temperatures, entropies, and enthalpies with one another, which then also implies that \(G=\Omega\), since \(G=M-T_{\text{H}}\mathcal{S}_{W}\) and \(\Omega=H-T\mathcal{S}\). This (partial) dictionary is summarized in Table 1.

\begin{table} \begin{tabular}{|c|c|c|} \hline Model & cSYK & dJT \\ \hline large parameter & \(N\) & \(1/G_{\text{N}}\) \\ enthalpy & \(H\) (14) & \(M\) (7) \\ entropy density & \(\mathcal{S}\) & \(\mathcal{S}_{\text{W}}\) \\ temperature & \(T\) (13) & \(T_{\text{H}}\) (9) \\ thermodynamic potential & \(\Omega\) (10) & \(G\) \\ \hline \end{tabular} \end{table} Table 1: Dictionary between the thermodynamics of the \(q\)-dependent cSYK model and deformed JT (dJT) gravity. Each row identifies the two quantities which equate to one another.

With these identifications, one finds not only an isomorphism between the EOSs and thermodynamic potentials, but _equivalent_ partition functions \[Z_{\text{dJT}}=e^{-\beta NG}=Z_{\text{cSYK}}=e^{-\beta N\Omega}.\] Since the thermodynamics is uniquely encoded by the partition function and EOS, we also have the exact phase diagram matching Fig. 1. The same holds true in the maximally chaotic regime, where the phase diagram has been given in [17]. To further specify the dictionary, we consider the differential relations of the two models. For the cSYK model, we have \[\left(\frac{\partial\Omega}{\partial J}\right)_{\mu,T}=\frac{E}{J},\quad \left(\frac{\partial\Omega}{\partial\mu}\right)_{J,T}=\frac{1}{2}-\mathcal{Q} \tag{17}\] while for the gravitational model's Gibbs free energy, we have \[\left(\frac{\partial G}{\partial Q_{\text{B}}^{2}}\right)_{P,T}=\frac{\Phi_{ \text{th}}}{2Q_{\text{B}}},\quad\left(\frac{\partial G}{\partial P}\right)_{Q_{ B},T}=V_{\text{th}}. \tag{18}\]
By comparing these two, we note the two possible options given in Table 2. This choice will _not_ influence the thermodynamics, except for its interpretation on the black hole side. So as not to restrict ourselves to a particular choice, we typically use the notation of the condensed matter side. These two options in fact directly overlap with the two different analogies which can be drawn between the van der Waals liquid, RN black holes, and the charged SYK model [18; 24].

### Equivalence of thermodynamics

Since we know the thermodynamics match, we can consider the equation of state (13) in the context of the black hole's thermodynamics. Below a critical chemical potential, associated with either charge (dictionary II.a) or pressure (dictionary II.b), or below a critical temperature, (13) becomes three-to-one. The Wald entropy is equal to the horizon radius, but also equal to the cSYK entropy, \(r_{\text{H}}=\mathcal{S}(\mathcal{Q})\), given the dictionary in Table 1. As such, the three different charge densities \(\mathcal{Q}\) correspond to three different entropies (horizon radii), i.e., three different states, as plotted in fig. 3. These entropies, \[\mathcal{S}\in\{\mathcal{S}_{\text{large BH}},\mathcal{S}_{\text{unstable BH}},\mathcal{S}_{\text{small BH}}\},\] correspond to three different horizon radii; hence we have a large black hole, a small unstable black hole, and a small stable black hole, as expected from a charged extended-space system [18]. Those with a positive specific heat, corresponding to negative horizon curvature, are the stable phases [30]. These entropies exactly correspond to the gaseous, unstable liquid, and stable liquid phases of the cSYK model, reflected in the different charge densities \(\mathcal{Q}\). These three phases are seen in the three-to-one behavior of the rescaled chemical potential \(\widetilde{\mu}(\mathcal{Q})\) (or temperature) as a function of the entropy \(\widetilde{\mathcal{S}}(\mathcal{Q})\). Between \(\widetilde{\mu}_{1}\) and \(\widetilde{\mu}_{2}\), there are three different phases corresponding to the three different horizon radii. The thermodynamically preferred radius corresponds to the minimum Gibbs free energy among the three. It is important to note that since the partition functions and equations of state exactly overlap, given dictionary I, we are guaranteed to have equivalent thermodynamics for both dual models. This means that they share the same critical exponents, given in Table 3, hence the same universality class. Here we take a moment to describe the thermodynamics from the gravitational perspective. This is done by translating the known results for the \(q\)-body cSYK model [17] into gravitational language via the dictionary. To get some idea of the interpretations on the gravitational side, let us for the moment consider Table 2.b. The charge density is indicated in color on these diagrams and is directly related to the thermodynamic volume of the black hole, \(V_{\text{th}}=1/2-\mathcal{Q}\). Note that when we approach the boundary, we consider smaller values of \(\mathcal{Q}\), corresponding to larger volumes. Various power laws emerge as the critical point \((P_{\text{c}},T_{\text{c}})\) is reached, which can differ from the critical exponents. This is due to a feature well known in the field of statistical mechanics as field mixing [34]. The prototypical example is that of the van der Waals liquid. These _effective_ power laws are still physically relevant.
For instance, the specific heat will diverge as \(C_{P}\propto|T-T_{\text{c}}|^{-2/3}\), i.e., \(\alpha_{P}=2/3\). Given its relation to the Ricci scalar (25), we note that this ensures a finite horizon curvature. It remains well-defined at constant volume, \(C_{V}\propto T\sim t^{0}\), as is common for RN systems [35]. The remaining _effective_ exponents can be obtained from [17] and are listed in Table 3. The equivalence holds over the entire coexistence line, meaning that we have the same thermodynamics also in the regime where the quantum model has a chaotic-to-nonchaotic transition, \(T=\mathcal{O}(q^{-2})\), \(P=\mathcal{O}(q^{-2})\). Here the chaotic phase corresponds to the maximally large black hole, \(r_{\text{H}}=r_{\text{max}}\). The nonchaotic phase on the quantum side corresponds to an evaporated black hole, where the horizon radius goes to zero, \(r_{H}\to 0\). This transition occurs at a pressure \(P_{0}=4Q_{\text{B}}/q^{2}\). We will further consider the degree of chaos, a dynamical property, in the next section.

Figure 3: The red, green, and blue curves represent the three phases of black hole: small, medium, and large, respectively. The dashed line stands for the thermodynamically favorable solution, for which the areas on both sides are the same (Maxwell area law). The yellow and brown lines are for \(\widetilde{T}=\widetilde{T}_{\text{crit}}\) and \(\widetilde{T}>\widetilde{T}_{\text{crit}}\), respectively.

## V Metric dual to the cSYK model

To make the mapping more explicit, we must fully specify the functions \(\mathcal{U}\) and \(\mathcal{W}\) which define the dJT model. Extending the identification \(r_{\text{H}}=\mathcal{S}(\mathcal{Q})\) to all radii, we have the equation \(r=\mathcal{S}(x)\), or the inverse \(x(r)=\mathcal{S}^{-1}(r)\). When evaluated at the horizon, we find the order parameter \(\mathcal{Q}=x(r=r_{\text{H}})\). We perform this inversion in various regimes in App. C.1. Given the above, we can fully specify the functions \(V\) and \(A_{t}\), hence \(\mathcal{U}\) and \(\mathcal{W}\), given (6). In other words, we can fully specify the particular deformation. Using the relations in (6), we have that \[\mathcal{Q}^{\prime}(r_{\text{H}})=\begin{cases}-2/\mathcal{W}(r_{\text{H}})& (a)\\ -\mathcal{U}(r_{\text{H}})&(b)\end{cases}\] which is equal to \(1/\mathcal{S}^{\prime}(\mathcal{Q})\). With this, we note that \(\mathcal{S}^{\prime}(\mathcal{Q})\) measures the coupling to the Maxwell fields given dictionary \((a)\), while dictionary \((b)\) yields the reciprocal dilaton potential. Hence, \((a)\) identifies the \(\text{U}(1)\) charges on the gravitational side with those of the condensed matter side. Using either of the Tables II.a or II.b would yield the enthalpy "functions" \[PV(r)-Q_{\text{B}}A_{t}(r)/2=\mu[1/2-x(r)]-2\epsilon(x(r))/q^{2}. \tag{19}\] Here the second term stems from the relation with the interaction energy density function (11), \[\epsilon(x)\equiv\mathcal{J}(x)\sin(\pi v(x)/2),\quad\mathcal{J}(x)\sim[1-4x^{ 2}]^{q/4}J. \tag{20}\] Using (19), we find the metric corresponding to the cSYK model, defined by the emblackening factor (5) written directly in terms of the dual condensed matter model's parameters, \[f(r)/(4\pi)=\mu[\mathcal{Q}-x(r)]+2\epsilon(\mathcal{Q})/q^{2}-2\epsilon(x(r) )/q^{2}. \tag{21}\] The roots of this function yield the horizons. The largest root is the event horizon \(r_{\text{H}}\) of the black hole, i.e., \(x(r_{\text{H}})=\mathcal{Q}\). The smaller root \(r_{-}\), corresponding to large \(x\), is the Cauchy horizon. For instance, where the interaction energy becomes exponentially small, \(\epsilon(x)\sim e^{-qx^{2}}\), we have a root at \(x(r_{-})=\mathcal{Q}+2\epsilon(\mathcal{Q})/(\mu q^{2})\).
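To make the construction explicit, the following sketch evaluates (21) numerically: for each \(x\) it solves (22)-(23) (quoted below) jointly for \(v(x)\), builds \(\mathcal{S}(x)=\mathcal{S}_{2}(x)-(\pi v(x)/q)^{2}/2\), inverts \(r=\mathcal{S}(x)\) by bisection, and assembles \(f(r)\). The parameter values, the near-critical \(\mu\), the root brackets, and the assumed monotonicity of \(\mathcal{S}\) on \((0,1/2)\) are our own choices.

```python
# Numerical sketch of the emblackening factor (21); illustrative parameters,
# valid root bracketing and monotonicity of S on (0, 1/2) are assumed.
import numpy as np
from scipy.optimize import brentq

q, J = 60, 1.0

def J_eff(x):                       # effective coupling (2)
    return J * (1.0 - 4.0 * x**2) ** (q / 4.0 - 0.5)

Qc = np.sqrt(3.0 / (2.0 * q))       # critical point (3)
mu = 6.0 * (2.0 * J_eff(Qc) / q) * Qc   # near-critical chemical potential

def v_of_x(x):                      # joint solution of (22)-(23) at given x
    def res(v):
        eps = J_eff(x) * np.sin(np.pi * v / 2.0)
        T = (mu - 4.0 * x * eps / q) / (2.0 * np.arctanh(2.0 * x))
        return np.pi * v / np.cos(np.pi * v / 2.0) - J_eff(x) / T
    return brentq(res, 0.0, 1.0 - 1e-12)

def S(x):                           # entropy density S2(x) - (pi v / q)^2 / 2
    S2 = (-(1 - 2 * x) / 2 * np.log((1 - 2 * x) / 2)
          - (1 + 2 * x) / 2 * np.log((1 + 2 * x) / 2))
    return S2 - (np.pi * v_of_x(x) / q) ** 2 / 2.0

def eps_int(x):                     # interaction energy density function (20)
    return J_eff(x) * np.sin(np.pi * v_of_x(x) / 2.0)

def x_of_r(r):                      # invert r = S(x)
    return brentq(lambda x: S(x) - r, 1e-6, 0.5 - 1e-9)

def f_metric(r, Q):                 # emblackening factor, eq. (21)
    xr = x_of_r(r)
    return 4.0 * np.pi * (mu * (Q - xr) + 2.0 * (eps_int(Q) - eps_int(xr)) / q**2)
```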
As before, the temperature is obtained from the function \(f^{\prime}(r_{\text{H}})\). For other values of \(r\), we define the function \[\mathcal{T}(x)\equiv\frac{f^{\prime}(r(x))}{4\pi}=\frac{\mu-4x\epsilon(x)/q}{ 2\tanh^{-1}(2x)} \tag{22}\] where \(T=\mathcal{T}(\mathcal{Q})\). With this, the closure relation becomes \[\mathcal{J}(x)/\mathcal{T}(x)=\pi v(x)\sec(\pi v(x)/2). \tag{23}\] Solving (23) in the limiting cases, we find \[\frac{\epsilon(x)}{\mathcal{J}(x)}\sim\begin{cases}1+\mathcal{O}(\mathcal{T} ^{2}(x)/\mathcal{J}^{2}(x)),\\ \frac{\mathcal{J}(x)}{\mu}\,\tanh^{-1}(2x),\end{cases}\text{for }x=\mathcal{O}(q^{ 0}). \tag{24}\] Evaluating (22) at the horizon, where \(x(r_{\text{H}})=\mathcal{Q}\), yields the cSYK EOS (13), as expected from a dual theory. Evaluated at the horizon, the curvature may be written as \[\mathcal{R}_{2}(r_{\text{H}})=-4\pi\left(\frac{\partial T_{\text{H}}}{ \partial\mathcal{S}_{W}}\right)_{\mu,J}=-\frac{4\pi T_{\text{H}}}{C_{\mu}}, \tag{25}\] where \(C_{\mu}\) is the heat capacity at constant chemical potential, \[C_{\mu}\equiv T_{\text{H}}\left(\frac{\partial\mathcal{S}_{W}}{\partial T_{ \text{H}}}\right)_{\mu,J}. \tag{26}\] For \(\mathcal{Q}=\mathcal{O}(q^{0})\), the SYK interactions are suppressed, yielding a near-free system, \(\mathcal{Q}\sim\tanh(\beta\mu/2)/2\), with specific heat \(C_{\mu}\sim 2(\beta\mu)^{2}e^{-\beta\mu}\) as the entropy tends to zero (\(\mathcal{Q}\to 1/2\)). This means that the curvature at the horizon blows up as the dual system becomes a free Fermi gas, as was found in [36]. In this sense, the mapping is a weak-strong duality. An analogy would be how the shear viscosity diverges in the free theories with holographic duals considered in [37; 38]. Given the above discussion, we can now gain an idea of the metric dual to the cSYK model. Recall that the stable phases have positive specific heat. Noting that \(f^{(1)}(r_{\text{H}})=4\pi T_{\text{H}}\) and \(f^{(2)}(r_{\text{H}})=4\pi T_{\text{H}}/C_{\mu}\), we may express the near-horizon emblackening factor as \[f(r_{\text{H}}+\delta)=4\pi T_{\text{H}}\delta\left(1+\delta/(2C_{\mu})\right)+ \mathcal{O}(\delta^{3}). \tag{27}\] As such, the stable phases, above and near the horizon, will have a positive, concave-up emblackening factor.

### Need for an IR cutoff

As \(x\to 1/2\), the interaction energy contributions are fully suppressed, leaving a free theory. As such, we need only invert \(\mathcal{S}_{2}(x)\), which yields \[x(r)\sim\frac{1}{2}-\frac{r}{\ln(1/r)}\quad\xrightarrow{r\to 0^{+}}\quad 1/2. \tag{28}\] We have \(r_{\text{H}}=0\) corresponding to \(\mathcal{Q}=1/2\). We, however, exclude this point from our space, i.e., \(r>0\). From this, we also note that \(r\geq r_{\text{H}}\), i.e., when \(x(r)\leq\mathcal{Q}\). A naive calculation of \(x\), when \(x\) is small, yields the inverse \(x(r)\sim\sqrt{(\ln 2-r)/2}\). The diverging second derivative at \(r=\mathcal{S}(0)\) would also yield a diverging scalar curvature \(\mathcal{R}_{2}(r)=-f^{(2)}(r)\). This simple expression is due to the simplicity of the two-dimensional static metric we have, yielding simple Christoffel symbols. As mentioned in Sec. IV, the EOS (13) is not valid for \(\mathcal{Q}=0\), as seen in its diverging temperature.
This is because, on the condensed matter side, it determines the chemical potential \[\mu=2T\tanh^{-1}(2\mathcal{Q})+4\mathcal{Q}\epsilon/q\] rather than the temperature. As such, \(\mathcal{Q}=0\) directly implies \(\mu=0\), leaving \(T\) a free variable. In the form of (13), we are thus effectively dividing by zero when \(\mathcal{Q}=0\). Since this corresponds to a zero-charge SYK model, this point, \(r=\ln 2\), is also where the EOS (13) fails. We fix this by limiting our scope to small but non-zero charge densities. On the gravitational side, this means that we consider a minimal \(x=x_{\text{min}}\neq 0\). This is equivalent to introducing an IR cutoff radius \(r_{\text{max}}\). The square root is then modified to \[x(r)\sim\sqrt{x_{\text{min}}^{2}+(r_{\text{max}}-r)/2}\quad \xrightarrow{r\to r_{\text{max}}}\quad x_{\text{min}} \tag{29}\] As such, to have a well-defined theory, we should have some non-zero minimum value \(\mathcal{Q}=x_{\text{min}}\). Such a minimum appears when considering a particular IR cutoff \(r_{\text{max}}\). We choose this cutoff such that our theory satisfies two conditions: _(I)_ Given the cutoff, we have access to the full liquid-gas coexistence line of the SYK model. _(II)_ The scalar curvature \(\mathcal{R}_{2}(r_{\text{max}})\) remains finite for any finite value of \(q\). This condition would, for instance, be violated by an emblackening factor \(f(r)\propto\sqrt{r_{\text{max}}-r}\), which has both a diverging temperature function (related to \(f^{\prime}(r)\)) and a diverging scalar curvature (related to \(f^{(2)}(r)\)). One choice of cutoff is such that we include the minimum charge density which occurs along the coexistence line of the cSYK model, \(x_{\text{min}}=1/q\) [17]. Given that the entropy function relates \(x\) to the radius, we substitute this value to find \[r_{\text{max}}=\ln 2-\frac{4+\pi^{2}+\mathcal{O}(q^{-2})}{2q^{2}}. \tag{30}\] Note that for both the first and second rescaled regimes, \[\mu=q^{-3/2}\tilde{\mu}=\mathcal{O}(q^{-3/2}),\quad\mu=q^{-2}\tilde{\mu}= \mathcal{O}(q^{-2}) \tag{31}\] we are guaranteed a small temperature function at the cutoff, \[\mathcal{T}(x_{\text{min}}=1/q)\sim q\mu/4-J/q+\mathcal{O}(q^{-3/2}). \tag{32}\] Further motivations for this choice are provided in App. D. From the above, we also note the endpoint of the coexistence line, \(\mu_{0}=4J/q^{2}\), corresponding to zero temperature. At the boundary \(\mathcal{T}(x_{\text{min}})\), (26) is given by \(\beta C_{\mu}\sim 16/\mu\), for \(\mu\) of order \(q^{-3/2}\) or lower. Now using (26), we find the scalar curvature \(\mathcal{R}_{2}(r_{\text{H}})=-4\pi T/C_{\mu}\), yielding the boundary curvature \(\mathcal{R}_{2}(r_{\text{max}})=-\pi q^{3}\mu/4\), which is indeed finite for any finite \(q\); hence our chosen cutoff satisfies condition _(II)_. The dictionary II.b is required if we wish to identify the pressure with the cosmological constant, as is standard in black hole chemistry [19]. A different cutoff would yield a different curvature. Since the cutoff is not unique, one could view this specific cutoff as being the most appropriate in that it yields the expected curvature. From the above, we note that in the near-extremal limit we are left with \(f(r)\propto-\pi qJ(r-r_{\text{H}})^{2}\). Close to the cutoff, for small \(\delta=q^{2}(r-r_{\text{max}})\), we have (42) \(f^{\prime}(r)=q\pi\mu(1-\delta/2)^{-1/2}-q\pi\mu_{0}\), implying the emblackening factor \[f(r)=f(r_{\text{max}})+q\pi[\mu-\mu_{0}]\delta/2+\frac{q\pi\mu}{8}\delta^{2}+ \mathcal{O}(\delta^{3}) \tag{33}\] working to explicit order \(\mathcal{O}(q^{-1})\). There are multiple other choices of cutoff that would satisfy both of the above conditions. One could also consider UV cutoffs to regularize the theory at smaller distances.
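Continuing the numerical sketch above (same illustrative \(q\), \(J\), \(\mu\), and the functions `S`, `J_eff`, `v_of_x`), the cutoff (30) and the boundary temperature (32) can be checked directly; agreement is only expected to leading order in \(1/q\).

```python
# Quick check of (30) and (32), reusing S, J_eff and v_of_x from the previous
# sketch; approximate agreement to leading order in 1/q is expected.
import numpy as np

x_min = 1.0 / q
v_min = v_of_x(x_min)
T_min = (mu - 4.0 * x_min * J_eff(x_min) * np.sin(np.pi * v_min / 2.0) / q) \
        / (2.0 * np.arctanh(2.0 * x_min))                   # T(x_min), eq. (22)
print("S(1/q)         =", S(x_min))                         # exact cutoff radius
print("eq. (30)       =", np.log(2.0) - (4.0 + np.pi**2) / (2.0 * q**2))
print("T(x_min) exact =", T_min)
print("eq. (32)       =", q * mu / 4.0 - J / q)             # leading terms
```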
## VI Comparison of Lyapunov Exponents

In this section, we wish to compare the dynamical properties of the two models with matching thermodynamics. While the choice of a particular dictionary in Table 2 does not affect the thermodynamics, the same cannot be said about the dynamics. This is because we are choosing which cSYK term should be identified with the electric field. Here we will consider both cases. We focus on their Lyapunov exponents, measuring the sensitivity to initial conditions. We write the Lyapunov exponent as \(\lambda_{L}=2\pi vT\). For the SYK model, \(v\) is the solution to the closure relation (23), \(\beta\mathcal{J}(\mathcal{Q})=\pi v\sec(\pi v/2)\). In the maximally chaotic regime \(T=q^{-1}\tilde{T}\), \(\mu=q^{-3/2}\tilde{\mu}\), with tilde'd quantities being \(q\)-independent, it is solved by \[v=1-2q^{-1}\tilde{T}/\mathcal{J}(\mathcal{Q})+\mathcal{O}(q^{-2}) \xrightarrow{q\to\infty}1. \tag{34}\] The liquid phase becomes near-integrable in the second rescaled regime, \(\beta=q^{2}\bar{\beta}\), \(\mu=q^{-2}\bar{\mu}\), where barred quantities are held fixed as \(q\to\infty\). In this same regime, the gaseous phase remains maximally chaotic. The tendency to integrability is driven by the large charge density \(\mathcal{Q}=\mathcal{O}(q^{0})\), which suppresses the effective coupling, \(\mathcal{J}(\mathcal{Q})\sim Je^{-q\mathcal{Q}^{2}}\), leading to an exponentially small Lyapunov exponent, \(v=q^{2}\bar{\beta}\mathcal{J}(\mathcal{Q})/\pi\xrightarrow{q\to\infty}0\). For a non-extremal black hole, the maximal Lyapunov exponent is usually given by the surface gravity \(\kappa=f^{\prime}(r_{\text{H}})/2=2\pi T_{\text{H}}\) [39], which is the MSS bound [3]. We find \(\lambda_{L}\) by focusing on the near-horizon trajectory of a charged particle close to the black hole. The corresponding equations of motion are [40] \(\dot{r}=\pi_{r}f\), \(\dot{t}=-[\pi_{t}+Q_{\text{e}}A_{t}]/f\) and \[\dot{\pi}_{r}=-\pi_{r}^{2}/(2f^{\prime})-\dot{t}^{2}f^{\prime}/2-Q_{\text{e}}A_ {t}^{\prime}\dot{t}, \tag{35}\] where \(\pi_{t}\) and \(\pi_{r}\) are the \(t\) and \(r\) components of the particle momentum, respectively. The particle's charge is given by \(Q_{\text{e}}\) and \(A_{t}=-\Phi\). Note that we are focusing on the particle's geodesic for a non-dynamical metric. As such, an implicit assumption is that the particle's back-reaction on the metric can be ignored. The two-velocity's normalization condition, \(\dot{x}_{\nu}\dot{x}^{\nu}=-1\) for massive particles, implies that \(1=f\dot{t}^{2}-\dot{r}^{2}/f\). Substituting the above expressions leaves the two solutions \(\dot{t}=\pm\sqrt{\pi_{r}^{2}+1/f}\).
Using this, the equations of motion of \(\mathbf{\rho}=(r,\pi_{r})\) are \(\partial_{t}\mathbf{\rho}=\dot{\mathbf{\rho}}/\dot{t}=\mathbf{F}(\mathbf{\rho})\), with \[F_{1}(\mathbf{\rho}) =\frac{\pi_{r}f}{\sqrt{\pi_{r}^{2}+1/f}},\] \[F_{2}(\mathbf{\rho}) =-\frac{\pi_{r}^{2}/f^{\prime}}{2\sqrt{\pi_{r}^{2}+1/f}}-f^{\prime }\frac{\sqrt{\pi_{r}^{2}+1/f}}{2}-Q_{\text{e}}A_{t}^{\prime}.\] We next linearize these equations around the fixed point \(\mathbf{\rho}_{0}\), \(\mathbf{F}(\mathbf{\rho}_{0})=0\), to first order, \(\mathbf{F}(\mathbf{\rho})=K(\mathbf{\rho_{0}})(\mathbf{\rho}-\mathbf{\rho_{0}})\), where \[K(\mathbf{\rho_{0}})=\begin{bmatrix}\partial_{r}F_{1}&\partial_{\pi_{r}}F_{1}\\ \partial_{r}F_{2}&\partial_{\pi_{r}}F_{2}\end{bmatrix}_{\mathbf{\rho}=\mathbf{\rho_{0}}} \tag{36}\] is the Jacobian matrix. For slight perturbations away from a fixed point, the dynamics is described by \(\mathbf{\rho}=e^{tK(\mathbf{\rho}_{0})}\mathbf{\rho}_{0}\). In terms of the phase space \((r,\pi_{r})\), we have a fixed point at \(\pi_{r}=0\) and, for massive particles, the additional condition that \[Q_{\text{e}}=-\frac{f^{\prime}(r_{i})}{2f(r_{i})^{1/2}A^{\prime}_{t}(r_{i})}. \tag{37}\] From here we can either find the corresponding initial \(r_{i}\) given a charge \(Q_{\text{e}}\), or we can consider any \(r_{i}\) and set the charge accordingly. The results are equivalent, but the analysis is simpler for the latter. For massive particles, the matrix \(K\) is off-diagonal, \(K_{11}=K_{22}=0\), with \(K_{12}=f^{3/2}\) and \[K_{21}=f^{-3/2}\left[(f^{\prime}/2)^{2}-Q_{\text{e}}A_{t}^{(2)}f^{3/2}-ff^{(2) }/2\right]. \tag{38}\] It has eigenvalues \(\lambda_{\pm}=\pm\sqrt{K_{12}K_{21}}\), where the largest eigenvalue is the Lyapunov exponent \(\lambda_{+}\). To get a measure of how much of the MSS bound is saturated, we focus on \(v_{\text{dJT}}\equiv\lambda_{+}/\kappa\), explicitly given by \[v_{\text{dJT}}=\frac{f^{\prime}(r_{i})}{f^{\prime}(r_{\text{H}})}\sqrt{1+ \frac{2f(r_{i})}{f^{\prime}(r_{i})}\left[\frac{A_{t}^{(2)}(r_{i})}{A^{\prime}_ {t}(r_{i})}-\frac{f^{(2)}(r_{i})}{f^{\prime}(r_{i})}\right]}, \tag{39}\] which is \(1\) if the system is maximally chaotic, in the sense of saturating the MSS bound. Using the near-horizon emblackening factor (27), we find \[\frac{2f(r_{\text{H}}+\delta)}{f^{\prime}(r_{\text{H}}+\delta)}\sim\delta \frac{2C_{\mu}+\delta}{C_{\mu}+\delta},\quad\frac{f^{(2)}(r_{\text{H}}+\delta) }{f^{\prime}(r_{\text{H}}+\delta)}\sim\frac{1}{C_{\mu}+\delta}. \tag{40}\] Let us further assume that \[\frac{A_{t}^{(2)}(r_{\text{H}}+\delta)}{A^{\prime}_{t}(r_{\text{H}}+\delta)}= \frac{1}{\Phi^{\prime}_{\text{H}}(r_{\text{H}})/\Phi^{(2)}_{\text{H}}(r_{ \text{H}})+\delta} \tag{41}\] is of order \(\mathcal{O}(\delta^{0})\). If we now take the limit \(\delta\to 0\), without specifying any dependence on \(q,T_{\text{H}}\), we get one of two results. For \(T\neq 0\), we have a non-extremal black hole and \(v_{\text{dJT}}(\delta)=\sqrt{1+\mathcal{O}(\delta)}\to 1\). In other words, at finite \(\beta\), we obtain a Lyapunov exponent saturating the MSS bound. This holds in both phases, which agrees with the Lyapunov exponents of the gaseous and liquid phases in the rescaled regime \(T=q^{-1}\tilde{T}\), \(\mu=q^{-3/2}\tilde{\mu}\) of the cSYK model [17].
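Formula (39) is also easy to evaluate numerically for any candidate \(f\) and \(A_{t}\); the following finite-difference sketch is generic (step sizes are arbitrary choices, and smooth inputs with \(f^{\prime}(r_{\text{H}})\neq 0\) are assumed).

```python
# Finite-difference sketch of v_dJT, eq. (39); f and A_t are user-supplied
# callables, r_H is the event horizon, r_i the fixed-point radius.
import numpy as np

def d1(F, r, h=1e-6):                         # central first derivative
    return (F(r + h) - F(r - h)) / (2.0 * h)

def d2(F, r, h=1e-5):                         # central second derivative
    return (F(r + h) - 2.0 * F(r) + F(r - h)) / h**2

def v_dJT(f, A_t, r_H, r_i):
    fp_i, fp_H = d1(f, r_i), d1(f, r_H)
    bracket = d2(A_t, r_i) / d1(A_t, r_i) - d2(f, r_i) / fp_i
    return (fp_i / fp_H) * np.sqrt(1.0 + (2.0 * f(r_i) / fp_i) * bracket)
```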
### Near-extremal case

We now wish to compare with the results in the second rescaled regime, \(T=q^{-2}\bar{T}\), \(\mu=q^{-2}\bar{\mu}\). As \(T_{\text{H}}\to 0\), so does the specific heat, meaning that \[\frac{2f(r_{\text{H}}+\delta)}{f^{\prime}(r_{\text{H}}+\delta)}\to\delta,\quad \frac{f^{(2)}(r_{\text{H}}+\delta)}{f^{\prime}(r_{\text{H}}+\delta)}\to\frac{ 1}{\delta}. \tag{42}\] With this, (39) reduces to \[v_{\text{dJT}}(\delta)=\sqrt{1+\delta[\mathcal{O}(\delta^{0})-\delta^{-1}]} \to 0, \tag{43}\] corresponding to an extremal black hole with emblackening factor \(f(r)=f^{(2)}(r_{\text{H}})\delta^{2}/2\). An exception to the above occurs if the electrical potential contribution leads to a perfect cancelation such that \(v\) remains equal to \(1\). We have assumed that (41) remains of order \(\delta^{0}\). To assess the validity of this assumption, we calculate (41) for both possible dictionaries, II.a and II.b. We write this as deviations from the specific heat, \[C^{(a/b)}\equiv\frac{\Phi^{(1)}_{\text{th}}(r_{H})}{\Phi^{(2)}_{\text{th}}(r_{ H})}=C_{\mu}-\delta^{(a/b)}_{\pm} \tag{44}\] Here we use \(a/b\) to denote the cases given the two dictionaries in Table 2. In this notation, a perfect cancelation will occur if \(\delta^{(a/b)}_{+}\) goes to zero. Here \(\Phi_{\text{th}}\propto\mathcal{Q}-1/2\) for dictionary II.a and \(\Phi_{\text{th}}\propto E=H+(\mathcal{Q}-1/2)\mu\) for dictionary II.b. With this, we have \[\Phi^{(1)}_{\text{th}}(r_{H})=\begin{cases}\mathcal{Q}^{\prime}(r_{H})&(a)\\ T+\mu\mathcal{Q}^{\prime}(r_{H})&(b)\end{cases}\] where we have used the enthalpy relation \(\partial_{\mathcal{S}}H=T\). Further, recalling that \(C_{\mu}=T\partial_{T}\mathcal{S}\), we have the second derivatives \[\Phi^{(2)}_{\text{th}}(r_{H})=\begin{cases}\mathcal{Q}^{(2)}(r_{H})&(a)\\ T/C_{\mu}+\mu\mathcal{Q}^{(2)}(r_{H})&(b),\end{cases}\] where \(\mathcal{Q}^{(2)}(\mathcal{S})=-\mathcal{S}^{(2)}(\mathcal{Q})/\mathcal{S}^{ \prime}(\mathcal{Q})^{3}\). For the non-interacting system, we find \(\mathcal{S}^{\prime}_{0}=-\bar{\beta}\bar{\mu}\) and \(\mathcal{S}^{(2)}_{0}=-(\bar{\beta}\bar{\mu})^{2}C^{(0)}_{\mu}\). As such, without any interactions, one finds that (44) is exactly equal to \(C^{(0)}_{\mu}\), in other words, the same term as in (40). However, there are still contributions stemming from the interactions. Now, for the lower boundary \(\mathcal{Q}\to 1/2\), where \(C^{(0)}_{\mu}\sim 2(\bar{\beta}\bar{\mu})^{2}e^{-\bar{\beta}\bar{\mu}}\), \[\delta^{(a)}_{-}\sim\frac{(\pi v/2)^{2}}{2}(\bar{\beta}\bar{\mu})^{2},\quad \delta^{(b)}_{-}\sim C^{(0)}_{\mu}/2\] and for the upper boundary \(\mathcal{Q}\to 1/q\), we find \[\delta^{(a/b)}_{+}\sim 2\frac{v-1}{q}+\frac{2(2+\pi^{2})/q^{2}}{\pi^{4}-4\pi^{2}-2} \tag{45}\] From the above, a perfect cancelation occurs in the larger black hole if we first take the \(q\to\infty\) limit in (45). This then implies that the large black hole is still maximally chaotic, in the sense that \(v_{\text{dJT}}\to 1\). This result then matches the gaseous Majorana-like (\(\mathcal{Q}=0\)) SYK phase at low temperature. The same can happen in the smaller black hole, depending on how the limit is taken. The smaller black hole seems rather badly behaved in terms of the emblackening factor. Especially when considering the black hole charge to be the conjugate driving the phase transition, one should also consider a possible free AdS phase. In other words, one should perform an analysis similar to that of Hawking and Page [41], examining the free energy of the pure AdS solutions to determine when and how this crossover occurs. This would modify the interpretation of the low-temperature regime.
Given the above analysis, one should also note its possible limitations. These lie in the fact that for the extremal black hole the charge of the test particle (37) tends to diverge at the fixed point. For such a diverging charge, it is unlikely that one can ignore the back-reaction of the charged particle [42].

## VII Conclusion

Analogies between RN-AdS black holes with spherical event horizons and the van der Waals liquid have been drawn in the past due to their similar phase structure [18]. However, on the dual field theory side, there is a lack of equivalent holographic descriptions in the literature. In this work, we provided such a holographic description, between the \((0+1)\)-dimensional cSYK model and \((1+1)\)-dimensional JT gravity with a particular deformation. In particular, we have provided a deformed JT gravitational model with a partition function matching that of the \(q/2\)-body interacting cSYK model at large \(q\). Moreover, together with matching equations of state, we have an exact equivalence of the thermodynamics. We achieved this by introducing a deformed JT gravity model characterized by a dilaton potential \(\mathcal{U}(\varphi)\) and a dilaton-to-Maxwell field coupling \(\mathcal{W}(\varphi)\), and deriving the black hole metric in terms of the physical quantities of the cSYK model. One of the original reasons for believing that the SYK model should have a holographic dual was its maximal Lyapunov exponent, which is also found in gravitational theories [2]. As such, we went beyond the thermodynamic description and also considered the chaotic nature of the black hole. Close to the second-order phase transition, both the liquid and gaseous phases of the cSYK model are maximally chaotic. We estimated the Lyapunov exponent on the gravitational side via linear stability analysis. This indicated the standard maximal Lyapunov exponents associated with both the large and small black hole phases. As such, in the first rescaled regime of the phase diagram, we not only found the same thermodynamics but also the same Lyapunov exponents. It is known that the Lyapunov exponents of black holes in the extremal limit tend to zero. This is a side effect of the bound \(2\pi T\) tending to zero, since the extremal limit corresponds to the zero-temperature limit. As such, we focused on the ratio \(v=\lambda_{L}/(2\pi T)\). For the cSYK model, the gaseous phase would remain maximally chaotic, \(v=1\), while the liquid phase becomes regular, \(v\to 0\). Depending on the choice of dictionary, and on how the limits are taken, one can get different results for the small and large black holes. This highlights the need for a more in-depth analysis taking the black hole back-action into account. Open questions also remain in terms of the appropriate UV and IR cutoffs needed to prevent unphysical behavior of the black hole. As an example, in ordinary \((1+3)\)-dimensional black hole chemistry the smaller black hole's radius does not shrink to zero [18]. When setting the charge to zero, the black hole no longer exists at \(T=0\). A natural question is whether some interpretational changes could yield similar results. The provided dictionaries directly overlap with the analogies between the charged SYK model and charged black holes provided in [17]. As such, this paper directly serves as an answer to said paper, by showing that those analogies can indeed be used as dictionaries. In conclusion, our results encourage the use of holography away from the low-temperature regime, i.e., beyond the near-extremal regime.
## Acknowledgements We would like to thank Wenhe Cai, Yicheng Rui, and Rishabh Jha for their helpful discussions. This work is partly supported by NSFC, China (Grant No. 12275166 and No. 11875184) and partly by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - SFB 1073.
2301.01405
Towards the Identifiability in Noisy Label Learning: A Multinomial Mixture Approach
Learning from noisy labels (LNL) plays a crucial role in deep learning. The most promising LNL methods rely on identifying clean-label samples from a dataset with noisy annotations. Such an identification is challenging because the conventional LNL problem, which assumes a single noisy label per instance, is non-identifiable, i.e., clean labels cannot be estimated theoretically without additional heuristics. In this paper, we aim to formally investigate this identifiability issue using multinomial mixture models to determine the constraints that make the problem identifiable. Specifically, we discover that the LNL problem becomes identifiable if there are at least $2C - 1$ noisy labels per instance, where $C$ is the number of classes. To meet this requirement without relying on additional $2C - 2$ manual annotations per instance, we propose a method that automatically generates additional noisy labels by estimating the noisy label distribution based on nearest neighbours. These additional noisy labels enable us to apply the Expectation-Maximisation algorithm to estimate the posterior probabilities of clean labels, which are then used to train the model of interest. We empirically demonstrate that our proposed method is capable of estimating clean labels without any heuristics in several label noise benchmarks, including synthetic, web-controlled, and real-world label noises. Furthermore, our method performs competitively with many state-of-the-art methods.
Cuong Nguyen, Thanh-Toan Do, Gustavo Carneiro
2023-01-04T01:54:33Z
http://arxiv.org/abs/2301.01405v2
# Towards the Identifiability in Noisy Label Learning: A Multinomial Mixture Approach ###### Abstract Learning from noisy labels plays an important role in the deep learning era. Despite numerous studies with promising results, identifying clean labels from a noisily-annotated dataset is still challenging since the conventional noisy label learning problem with a single noisy label per instance is not identifiable, i.e., it does not theoretically have a unique solution unless one has access to clean labels or introduces additional assumptions. This paper aims to formally investigate such an identifiability issue by formulating the noisy label learning problem as a multinomial mixture model, enabling the formulation of the identifiability constraint. In particular, we prove that the noisy label learning problem is identifiable if there are at least \(2C-1\) noisy labels per instance provided, with \(C\) being the number of classes. In light of such a requirement, we propose a method that automatically generates additional noisy labels per training sample by estimating the noisy label distribution based on nearest neighbours. Such additional noisy labels allow us to apply the Expectation-Maximisation algorithm to estimate the posterior of clean labels. We empirically demonstrate that the proposed method is not only capable of estimating clean labels without any heuristics in several challenging label noise benchmarks, including synthetic, web-controlled and real-world label noises, but also of performing competitively with many state-of-the-art methods. ## 1 Introduction The great advances in machine learning, and especially deep learning, in the last decade have created many applications that help to solve increasingly complex problems in computer vision [8, 22], natural language processing (NLP) [3, 38] and reinforcement learning [20, 33]. To achieve such performance, those solutions often rely on high capacity models which are trained on a massive amount of annotated data. Such a large amount of data is often annotated via crowd-sourcing services, such as Amazon Mechanical Turk, or via automated approaches based on NLP or search engines, which might generally produce poor-quality labels, particularly when data is ambiguous. This poor annotation, combined with the fact that deep neural networks can easily overfit to randomly-labelled data [50], might lead to catastrophic failures, especially in some critical applications such as autonomous vehicles or medical diagnostics. Noisy label learning has, therefore, attracted research interest in supervised learning. Some papers have provided a more theoretical investigation of certain types of label noise, such as random label noise [2] or Massart label noise [4], from a statistical machine learning point of view to determine the sufficient number of samples to achieve a certain level of performance. Other papers have focused on more practical aspects of deep-learning methods, leading to two main research directions: (i) the design of loss or regularisation functions that are robust to label noise [11, 43, 50], and (ii) the proposal of heuristics to detect and re-label samples with noisy labels [14, 23]. Despite being effective under certain simulated types of label noise (e.g., symmetric and asymmetric), these approaches tend to be challenged by more natural types of label noise, particularly the ones present in real-world datasets.
Currently, methods relying on semi-supervised learning and heuristics to detect noisy samples have achieved state-of-the-art results in several label noise settings, even results close to those of models trained on "pure" clean data [23]. Nevertheless, those methods still lack a theoretical explanation, especially for the heuristic criteria, e.g., the _small loss hypothesis_, used to detect noisy samples. Indeed, without such heuristics or any further assumptions, the label noise problem does not have a unique clean label solution and hence becomes unidentifiable. It is crucial to know when the label noise problem is identifiable, so we can address it properly. Despite its importance, studies about the identifiability of noisy label learning are still limited to only a few papers [25, 32, 49, 51]. In this paper, we carry out a new study on the challenging identifiability issue, where our aim is to find the condition that makes the label noise problem identifiable, and hence, address it in a principled way. Our contributions can be summarised as follows: * We provide a theoretically rigorous formulation of the identifiability condition of the label noise problem, which concludes that we need multiple additional labels per training sample; in particular, approximately \(2C-1\) labels per training sample are required to make the problem sufficiently identifiable, where \(C\) denotes the number of classes. * We propose a method that automatically generates the required additional noisy labels per training sample, allowing us to apply the Expectation-Maximisation (EM) algorithm to gradually clean noisy labels and train the model of interest simultaneously. Our empirical results show competitive performance to several state-of-the-art methods in many instance-dependent and real-world label noise benchmarks. Our proposed method is tested on many noisy-label learning benchmarks (including synthetic, web-controlled and real-world label noise problems) and shown to successfully estimate clean labels without any heuristics. Furthermore, even though the main goal of this paper is the theoretical investigation of the identifiability condition, our method shows competitive results with several state-of-the-art techniques. ## 2 Related work Noisy label learning has been studied since the 1980s with some early works focusing on the statistical point of view [2, 4], such as determining the number of samples needed to achieve a certain prediction accuracy under certain types of label noise. The field has then attracted more research interest, especially in the era of deep learning where an increasing amount of annotated data is required to train large deep learning models. Noisy label learning received even more attention when Zhang et al. [47] empirically pointed out that any convolutional neural network can easily fit a randomly labelled dataset, showing a severe overfitting capability of deep learning models. There have been numerous studies aiming to propose practical methods to address the noisy label learning problem. One research direction is to design _robust loss functions_ in which training on noisily-labelled data results in the same classifier as if training on the unobserved cleanly-labelled data [11, 43, 50]. Some other methods model the data generation process where the clean label is considered as a latent random variable, allowing the training loss to be corrected [16, 30] or additional modules to be integrated to model the label noise [12]. Another popular research direction is to employ the _small loss hypothesis_ in which training samples with small loss values are assumed to be clean.
Training is then carried out either on only those low-risk samples [14] or is cast as a semi-supervised learning approach with those clean samples representing labelled data while the others denote unlabelled data [23]. Although this line of research achieves state-of-the-art results in several benchmarks, it still lacks a theoretical foundation to explain why the _small loss hypothesis_ is effective. There is one recent attempt that tries to explain the theory behind the _small loss hypothesis_, but it is applicable only to the class-dependent (a.k.a. instance-independent) label noise setting [13]. Despite a large number of already-published and on-going research papers, the identifiability issue in noisy label learning has not been formally studied, but only discussed and partially addressed under certain assumptions [51] or heuristic constraints [6, 49, 25]. To the best of our knowledge, the most relevant study about the identifiability issue in noisy label learning is the on-going (but still unpublished) work that investigates the identifiability of the transition matrix [27]. Liu et al. [27] use results from mixture proportion estimation [32] to derive the identifiability condition for noisy label learning, in which at least 3 "informative" noisy labels per instance are required. This is more optimistic than our result in Claim 1, where we theoretically show that at least \(2C-1\) noisy labels per sample (almost twice as many) are sufficient to make the label noise problem identifiable. Our result agrees with [27] for binary classification (\(C=2\)), but deviates from [27] for multi-class classification. Please refer to Appendix A for a further detailed discussion of the difference. Our work is also connected to the identifiability in mixture models [37, Sec. 3.1], which investigates sufficient (or useful sufficient) conditions to recover the unique parameter of mixture models. Certain mixture models, such as Gaussian, Poisson or negative binomial, are identifiable up to the permutation of labels. However, some others, especially mixtures with discrete distributions, are only identifiable under some conditions. For example, mixtures of binomial or multinomial distributions are identifiable when there is a sufficient number of samples generated from such mixtures [10, 21, 36]. Our work does not focus on mixture models, but casts noisy label learning as a mixture model and imposes the sufficient identifiability condition on the model. ## 3 Background ### Noisy label learning Let \(X\in\mathcal{X}\subseteq\mathbb{R}^{d}\) be a random variable that represents input data and \(Y\) be another random variable denoting the corresponding annotated label. In \(C\)-class classification problems, the label \(Y\) can be represented as a scalar, corresponding to \(\mathcal{Y}\subseteq\mathbb{Z}_{+}\), or as a one-hot vector, corresponding to \(\mathcal{Y}\subseteq\Delta_{C-1}\) (in this paper, we use the label \(Y\) either as a scalar or a probability vector interchangeably), where \(\Delta_{C-1}\) is the probability simplex defined as \(\Delta_{C-1}=\left\{\mathbf{y}\in\mathbb{R}^{C}:\mathbf{1}^{\top}\mathbf{y}=1 \wedge\mathbf{y}_{c}\in[0,1],\forall c\in\{1,\dots,C\}\right\}\). Our aim is to learn a model that maps from \(X\) to \(Y\), by maximising some utility function, e.g., maximum likelihood. Instead of observing the "clean" label \(Y\) of an instance \(X\), in practice, we are often given a noisy label \(\hat{Y}\) that might or might not be the same as \(Y\).
The aim is to learn a good model to predict the clean label of an unseen instance correctly, even though the training dataset, denoted as \(\{(\mathbf{x}_{i},\hat{\mathbf{y}}_{i})\}_{i=1}^{M}\), contains noisy labels. This is often known as noisy label learning or the label-noise problem. One way to model the label noise problem is to consider the clean label \(Y\) as a latent variable and then apply the sum rule of probability to obtain the following: \[p(\hat{Y}|X)=\sum_{c=1}^{C}p(\hat{Y}|X,Y=c)\,p(Y=c|X), \tag{1}\] where \(C\) is the number of classes. In the literature of noisy label learning, the matrix \(T(X)=[p(\hat{Y}=j|X,Y=c)]_{j,c=1}^{C}\) is also known as the transition matrix, representing the probability of flipping the label from one class to another. As the noisy label data is observed, the left-hand side term in Eq. (1) can be estimated. Thus, the clean label probability \(p(Y|X)\) can be easily calculated if \(T(X)\) is known. ### Mixture models A mixture model of \(C\) distributions can be written as: \[p(X)=\sum_{c=1}^{C}\mathbf{\pi}_{c}\mathbb{P}_{c}(X), \tag{2}\] where \(X\in\mathcal{X}\) is a random variable, \(\mathbf{\pi}\in\Delta_{C-1}\) is the mixture coefficient vector in the \(C-1\) probability simplex and \(\{\mathbb{P}_{c}\}_{c=1}^{C}\) is a set of \(C\) distributions (a.k.a. mixture components). Compared to a single distribution, mixture models are more flexible with higher modelling capacity, and hence are widely used to provide a computationally convenient representation of complex data distributions. Some of the most common ones include Gaussian-, Bernoulli- and multinomial-mixture models. And since mixture models are an instance of latent variable models, we can use the Expectation Maximisation (EM) algorithm [7] via maximum likelihood or maximum a posteriori to infer their parameters. ### Identifiability issues in mixture models The study of identifiability investigates whether one may, in principle, recover the exact parameters of the distribution of interest from observed variables. In particular, we are interested in distributions defined in a family \(p(X;\theta)\) over random variable \(X\) parameterised by \(\theta\in\Theta\) where \(\Theta\) denotes a parametric space. The identifiability can be defined as follows: **Definition 1**: \(\forall\theta,\theta^{\prime}\in\Theta\)_: if \(\theta\neq\theta^{\prime}\) then \(p(X;\theta)\neq p(X;\theta^{\prime})\)._ In statistical inference for mixture models, we often encounter the identifiability issue of \(\theta\). For example, if all the \(C\) component distributions in (2) belong to the same parametric family, \(p(X)\) is invariant under \(C!\) permutations by simply swapping the indices of the component distributions, a phenomenon known as _label-switching_. In practice, the identifiability issue due to _label-switching_ (we will refer to this identifiability issue as label-switching from now on) is of no concern since one can impose an appropriate constraint on \(\theta\) to obtain a unique solution. Nevertheless, parameter identifiability up to the permutation of class labels (we will refer to this as identifiability in the remainder of this paper) is still a practical problem, at least in maximum likelihood for mixture models where the distribution components of such mixtures belong to certain distribution families. According to [37, Section 3.1], most mixture models supported on continuous space, e.g., Gaussian mixture models (excluding the mixture of uniform distributions), are identifiable.
However, when the support space is discrete, the identifiability of such mixtures might not always hold. For example, a mixture of Poisson distributions [36] or a mixture of negative binomial distributions [45] is identifiable, while a mixture of binomial distributions is only identifiable under certain conditions [36, Proposition 4]. Another example is multinomial mixture models, which are, according to Theorem 1, identifiable only when the number of trials is at least almost twice the number of class labels. **Theorem 1** (Lemma 2.2 in [21] and Theorem 4.2 in [10]): _If \(\operatorname{Mult}(\mathbf{x};N,\mathbf{p}_{c})\) is a multinomial distribution with \(N\in\mathbb{N}\) being the number of trials and \(\mathbf{p}_{c}\in\Delta_{d-1}\) being the success probability vector of \(d\) categories, then the class of multinomial mixture models:_ \[\mathcal{M}_{N,C}=\left\{M(\mathbf{x}):M(\mathbf{x})=\sum_{c=1}^{C}\mathbf{\pi}_{ c}\operatorname{Mult}(\mathbf{x};N_{c},\mathbf{p}_{c})\right\}\] _is identifiable (up to label permutation) if \(\min_{c\in\{1,\ldots,C\}}N_{c}\geq 2C-1\)._ ## 4 Methodology Given that only the instance \(X\) and its single noisy label \(\hat{Y}\) are available, the noisy label learning problem shown in Eq. (1) is ill-defined, potentially resulting in infinitely many solutions for the clean label \(Y\). Also, if there is a unique solution \((p(\hat{Y}|X,Y),p(Y|X))\), one can freely swap the columns of the transition matrix \(p(\hat{Y}|X,Y)\) and the corresponding rows in the clean label vector \(p(Y|X)\) without changing the distribution of the noisy label \(p(\hat{Y}|X)\). This type of identifiability issue results in \(C!\) solutions and is referred to as _label-switching_, mentioned in Sec. 3.3. In general, there are two types of identifiability issues regarding noisy label learning: (i) the uniqueness of the solution \((p(\hat{Y}|X,Y),p(Y|X))\) up to the label permutation and (ii) the label-switching problem of such a unique solution. In the following subsections, we address the identifiability issue by formulating noisy label learning as a multinomial mixture model and imposing the identifiability condition. We then present a mitigation of the label-switching problem by imposing certain constraints on the initialisation when inferring the posterior of the clean label. ### Noisy label learning as a multinomial mixture The likelihood of the noisy label \(p(\hat{Y}|X)\) presented in Eq. (1) can be considered to be a multinomial mixture model where \(p(\hat{Y}|X,Y=c)=\operatorname{Mult}(\hat{Y};N_{c},\rho_{c})\) is a multinomial component and \(p(Y=c|X)\) is the corresponding mixture coefficient with \(N_{c}\in\mathbb{Z}_{+},\rho_{c}\in\Delta_{C-1}\) and \(c\in\{1,\dots,C\}\). We can, therefore, rewrite the likelihood of the noisy label in the form of multinomial mixture models as follows: \[p\left(\hat{Y}|X;\rho\right)=\sum_{c=1}^{C}p(Y=c|X)\operatorname{ Mult}(\hat{Y};N,\rho_{c}), \tag{3}\] where we assume that \(N_{c}=N,\forall c\in\{1,\dots,C\}\) to simplify the analysis. The case of varying \(N_{c}\) can be handled straightforwardly by considering \(N=\min_{c\in\{1,\dots,C\}}N_{c}\). As noisy label learning can be formulated as a multinomial mixture model shown in Eq. (3), we can employ Theorem 1 to obtain the following claim about the identifiability in noisy label learning: **Claim 1**: _Any noisy label learning problem where the noisy label distribution is modelled as a multinomial mixture model shown in Eq.
(3) is identifiable if there are at least \(2C-1\) samples of noisy label \(\hat{Y}\) for an instance \(X\) with \(C\) being the number of classes._ Given Claim 1, one can derive some interesting results in noisy label learning. For example, conventional noisy label learning has only one annotated noisy label per sample: \(N=1\). Therefore, according to Claim 1, it is unidentifiable for \(C\geq 2\). Another example is that binary classification on noisy labels, corresponding to \(C=2\), is identifiable if there are at least 3 noisy labels per sample, which agrees with studies in the literature of identifiability for noisy label learning [27, 51]. Hence, to address the identifiability issue in noisy label learning, we need at least \(2C-2\) additional noisy labels per instance. One naive way is to annotate more labels, e.g., via crowd-sourcing, to obtain the number of noisy labels per instance that satisfies the identifiability condition in Claim 1. Such an approach is, however, costly, time-consuming and poorly scalable, especially when the number of classes \(C\) is very large. For example, the WebVision dataset [24] with \(C=1,000\) would require at least 1,998 additional noisy labels per sample, resulting in an intractable solution. To obtain additional noisy labels per sample without the need for additional resources, we propose to approximate the complex noisy label distribution \(p(\hat{Y}|X)\) following a data-driven approach that takes the similarity between instance features into account. Our hypothesis is that instances with similar features tend to be annotated similarly, or in other words, similar instances have similar noisy labels. Thus, we can employ the single noisy label per instance available in the training dataset to approximate the noisy label distribution \(p(\hat{Y}|X)\). The approximated distribution is then used to generate many noisy labels that satisfy the identifiability condition specified in Claim 1. Subsequently, the Expectation-Maximisation algorithm is employed to infer the parameter of the multinomial mixture model of noisy label learning in Eq. (3) (refer to Appendix C for the details of EM on multinomial mixtures). ### Approximate noisy label distribution To generate additional noisy labels that satisfy the identifiability condition in Claim 1, we approximate the noisy label distribution of each training sample by exploiting the information of nearest neighbours. The complex noisy label distribution of an instance \(\mathbf{x}_{i}\), denoted as \(p(\hat{Y}|X=\mathbf{x}_{i})\), is simultaneously derived not only from the one-hot noisy label vector \(\hat{\mathbf{y}}_{i}\), but also from the noisy labels of other instances whose features are similar to \(\mathbf{x}_{i}\). Specifically, the approximated distribution can be written as: \[p(\hat{Y}|\mathbf{X}=\mathbf{x}_{i})\approx\mu\hat{\mathbf{y}}_{i}+(1-\mu) \sum_{j\neq i,j=1}^{K}\mathbf{A}_{ij}\hat{\mathbf{y}}_{j}, \tag{4}\] where \(\mu\in[0,1]\) is a hyper-parameter reflecting the tradeoff between the noisy label of the instance and the noisy labels of other instances, \(K\in\{1,\dots,M\}\) is the number of instances considered in the approximation, and \(\mathbf{A}_{ij}\in[0,1]\) is a coefficient representing the similarity between \(\mathbf{x}_{i}\) and \(\mathbf{x}_{j}\). Since \(p(\hat{Y}|\mathbf{X}=\mathbf{x}_{i})\) is a probability distribution, one constraint for \(\mathbf{A}_{ij}\) is that \(\sum_{j\neq i,j=1}^{K}\mathbf{A}_{ij}=1\).
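As a concrete illustration of this construction, the following is a minimal NumPy sketch of Eq. (4) and of drawing the \(N\geq 2C-1\) noisy labels required by Claim 1. It assumes the neighbour weights \(\mathbf{A}_{ij}\) are already available (how we obtain them is described next); the function names and array shapes are illustrative choices, not those of the released implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def approx_noisy_dist(y_onehot_i, y_onehot_nbrs, A_i, mu=0.5):
    """Eq. (4): mix a sample's own noisy label with its neighbours' labels.

    y_onehot_i    : (C,)   one-hot noisy label of instance x_i
    y_onehot_nbrs : (K, C) one-hot noisy labels of the K nearest neighbours
    A_i           : (K,)   non-negative similarity weights summing to one
    """
    return mu * y_onehot_i + (1.0 - mu) * (y_onehot_nbrs.T @ A_i)

def draw_noisy_labels(p_i, C, L=10):
    """Draw L multinomial samples with N = 2C - 1 trials each (Claim 1)."""
    N = 2 * C - 1
    return rng.multinomial(N, p_i, size=L)  # (L, C) count vectors, fed to EM
```

The resulting \((L,C)\) count matrix is exactly the kind of input consumed by the EM step described later in this section.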
There are several ways to find the similarity matrix \([\mathbf{A}_{ij}],\mathbf{A}_{ii}=0,i\in\{1,\ldots,M\},j\in\{1,\ldots,K\}\). For example, [15] employed the sparse subspace clustering method [9] to approximate the label distribution when learning human age from images. In this paper, we use a similar but more efficient method that utilises the nearest neighbour information: locality-constrained linear coding (LLC) [39]. In particular, the coefficient \(\mathbf{A}_{ij}\) can be determined via the following optimisation: \[\min_{\mathbf{A}_{i}}\|\mathbf{x}_{i}-\mathbf{B}_{i}\mathbf{A}_{ i}\|_{2}^{2}+\lambda\|\mathbf{d}_{i}\odot\mathbf{A}_{i}\|_{2}^{2}\] \[\text{s.t.: }\mathbf{1}^{\top}\mathbf{A}_{i}=1,\mathbf{A}_{ij} \geq 0,\forall j\in\{1,\ldots,K\}, \tag{5}\] where \(\mathbf{B}_{i}\in\mathbb{R}^{d\times K}\) is the matrix containing the \(K\) nearest neighbours of instance \(\mathbf{x}_{i}\) (each column is a nearest-neighbour instance), \(\mathbf{A}_{i}=\begin{bmatrix}\mathbf{A}_{i1}&\mathbf{A}_{i2}&\ldots&\mathbf{ A}_{iK}\end{bmatrix}^{\top}\) is the \(K\)-dimensional vector representing the coding coefficients, \(\mathbf{d}_{i}=\exp(\nicefrac{{\mathrm{dist}(\mathbf{x}_{i},\mathbf{B}_{i}) }}{{\sigma}})\) is the locality adaptor with \(\mathrm{dist}(\mathbf{x}_{i},\mathbf{B}_{i})\) being a vector of Euclidean distances from \(\mathbf{x}_{i}\) to each of its nearest neighbours, and \(\sigma\) being used for adjusting the weight decay speed of the locality adaptor. Nevertheless, since our interest is locality, not sparsity, in our implementation we ignore the second term in Eq. (5) by setting \(\lambda=0\). Note that the optimisation in (5) is slightly different from the original LLC due to the additional non-negativity constraint on \(\mathbf{A}_{ij}\). Nevertheless, the optimisation is a quadratic program, and therefore can be efficiently solved by off-the-shelf solvers, such as OSQP [35], which is available for the deep learning framework JAX (see the OSQP solver in JAXopt). Another problem when solving the optimisation in (5) for the coding vector \(\mathbf{A}_{i}\) of each instance \(\mathbf{x}_{i}\) is finding the nearest neighbours of that instance. To obtain nearest neighbours efficiently, we employ FAISS [19], a library for efficient similarity search and clustering of dense vectors written in C++ with Python wrappers and able to run on GPU. In addition, for datasets that contain millions of samples, we randomly sample a subset (about 10,000 samples) and run the nearest neighbour search on that subset. ### Infer clean label posterior with EM Once the noisy label distribution \(p(\hat{Y}|X)\) is approximated, we can generate \(L\) sets of \(N\) noisy labels for each instance, where \(N\geq 2C-1\), and perform maximum a posteriori estimation to infer the parameter of the multinomial mixture model for noisy label learning shown in Eq. (3). In particular, the objective function can be written as: \[\max_{\rho}\ln p(\rho|X,\hat{Y})=\max_{\rho}\ln p(\hat{Y}|X,\rho)+\ln p(\rho| \beta), \tag{6}\] where \(\rho\in\{\rho\in\mathbb{R}^{C\times C}:\rho_{c}\in\Delta_{C-1}\}\) is the matrix of instance \(X\) that contains \(C\) probability vectors as its columns, and \(\beta\) is the parameter of the prior of \(\rho\). Maximising the objective function in (6) is difficult since the log-likelihood \(\ln p(\hat{Y}|X,\rho)\) cannot be evaluated unless the hidden variable \(Y\) is available.
To optimise such a latent variable model, we employ the Expectation-Maximisation algorithm, an iterative optimisation method alternating between two steps. Figure 1: An illustration of the proposed method that consists of 4 steps: (i) extract features, (ii) search for nearest neighbours in the feature space, (iii) approximate the noisy label distribution, and (iv) use EM to obtain pseudo-clean labels. In the E-step, we calculate the expectation of \(\ln p(\rho|X,\hat{Y},Y)\) w.r.t. \(p(Y|X,\hat{Y},\rho^{t})\) as follows: \[Q(\rho,\rho^{t})=\mathbb{E}_{p(Y|X,\hat{Y},\rho^{t})}\left[\ln p( \rho|X,\hat{Y},Y)\right]\] \[=\mathbb{E}_{p(Y|X,\hat{Y},\rho^{t})}\left[\ln p(\hat{Y}|X,Y,\rho) +\ln p(\rho|\beta)\right]+\text{const}.\] \[\propto\mathbb{E}_{p(Y|X,\hat{Y},\rho^{t})}\left[\ln\mathrm{ Mult}(\hat{Y};N,\rho)+\ln p(\rho|\beta)\right], \tag{7}\] where \(\rho^{t}\) denotes the parameter at the \(t\)-th iteration. In the M-step, we maximise \(Q\) to obtain the parameter \(\rho^{t+1}\) used in the next iteration: \[\rho^{t+1}=\arg\max_{\rho}Q(\rho,\rho^{t}). \tag{8}\] The algorithm iterates until \(\left\|\rho^{t+1}-\rho^{t}\right\|\) is sufficiently small. For the prior term, we assume that each probability column vector in \(\rho\) follows a Dirichlet distribution: \[\ln p(\rho|\beta)=\sum_{c=1}^{C}\ln\mathrm{Dir}(\rho_{c};\beta_{c}). \tag{9}\] Such a conjugate prior allows us to calculate both the E-step and M-step exactly for each training sample \(X\) (refer to Appendix C for the detailed derivation of EM used for multinomial mixtures). The clean label posterior \(p(Y|X,\hat{Y},\rho^{t})\) obtained in the E-step at the final iteration can then be used as the soft label for training. This is the main idea of our proposed method, which is visually illustrated in Fig. 1. Despite the apparent simplicity, the procedure is, however, associated with an important drawback related to the approximation of \(p(\hat{Y}|X)\). Initially, it is modelled as a multinomial mixture, but then approximated as a categorical distribution. Thus, the performance would heavily depend on how accurate that approximation is. To overcome this weakness, we propose to slightly modify the procedure by using the clean label posterior \(p(Y|X,\hat{Y},\rho^{t})\) obtained from EM as a pseudo-clean label to substitute \(\hat{Y}\). The "cleaner" \(\hat{Y}\) is then used to approximate \(p(\hat{Y}|X)\) shown in Eq. (4), which, in turn, generates many noisy labels to estimate \(p(Y|X,\hat{Y},\rho^{t})\) via EM. This process is then repeated until the clean label posterior \(p(Y|X,\hat{Y},\rho^{t})\) converges. The intuition of this iterative procedure is to progressively correct the noisy labels in the training set until they become clean. Further details are given in Algorithm 1 below.
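To make the two computational kernels of the procedure concrete before stating Algorithm 1, here is a minimal NumPy/SciPy sketch of the simplex-constrained LLC step of Eq. (5) (with \(\lambda=0\), as in our implementation) and of the MAP-EM updates of Eqs. (7)-(9). The SLSQP solver (a stand-in for OSQP), the symmetric Dirichlet prior, and all names are illustrative choices rather than the exact released implementation.

```python
import numpy as np
from scipy.optimize import minimize

def llc_weights(x_i, B_i):
    """Eq. (5) with lambda = 0: min_A ||x_i - B_i A||^2 s.t. 1'A = 1, A >= 0."""
    K = B_i.shape[1]
    res = minimize(
        lambda a: np.sum((x_i - B_i @ a) ** 2),
        np.full(K, 1.0 / K),                      # uniform starting point
        bounds=[(0.0, None)] * K,                 # non-negativity
        constraints=({"type": "eq", "fun": lambda a: np.sum(a) - 1.0},),
        method="SLSQP",
    )
    return res.x

def em_multinomial_mixture(counts, n_iters=50, beta=2.0):
    """MAP-EM for the per-instance multinomial mixture of Eq. (3).

    counts : (L, C) array of count vectors, one per N-trial multinomial sample.
    beta   : symmetric Dirichlet parameter (Eq. (9)); beta > 1 smooths rho.
    Returns rho (C x C, columns on the simplex, i.e. p(Yhat | Y = c)) and the
    mixture weights pi, the estimate of the clean-label posterior.
    """
    L, C = counts.shape
    # Diagonally dominant initialisation of rho, mitigating label switching.
    rho = np.full((C, C), 0.5 / C) + 0.5 * np.eye(C)
    pi = np.full(C, 1.0 / C)
    for _ in range(n_iters):
        # E-step: responsibilities r[l, c] proportional to
        # pi[c] * prod_j rho[j, c] ** counts[l, j] (multinomial coefficient
        # is constant in c and cancels after normalisation).
        log_r = np.log(pi)[None, :] + counts @ np.log(rho)
        r = np.exp(log_r - log_r.max(axis=1, keepdims=True))
        r /= r.sum(axis=1, keepdims=True)
        # M-step: closed-form MAP updates; the Dirichlet prior adds pseudo-counts.
        pi = r.mean(axis=0) + 1e-12
        pi /= pi.sum()
        rho = counts.T @ r + (beta - 1.0)
        rho /= rho.sum(axis=0, keepdims=True)
    return rho, pi
```

In Algorithm 1 these sketches play the roles of the calls \(\text{LLC}(\mathbf{x}_{i},\mathbf{B}_{i})\) and \(\text{EM}(\hat{\mathbf{Y}}_{i},\eta)\), with \(\pi\) serving as the pseudo-clean soft label.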
```
1:  procedure GraduallyRelabel(\(\mathbf{X},\hat{\mathbf{Y}},K,L,\mu,\eta\))
2:    \(\triangleright\) \(\mathbf{X}\in\mathbb{R}^{d\times M}\): a matrix of instances
3:    \(\triangleright\) \(\hat{\mathbf{Y}}\in\mathbb{R}^{C\times M}\): a matrix of one-hot noisy labels
4:    \(\triangleright\) \(K\): number of nearest neighbours
5:    \(\triangleright\) \(L\): number of \(N\)-trial multinomial samples
6:    \(\triangleright\) \(\mu\): trade-off coefficient
7:    \(\triangleright\) \(\eta\): number of EM iterations
8:    initialise parameters of the feature extractor: \(\theta\)
9:    initialise parameters of the classifier: \(\mathbf{w}\)
10:   \(\theta,\mathbf{w}\leftarrow\texttt{Warm-up}(\mathbf{X},\hat{\mathbf{Y}},(\theta,\mathbf{w}))\)
11:   while \((\theta,\mathbf{w})\) not converged do
12:     \(\Upsilon\leftarrow\varnothing\)  \(\triangleright\) an empty set to store updated labels
13:     for each \((\mathbf{x}_{i},\hat{\mathbf{y}}_{i})\in(\mathbf{X},\hat{\mathbf{Y}})\) do
14:       extract features: \(\phi(\mathbf{x}_{i};\theta)\)
15:       \(\mathbf{B}_{i},\{\hat{\mathbf{y}}_{ij}\}_{j=1}^{K}\leftarrow\text{KNN}(\phi(\mathbf{x}_{i};\theta),K)\)
16:       \(\mathbf{A}_{i}\leftarrow\text{LLC}(\mathbf{x}_{i},\mathbf{B}_{i})\)  \(\triangleright\) Eq. (5)
17:       \(\mathbf{p}_{i}\leftarrow\mu\hat{\mathbf{y}}_{i}+(1-\mu)\sum_{j}\mathbf{A}_{ij}\hat{\mathbf{y}}_{ij}\)  \(\triangleright\) Eq. (4)
18:       sample \(N\times L\) labels: \(\hat{\mathbf{y}}_{iln}\sim\mathrm{Cat}(Y;\mathbf{p}_{i})\)
19:       \(\hat{\mathbf{Y}}_{i}\leftarrow\{\{\hat{\mathbf{y}}_{iln}\}_{n=1}^{N}\}_{l=1}^{L}\)
20:       \(\rho,\pi\leftarrow\text{EM}(\hat{\mathbf{Y}}_{i},\eta)\)  \(\triangleright\) \(\pi=p(Y|X,\hat{Y},\rho^{t})\)
21:       \(\Upsilon\leftarrow\text{Append}(\Upsilon,\pi)\)  \(\triangleright\) store the updated label
22:     update noisy labels: \(\hat{\mathbf{Y}}\leftarrow\Upsilon\)
23:     \(\theta,\mathbf{w}\leftarrow\text{Train}(\mathbf{X},\hat{\mathbf{Y}},(\theta,\mathbf{w}))\)
24:   return \((\theta,\mathbf{w})\)
```
**Algorithm 1** A progressive approach to address label noise As shown in Algorithm 1, the proposed method relies on the extracted features to perform the nearest neighbour search. Thus, if the extracted features are biased, the quality of the nearest neighbours worsens, reducing the effectiveness of the proposed method. To avoid such bias, we follow a co-teaching approach [14] that trains two models simultaneously, where the noisy labels cleaned by one model are used to train the other model and vice versa. ### Overcome label switching Although generating more noisy labels per sample by LLC resolves the identifiability issue to obtain a "unique" solution, we still end up with \(C!\) permutations due to label-switching. This issue is, however, intrinsic to mixture models in general, unless we impose further constraints. In the context of noisy label learning, the noise is often assumed to be non-dominant [25, 49, 51]. In other words, \(\forall c,j\in\{1,\dots,C\}\): \[p(\hat{Y}=c|X,Y=c)\geq p(\hat{Y}=j|X,Y=c), \tag{10}\] which means that the transition matrix is diagonally dominant. Otherwise, the concept of a class label might be completely switched to another class, causing a class mismatch with the ones defined in evaluation sets. One way to integrate the constraint in (10) is to design the prior parameter \(\beta\) to enforce the matrix \(\rho\) to be close to some diagonally dominant matrix, such as the identity matrix.
Imposing such a prior, however, increases the complexity of the objective function in Eq. (6), resulting in complicated fine-tuning of \(\beta\). In the implementation, we observe that simply initialising the parameter matrix \(\rho\) of \(p(\hat{Y}|Y,X)\) as a diagonally dominant matrix gives us quite accurate results. We, therefore, follow this approach as a simple way to mitigate the label-switching issue. ## 5 Experiments We empirically evaluate our method using several noisy label learning benchmarks designed to test the robustness of learning methods to the most realistic type of label noise, namely instance-dependent noise. In particular, our experiments are carried out on both synthetic and real-world instance-dependent label noise benchmarks. In addition, since the focus of our paper is on the theory side of the identifiability in noisy label learning, we show that the proposed method is effective and competitive with other state-of-the-art methods in the literature, but we do not aim to achieve state-of-the-art results by further fine-tuning or employing highly-complex neural network architectures. All the implementation is done in PyTorch and JAX and will be released upon the acceptance of this paper. **Datasets** We evaluate on two types of instance-dependent label noise: synthetic and real-world. For the synthetic noise setting, we use CIFAR-10 and CIFAR-100 as our evaluation datasets, and follow [41] to generate synthetic instance-dependent noisy labels. For real-world label noise, we use three common benchmarks: Controlled Noisy Web Labels (CNWL) [18], mini-WebVision [24] with additional evaluation on the validation set of ImageNet ILSVRC 2012 [31], and Animal-10N [34]. For CNWL, we use the web label noise (or red noise) setting where the labels of internet-queried images are annotated manually. For mini-WebVision, we follow previous works that take a subset containing the first 50 classes in the WebVision 1.0 dataset for training and evaluate on the clean validation set. \begin{table} \begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{**CIFAR-10**} & \multicolumn{2}{c}{**CIFAR-100**} \\ \cline{2-5} Noise rate & **0.2** & **0.4** & **0.2** & **0.4** \\ \hline Cross-entropy & 85.66 & 76.89 & 57.26 & 41.33 \\ Peer loss [28] & 89.52 & 83.44 & 61.13 & 48.01 \\ L\({}_{\text{DMI}}\)[43] & 88.67 & 83.65 & 57.36 & 43.06 \\ L\({}_{\text{q}}\)[50] & 85.66 & 75.24 & 56.92 & 40.17 \\ Co-teaching [14] & 88.84 & 72.61 & 43.47 & 23.20 \\ Co-teaching+ [46] & 89.82 & 73.44 & 41.62 & 24.74 \\ JoCoR [40] & 88.82 & 71.13 & 44.55 & 23.92 \\ Forward [30] & 87.87 & 79.81 & 57.69 & 42.62 \\ T-Revision [42] & **90.31** & 84.99 & 58.00 & 40.01 \\ \hline **Ours** & 89.79 & **85.42** & **63.27** & **56.32** \\ \hline \hline \end{tabular} \end{table} Table 1: Prediction accuracy on instance-dependent noise on CIFAR-10 and CIFAR-100 where results are as reported in [51].
\begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline **Method** & \multicolumn{4}{c}{**CIFAR-10**} & \multicolumn{4}{c}{**CIFAR-100**} \\ \cline{2-9} & **0.2** & **0.3** & **0.4** & **0.5** & **0.2** & **0.3** & **0.4** & **0.5** \\ \hline PTD-R-V [41]\({}^{1}\) & 76.58 & 72.77 & 59.50 & 56.32 & 65.33 & 64.56 & 59.73 & 56.80 \\ kMEIDTM [6]\({}^{1}\) & **92.26** & **90.73** & 85.94 & 73.77 & 69.16 & 66.76 & 63.46 & **59.18** \\ HOC global [51]\({}^{2}\) & 89.71 & - & 84.62 & - & 68.82 & - & 62.29 & - \\ HOC local [51]\({}^{2}\) & 90.03 & - & 85.49 & - & 67.47 & - & 61.20 & - \\ **Ours (with DINO)** & 91.16 & 89.67 & **86.85** & **76.03** & **75.45** & **73.69** & **70.32** & 58.02 \\ \hline \hline \end{tabular} * \({}^{1}\) Resnet-34; \({}^{2}\) Resnet-50 pre-trained on ImageNet \end{table} Table 2: Comparison of prediction accuracy at various instance-dependent label noise rates for CIFAR-10 and CIFAR-100 with different network architectures, including ones pre-trained on ImageNet and self-supervised ones trained on the corresponding unlabelled datasets. The model trained on mini-WebVision is also evaluated on the clean validation set of ImageNet ILSVRC 2012. Finally, we evaluate the proposed method on the Animal-10N dataset that contains 5 pairs of confusing animals. **Models** We use PreAct Resnet-18 as the backbone to evaluate the proposed method on CIFAR-10, CIFAR-100 and Red CNWL datasets. Note that for CNWL, we preprocess the images by resizing from 84-by-84 pixel\({}^{2}\) to 32-by-32 pixel\({}^{2}\). For mini-WebVision, we download the small image version and further resize all images to 224-by-224 pixel\({}^{2}\) before passing the images into a Resnet-50. For Animal-10N, we keep the original image size of 64-by-64 pixel\({}^{2}\) and use VGG-19 as the backbone to obtain a fair comparison with existing baselines. **Hyper-parameters**: refer to Appendix B for hyper-parameters and further details of the experiments. ### Results Tab. 1 shows a comparison in terms of prediction accuracy for synthetic label noises between some common baselines and our proposed method. The results show that our proposed method is on par with those baselines on the CIFAR-10 dataset, while out-performing competing approaches on the CIFAR-100 dataset. We further evaluate our method by pre-training a PreAct Resnet-18 using the unlabelled data of each dataset with DINO [5], which is a self-supervised training method developed to initialise models with the goal of improving their performance in downstream tasks. Tab. 2 shows the results of several state-of-the-art methods with different network architectures pre-trained on ImageNet. In both datasets, our method demonstrates competitive performance compared to the state-of-the-art methods at small noise rates and slightly better performance at larger noise rates. For Red CNWL, we follow the experiment setup in [44] and show the results in Tab. 3 (_left_). The results in Tab. 3 (_left_) include baselines evaluated on small-size images (32-by-32 pixel\({}^{2}\)) for a fair comparison. In this benchmark, the proposed method consistently outperforms the state-of-the-art methods. We further evaluate the proposed method on other real-world label noise datasets, mini-WebVision and Animal-10N, and show the results in Tab. 3 (_middle_) and (_right_), respectively. For mini-WebVision, we use a Resnet-50 that is pre-trained on ImageNet for 100 epochs by the self-supervised learning method DINO [5] as an initialisation. For Animal-10N, we use DINO to pre-train a VGG-19 for 800 epochs and use the pre-trained parameters as an initialisation to train our model.
In general, the results show competitive performance compared to common baselines. ### Ablation studies We carry out additional studies to investigate the effect of the number of noisy labels per sample, the number of nearest neighbours and the effectiveness of the relabelling. Note that no self-supervised learning is used for pre-training the model to avoid potential confounding factors. We run experiments with the same setting on Red CNWL at 0.6 noise rate with various numbers of noisy labels per sample \(N\in\{3,20,100,199,400\}\) and plot the results in Fig. 2 (_left_), where \(L\) is the number of \(N\)-trial multinomial samples defined in Algorithm 1. \begin{table} \begin{tabular}{l c c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{3}{c}{**Noise rate**} \\ \cline{2-4} & **0.2** & **0.4** & **0.6** \\ \hline Cross-entropy & 47.36 & 42.70 & 37.30 \\ MixUp [48] & 49.10 & 46.40 & 40.58 \\ DivideMix [23] & 50.96 & 46.72 & 43.14 \\ MentorMix [18] & 51.02 & 47.14 & 43.80 \\ FaMUS [44] & 51.42 & 48.06 & 45.10 \\ **Ours** & **52.78** & **49.18** & **46.00** \\ \hline \hline \end{tabular} \end{table} Table 3: Accuracy evaluated on real-world datasets: _(left)_ Red CNWL, _(middle)_ mini-WebVision and ImageNet, and _(right)_ Animal-10N. Figure 2: Ablation studies on: _(left)_ the effect of the number of noisy labels per sample on Red CNWL at 0.6 noise rate, _(middle)_ and _(right)_ the accuracy of the relabelling and the influence of nearest neighbours on CIFAR-100. When \(L\) is small, the more noisy labels per sample, the more effective the method, and the effectiveness diminishes after the threshold of \(2C-1\), which in this case is 199. This empirically confirms the validity of Claim 1 about the identifiability in noisy label learning. However, when \(L\) is large, the performance difference when varying \(N\) is not as noticeable. In this regime (of large \(L\)), Claim 1 might result in a conservative requirement in terms of the number of noisy labels per sample. The current setting of noisy label learning might contain some common latent structure between samples, which we have not exploited yet to bring down the number of required noisy labels per sample. Future work will need to address this issue to make the problem more practical. We investigate the effectiveness of the proposed method by measuring the accuracy on the training set between the pseudo labels "cleaned" by EM and the ground truth labels. The results in Fig. 2 (_middle_) show that the proposed method improves the accuracy on the CIFAR-100 training set by about 16 to 24 percent compared to the original noisy one. This is equivalent to cleaning 33 to 67 percent of noisy labels. We also investigate the effect of the number of nearest neighbours \(K\) used to estimate the noisy label distribution of each sample and show the results evaluated on CIFAR-100 at 0.5 noise rate in Fig. 2 (_right_). In terms of test accuracy, the larger \(K\), the better. However, the trade-off is the running time, as shown in Fig. 2 (_right_). Since \(K=100\) gives a good balance between performance and running time, we use this value in all of our experiments in Sec. 5.1. ## 6 Conclusion This work has formally investigated the identifiability of noisy label learning through the lens of finite mixture models by formulating the label noise problem as a multinomial mixture model, where the mixture coefficient is represented by the clean label probability and the multinomial components are denoted by the columns in the transition matrix.
Under this modelling approach, we show that conventional noisy label learning with a single noisy label per instance is unidentifiable, even up to the permutation of labels. To make it identifiable, we impose a constraint from multinomial mixtures by requiring at least \(2C-1\) noisy labels per instance, where \(C\) is the number of classes. We then propose to employ a locality-constrained linear coding approach that relies on nearest neighbours to generate additional noisy labels per sample. Such an approach allows us to estimate the clean labels via the EM algorithm. Experimental results show that our proposed method is competitive with many existing state-of-the-art methods in several challenging benchmarks, especially under instance-dependent and real-world label noise.